Dataset schema (each record below lists its fields in this order):
  query_id            string, length 1–6
  query               string, length 2–185
  positive_passages   list of { docid, text, title }, 1–121 entries per record
  negative_passages   list of { docid, text, title }, 15–100 entries per record
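To make the record layout concrete, the short Python sketch below builds one record with the shape described above and iterates over its passages. It is illustrative only: the field names, docid prefixes, and example values are copied from the first record in this dump, while the truncated passage texts and the printing loop are assumptions, not part of the dataset.

```python
# Minimal sketch (not taken verbatim from the dump) of one record in this retrieval dataset.
# Field names and docid formats follow the rows below; passage texts are truncated here.
record = {
    "query_id": "1840013",
    "query": ("Interpretable Representation Learning for Healthcare via "
              "Capturing Disease Progression through Time"),
    "positive_passages": [
        {"docid": "pos:1840013_0",
         "text": "Gaining knowledge and actionable insights from complex ...",
         "title": ""},
    ],
    "negative_passages": [
        {"docid": "neg:1840013_0",
         "text": "A wireless massive MIMO system entails a large number ...",
         "title": ""},
    ],
}

# A record pairs one query with its relevant (positive) and non-relevant (negative) passages.
for passage in record["positive_passages"] + record["negative_passages"]:
    label = "POS" if passage["docid"].startswith("pos:") else "NEG"
    print(f"{label}  {passage['docid']}  {passage['text'][:50]}...")
```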
query_id: 1840013
query: Interpretable Representation Learning for Healthcare via Capturing Disease Progression through Time
[ { "docid": "pos:1840013_0", "text": "Gaining knowledge and actionable insights from complex, high-dimensional and heterogeneous biomedical data remains a key challenge in transforming health care. Various types of data have been emerging in modern biomedical research, including electronic health records, imaging, -omics, sensor data and text, which are complex, heterogeneous, poorly annotated and generally unstructured. Traditional data mining and statistical learning approaches typically need to first perform feature engineering to obtain effective and more robust features from those data, and then build prediction or clustering models on top of them. There are lots of challenges on both steps in a scenario of complicated data and lacking of sufficient domain knowledge. The latest advances in deep learning technologies provide new effective paradigms to obtain end-to-end learning models from complex data. In this article, we review the recent literature on applying deep learning technologies to advance the health care domain. Based on the analyzed work, we suggest that deep learning approaches could be the vehicle for translating big biomedical data into improved human health. However, we also note limitations and needs for improved methods development and applications, especially in terms of ease-of-understanding for domain experts and citizen scientists. We discuss such challenges and suggest developing holistic and meaningful interpretable architectures to bridge deep learning models and human interpretability.", "title": "" } ]
[ { "docid": "neg:1840013_0", "text": "A wireless massive MIMO system entails a large number (tens or hundreds) of base station antennas serving a much smaller number of users, with large gains in spectralefficiency and energy-efficiency compared with conventional MIMO technology. Until recently it was believed that in multicellular massive MIMO system, even in the asymptotic regime, as the number of service antennas tends to infinity, the performance is limited by directed inter-cellular interference. This interference results from unavoidable re-use of reverse-link training sequences (pilot contamination) by users in different cells. We devise a new concept that leads to the effective elimination of inter-cell interference in massive MIMO systems. This is achieved by outer multi-cellular precoding, which we call LargeScale Fading Precoding (LSFP). The main idea of LSFP is that each base station linearly combines messages aimed to users from different cells that re-use the same training sequence. Crucially, the combining coefficients depend only on the slowfading coefficients between the users and the base stations. Each base station independently transmits its LSFP-combined symbols using conventional linear precoding that is based on estimated fast-fading coefficients. Further, we derive estimates for downlink and uplink SINRs and capacity lower bounds for the case of massive MIMO systems with LSFP and a finite number of base station antennas.", "title": "" }, { "docid": "neg:1840013_1", "text": "This paper presents a framework for modeling the phase noise in complementary metal–oxide–semiconductor (CMOS) ring oscillators. The analysis considers both linear and nonlinear operations, and it includes both device noise and digital switching noise coupled through the power supply and substrate. In this paper, we show that fast rail-to-rail switching is required in order to achieve low phase noise. Further, flicker noise from the bias circuit can potentially dominate the phase noise at low offset frequencies. We define the effective factor for ring oscillators with large and nonlinear voltage swings and predict its increase for CMOS processes with smaller feature sizes. Our phase-noise analysis is validated via simulation and measurement results for ring oscillators fabricated in a number of CMOS processes.", "title": "" }, { "docid": "neg:1840013_2", "text": "Discovering significant types of relations from the web is challenging because of its open nature. Unsupervised algorithms are developed to extract relations from a corpus without knowing the relations in advance, but most of them rely on tagging arguments of predefined types. Recently, a new algorithm was proposed to jointly extract relations and their argument semantic classes, taking a set of relation instances extracted by an open IE algorithm as input. However, it cannot handle polysemy of relation phrases and fails to group many similar (“synonymous”) relation instances because of the sparseness of features. In this paper, we present a novel unsupervised algorithm that provides a more general treatment of the polysemy and synonymy problems. The algorithm incorporates various knowledge sources which we will show to be very effective for unsupervised extraction. Moreover, it explicitly disambiguates polysemous relation phrases and groups synonymous ones. While maintaining approximately the same precision, the algorithm achieves significant improvement on recall compared to the previous method. It is also very efficient. 
Experiments on a realworld dataset show that it can handle 14.7 million relation instances and extract a very large set of relations from the web.", "title": "" }, { "docid": "neg:1840013_3", "text": "Recently, many researchers have attempted to classify Facial Attributes (FAs) by representing characteristics of FAs such as attractiveness, age, smiling and so on. In this context, recent studies have demonstrated that visual FAs are a strong background for many applications such as face verification, face search and so on. However, Facial Attribute Classification (FAC) in a wide range of attributes based on the regression representation -predicting of FAs as real-valued labelsis still a significant challenge in computer vision and psychology. In this paper, a regression model formulation is proposed for FAC in a wide range of FAs (e.g. 73 FAs). The proposed method accommodates real-valued scores to the probability of what percentage of the given FAs is present in the input image. To this end, two simultaneous dictionary learning methods are proposed to learn the regression and identity feature dictionaries simultaneously. Accordingly, a multi-level feature extraction is proposed for FAC. Then, four regression classification methods are proposed using a regression model formulated based on dictionary learning, SRC and CRC. Convincing results are", "title": "" }, { "docid": "neg:1840013_4", "text": "findings All countries—developing and developed alike—find it difficult to stay competitive without inflows of foreign direct investment (FDI). FDI brings to host countries not only capital, productive facilities, and technology transfers, but also employment, new job skills and management expertise. These ingredients are particularly important in the case of Russia today, where the pressure for firms to compete with each other remains low. With blunted incentives to become efficient, due to interregional barriers to trade, weak exercise of creditor rights and administrative barriers to new entrants—including foreign invested firms—Russian enterprises are still in the early stages of restructuring. This paper argues that the policy regime governing FDI in the Russian Federation is still characterized by the old paradigm of FDI, established before the Second World War and seen all over the world during the 1950s and 1960s. In this paradigm there are essentially only two motivations for foreign direct investment: access to inputs for production, and access to markets for outputs. These kinds of FDI are useful, but often based either on exports that exploit cheap labor or natural resources, or else aimed at protected local markets and not necessarily at world standards for price and quality. The fact is that Russia is getting relatively small amounts of these types of FDI, and almost none of the newer, more efficient kind—characterized by state-of-the-art technology and world-class competitive production linked to dynamic global (or regional) markets. The paper notes that Russia should phase out the three core pillars of the current FDI policy regime-(i) all existing high tariffs and non-tariff protection for the domestic market; (ii) tax preferences for foreign investors (including those offered in Special Economic Zones), which bring few benefits (in terms of increased FDI) but engender costs (in terms of foregone fiscal revenue); and (iii) the substantial number of existing restrictions on FDI (make them applicable only to a limited number of sectors and activities). 
This set of reforms would allow Russia to switch to a modern approach towards FDI. The paper suggests the following specific policy recommendations: (i) amend the newly enacted FDI law so as to give \" national treatment \" for both right of establishment and for post-establishment operations; abolish conditions that are inconsistent with the agreement on trade-related investment measures (TRIMs) of the WTO (such as local content restrictions); and make investor-State dispute resolution mechanisms more efficient, including giving foreign investors the opportunity to …", "title": "" }, { "docid": "neg:1840013_5", "text": "With the emergence of the Internet of Things (IoT) and Big Data era, many applications are expected to assimilate a large amount of data collected from environment to extract useful information. However, how heterogeneous computing devices of IoT ecosystems can execute the data processing procedures has not been clearly explored. In this paper, we propose a framework which characterizes energy and performance requirements of the data processing applications across heterogeneous devices, from a server in the cloud and a resource-constrained gateway at edge. We focus on diverse machine learning algorithms which are key procedures for handling the large amount of IoT data. We build analytic models which automatically identify the relationship between requirements and data in a statistical way. The proposed framework also considers network communication cost and increasing processing demand. We evaluate the proposed framework on two heterogenous devices, a Raspberry Pi and a commercial Intel server. We show that the identified models can accurately estimate performance and energy requirements with less than error of 4.8% for both platforms. Based on the models, we also evaluate whether the resource-constrained gateway can process the data more efficiently than the server in the cloud. The results present that the less-powerful device can achieve better energy and performance efficiency for more than 50% of machine learning algorithms.", "title": "" }, { "docid": "neg:1840013_6", "text": "One of the most distinctive linguistic characteristics of modern academic writing is its reliance on nominalized structures. These include nouns that have been morphologically derived from verbs (e.g., development, progression) as well as verbs that have been ‘converted’ to nouns (e.g., increase, use). Almost any sentence taken from an academic research article will illustrate the use of such structures. For example, consider the opening sentences from three education research articles; derived nominalizations are underlined and converted nouns given in italics: 1", "title": "" }, { "docid": "neg:1840013_7", "text": "To date, the majority of ad hoc routing protocol research has been done using simulation only. One of the most motivating reasons to use simulation is the difficulty of creating a real implementation. In a simulator, the code is contained within a single logical component, which is clearly defined and accessible. On the other hand, creating an implementation requires use of a system with many components, including many that have little or no documentation. The implementation developer must understand not only the routing protocol, but all the system components and their complex interactions. Further, since ad hoc routing protocols are significantly different from traditional routing protocols, a new set of features must be introduced to support the routing protocol. 
In this paper we describe the event triggers required for AODV operation, the design possibilities and the decisions for our ad hoc on-demand distance vector (AODV) routing protocol implementation, AODV-UCSB. This paper is meant to aid researchers in developing their own on-demand ad hoc routing protocols and assist users in determining the implementation design that best fits their needs.", "title": "" }, { "docid": "neg:1840013_8", "text": "Handwritten character recognition is always a frontier area of research in the field of pattern recognition. There is a large demand for OCR on hand written documents in Image processing. Even though, sufficient studies have performed in foreign scripts like Arabic, Chinese and Japanese, only a very few work can be traced for handwritten character recognition mainly for the south Indian scripts. OCR system development for Indian script has many application areas like preserving manuscripts and ancient literatures written in different Indian scripts and making digital libraries for the documents. Feature extraction and classification are essential steps of character recognition process affecting the overall accuracy of the recognition system. This paper presents a brief overview of digital image processing techniques such as Feature Extraction, Image Restoration and Image Enhancement. A brief history of OCR and various approaches to character recognition is also discussed in this paper.", "title": "" }, { "docid": "neg:1840013_9", "text": "OBJECTIVE\nThe aim of the study was to evaluate the efficacy of topical 2% lidocaine gel in reducing pain and discomfort associated with nasogastric tube insertion (NGTI) and compare lidocaine to ordinary lubricant gel in the ease in carrying out the procedure.\n\n\nMETHODS\nThis prospective, randomized, double-blind, placebo-controlled, convenience sample trial was conducted in the emergency department of our tertiary care university-affiliated hospital. Five milliliters of 2% lidocaine gel or placebo lubricant gel were administered nasally to alert hemodynamically stable adult patients 5 minutes before undergoing a required NGTI. The main outcome measures were overall pain, nasal pain, discomfort (eg, choking, gagging, nausea, vomiting), and difficulty in performing the procedure. Standard comparative statistical analyses were used.\n\n\nRESULTS\nThe study cohort included 62 patients (65% males). Thirty-one patients were randomized to either lidocaine or placebo groups. Patients who received lidocaine reported significantly less intense overall pain associated with NGTI compared to those who received placebo (37 ± 28 mm vs 51 ± 26 mm on 100-mm visual analog scale; P < .05). The patients receiving lidocaine also had significantly reduced nasal pain (33 ± 29 mm vs 48 ± 27 mm; P < .05) and significantly reduced sensation of gagging (25 ± 30 mm vs 39 ± 24 mm; P < .05). However, conducting the procedure was significantly more difficult in the lidocaine group (2.1 ± 0.9 vs 1.4 ± 0.7 on 5-point Likert scale; P < .05).\n\n\nCONCLUSION\nLidocaine gel administered nasally 5 minutes before NGTI significantly reduces pain and gagging sensations associated with the procedure but is associated with more difficult tube insertion compared to the use of lubricant gel.", "title": "" }, { "docid": "neg:1840013_10", "text": "Ransomware is a type of malware that encrypts data or locks a device to extort a ransom. 
Recently, a variety of high-profile ransomware attacks have been reported, and many ransomware defense systems have been proposed. However, none specializes in resisting untargeted attacks such as those by remote desktop protocol (RDP) attack ransomware. To resolve this problem, this paper proposes a way to combat RDP ransomware attacks by trapping and tracing. It discovers and ensnares the attacker through a network deception environment and uses an auxiliary tracing technology to find the attacker, finally achieving the goal of deterring the ransomware attacker and countering the RDP attack ransomware. Based on cyber deception, an auxiliary ransomware traceable system called RansomTracer is introduced in this paper. RansomTracer collects clues about the attacker by deploying monitors in the deception environment. Then, it automatically extracts and analyzes the traceable clues. Experiments and evaluations show that RansomTracer ensnares the adversary in the deception environment and improves the efficiency of clue analysis significantly. In addition, it is able to recognize the clues that identify the attacker and the screening rate reaches 98.34%.", "title": "" }, { "docid": "neg:1840013_11", "text": "INTRODUCTION\nOtoplasty or correction of prominent ears, is one of most commonly performed surgeries in plastic surgery both in children and adults. Until nowadays, there have been more than 150 techniques described, but all with certain percentage of recurrence which varies from just a few up to 24.4%.\n\n\nOBJECTIVE\nThe authors present an otoplasty technique, a combination of Mustardé's original procedure with other techniques, which they have been using successfully in their everyday surgical practice for the last 9 years. The technique is based on posterior antihelical and conchal approach.\n\n\nMETHODS\nThe study included 102 patients (60 males and 42 females) operated on between 1999 and 2008. The age varied between 6 and 49 years. Each procedure was tailored to the aberrant anatomy which was analysed after examination. Indications and the operative procedure are described in step-by-step detail accompanied by drawings and photos taken during the surgery.\n\n\nRESULTS\nAll patients had bilateral ear deformity. In all cases was performed a posterior antihelical approach. The conchal reduction was done only when necessary and also through the same incision. The follow-up was from 1 to 5 years. There were no recurrent cases. A few minor complications were presented. Postoperative care, complications and advantages compared to other techniques are discussed extensively.\n\n\nCONCLUSION\nAll patients showed a high satisfaction rate with the final result and there was no necessity for further surgeries. The technique described in this paper is easy to reproduce even for young surgeons.", "title": "" }, { "docid": "neg:1840013_12", "text": "Privacy-preserving distributed machine learning has become more important than ever due to the high demand of large-scale data processing. This paper focuses on a class of machine learning problems that can be formulated as regularized empirical risk minimization, and develops a privacy-preserving learning approach to such problems. We use Alternating Direction Method of Multipliers (ADMM) to decentralize the learning algorithm, and apply Gaussian mechanisms to provide differential privacy guarantee. 
However, simply combining ADMM and local randomization mechanisms would result in a nonconvergent algorithm with poor performance even under moderate privacy guarantees. Besides, this intuitive approach requires a strong assumption that the objective functions of the learning problems should be differentiable and strongly convex. To address these concerns, we propose an improved ADMMbased Differentially Private distributed learning algorithm, DPADMM, where an approximate augmented Lagrangian function and Gaussian mechanisms with time-varying variance are utilized. We also apply the moments accountant method to bound the total privacy loss. Our theoretical analysis shows that DPADMM can be applied to a general class of convex learning problems, provides differential privacy guarantee, and achieves a convergence rate of O(1/ √ t), where t is the number of iterations. Our evaluations demonstrate that our approach can achieve good convergence and accuracy with moderate privacy guarantee.", "title": "" }, { "docid": "neg:1840013_13", "text": "Moving obstacle avoidance is a fundamental requirement for any robot operating in real environments, where pedestrians, bicycles and cars are present. In this work, we design and validate a new approach that takes explicitly into account obstacle velocities, to achieve safe visual navigation in outdoor scenarios. A wheeled vehicle, equipped with an actuated pinhole camera and with a lidar, must follow a path represented by key images, without colliding with the obstacles. To estimate the obstacle velocities, we design a Kalman-based observer. Then, we adapt the tentacles designed in [1], to take into account the predicted obstacle positions. Finally, we validate our approach in a series of simulated and real experiments, showing that when the obstacle velocities are considered, the robot behaviour is safer, smoother, and faster than when it is not.", "title": "" }, { "docid": "neg:1840013_14", "text": "Our research suggests that ML technologies will indeed grow more pervasive, but within job categories, what we define as the “suitability for machine learning” (SML) of work tasks varies greatly. We further propose that our SML rubric, illustrating the variability in task-level SML, can serve as an indicator for the potential reorganization of a job or an occupation because the set of tasks that form a job can be separated and re-bundled to redefine the job. Evaluating worker activities using our rubric, in fact, has the benefit of focusing on what ML can do instead of grouping all forms of automation together.", "title": "" }, { "docid": "neg:1840013_15", "text": "We introduce a new interactive system: a game that is fun and can be used to create valuable output. When people play the game they help determine the contents of images by providing meaningful labels for them. If the game is played as much as popular online games, we estimate that most images on the Web can be labeled in a few months. Having proper labels associated with each image on the Web would allow for more accurate image search, improve the accessibility of sites (by providing descriptions of images to visually impaired individuals), and help users block inappropriate images. Our system makes a significant contribution because of its valuable output and because of the way it addresses the image-labeling problem. 
Rather than using computer vision techniques, which don't work well enough, we encourage people to do the work by taking advantage of their desire to be entertained.", "title": "" }, { "docid": "neg:1840013_16", "text": "A robot is usually an electro-mechanical machine that is guided by computer and electronic programming. Many robots have been built for manufacturing purpose and can be found in factories around the world. Designing of the latest inverted ROBOT which can be controlling using an APP for android mobile. We are developing the remote buttons in the android app by which we can control the robot motion with them. And in which we use Bluetooth communication to interface controller and android. Controller can be interfaced to the Bluetooth module though UART protocol. According to commands received from android the robot motion can be controlled. The consistent output of a robotic system along with quality and repeatability are unmatched. Pick and Place robots can be reprogrammable and tooling can be interchanged to provide for multiple applications.", "title": "" }, { "docid": "neg:1840013_17", "text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.", "title": "" }, { "docid": "neg:1840013_18", "text": "RF-powered computers are small devices that compute and communicate using only the power that they harvest from RF signals. While existing technologies have harvested power from ambient RF sources (e.g., TV broadcasts), they require a dedicated gateway (like an RFID reader) for Internet connectivity. We present Wi-Fi Backscatter, a novel communication system that bridges RF-powered devices with the Internet. Specifically, we show that it is possible to reuse existing Wi-Fi infrastructure to provide Internet connectivity to RF-powered devices. To show Wi-Fi Backscatter's feasibility, we build a hardware prototype and demonstrate the first communication link between an RF-powered device and commodity Wi-Fi devices. We use off-the-shelf Wi-Fi devices including Intel Wi-Fi cards, Linksys Routers, and our organization's Wi-Fi infrastructure, and achieve communication rates of up to 1 kbps and ranges of up to 2.1 meters. We believe that this new capability can pave the way for the rapid deployment and adoption of RF-powered devices and achieve ubiquitous connectivity via nearby mobile devices that are Wi-Fi enabled.", "title": "" } ]
query_id: 1840014
query: Development of extensible open information extraction
[ { "docid": "pos:1840014_0", "text": "Open Information Extraction (IE) systems extract relational tuples from text, without requiring a pre-specified vocabulary, by identifying relation phrases and associated arguments in arbitrary sentences. However, stateof-the-art Open IE systems such as REVERB and WOE share two important weaknesses – (1) they extract only relations that are mediated by verbs, and (2) they ignore context, thus extracting tuples that are not asserted as factual. This paper presents OLLIE, a substantially improved Open IE system that addresses both these limitations. First, OLLIE achieves high yield by extracting relations mediated by nouns, adjectives, and more. Second, a context-analysis step increases precision by including contextual information from the sentence in the extractions. OLLIE obtains 2.7 times the area under precision-yield curve (AUC) compared to REVERB and 1.9 times the AUC of WOE.", "title": "" }, { "docid": "pos:1840014_1", "text": "Open information extraction (Open IE) systems aim to obtain relation tuples with highly scalable extraction in portable across domain by identifying a variety of relation phrases and their arguments in arbitrary sentences. The first generation of Open IE learns linear chain models based on unlexicalized features such as Part-of-Speech (POS) or shallow tags to label the intermediate words between pair of potential arguments for identifying extractable relations. Open IE currently is developed in the second generation that is able to extract instances of the most frequently observed relation types such as Verb, Noun and Prep, Verb and Prep, and Infinitive with deep linguistic analysis. They expose simple yet principled ways in which verbs express relationships in linguistics such as verb phrase-based extraction or clause-based extraction. They obtain a significantly higher performance over previous systems in the first generation. In this paper, we describe an overview of two Open IE generations including strengths, weaknesses and application areas.", "title": "" } ]
[ { "docid": "neg:1840014_0", "text": "In many regions of the visual system, the activity of a neuron is normalized by the activity of other neurons in the same region. Here we show that a similar normalization occurs during olfactory processing in the Drosophila antennal lobe. We exploit the orderly anatomy of this circuit to independently manipulate feedforward and lateral input to second-order projection neurons (PNs). Lateral inhibition increases the level of feedforward input needed to drive PNs to saturation, and this normalization scales with the total activity of the olfactory receptor neuron (ORN) population. Increasing total ORN activity also makes PN responses more transient. Strikingly, a model with just two variables (feedforward and total ORN activity) accurately predicts PN odor responses. Finally, we show that discrimination by a linear decoder is facilitated by two complementary transformations: the saturating transformation intrinsic to each processing channel boosts weak signals, while normalization helps equalize responses to different stimuli.", "title": "" }, { "docid": "neg:1840014_1", "text": "PatternAttribution is a recent method, introduced in the vision domain, that explains classifications of deep neural networks. We demonstrate that it also generates meaningful interpretations in the language domain.", "title": "" }, { "docid": "neg:1840014_2", "text": "The expectation-maximization (EM) method can facilitate maximizing likelihood functions that arise in statistical estimation problems. In the classical EM paradigm, one iteratively maximizes the conditional log-likelihood of a single unobservable complete data space, rather than maximizing the intractable likelihood function for the measured or incomplete data. EM algorithms update all parameters simultaneously, which has two drawbacks: 1) slow convergence, and 2) difficult maximization steps due to coupling when smoothness penalties are used. This paper describes the space-alternating generalized EM (SAGE) method, which updates the parameters sequentially by alternating between several small hidden-data spaces defined by the algorithm designer. We prove that the sequence of estimates monotonically increases the penalized-likelihood objective, we derive asymptotic convergence rates, and we provide sufficient conditions for monotone convergence in norm. Two signal processing applications illustrate the method: estimation of superimposed signals in Gaussian noise, and image reconstruction from Poisson measurements. In both applications, our SAGE algorithms easily accommodate smoothness penalties and converge faster than the EM algorithms.", "title": "" }, { "docid": "neg:1840014_3", "text": "This paper proposes a novel multi-view human action recognition method by discovering and sharing common knowledge among different video sets captured in multiple viewpoints. To our knowledge, we are the first to treat a specific view as target domain and the others as source domains and consequently formulate the multi-view action recognition into the cross-domain learning framework. First, the classic bag-of-visual word framework is implemented for visual feature extraction in individual viewpoints. Then, we propose a cross-domain learning method with block-wise weighted kernel function matrix to highlight the saliency components and consequently augment the discriminative ability of the model. Extensive experiments are implemented on IXMAS, the popular multi-view action dataset. 
The experimental results demonstrate that the proposed method can consistently outperform the state of the arts.", "title": "" }, { "docid": "neg:1840014_4", "text": "OBJECTIVE\nTo determine the reliability and internal validity of the Hypospadias Objective Penile Evaluation (HOPE)-score, a newly developed scoring system assessing the cosmetic outcome in hypospadias.\n\n\nPATIENTS AND METHODS\nThe HOPE scoring system incorporates all surgically-correctable items: position of meatus, shape of meatus, shape of glans, shape of penile skin and penile axis. Objectivity was established with standardized photographs, anonymously coded patients, independent assessment by a panel, standards for a \"normal\" penile appearance, reference pictures and assessment of the degree of abnormality. A panel of 13 pediatric urologists completed 2 questionnaires, each consisting of 45 series of photographs, at an interval of at least 1 week. The inter-observer reliability, intra-observer reliability and internal validity were analyzed.\n\n\nRESULTS\nThe correlation coefficients for the HOPE-score were as follows: intra-observer reliability 0.817, inter-observer reliability 0.790, \"non-parametric\" internal validity 0.849 and \"parametric\" internal validity 0.842. These values reflect good reproducibility, sufficient agreement among observers and a valid measurement of differences and similarities in cosmetic appearance.\n\n\nCONCLUSIONS\nThe HOPE-score is the first scoring system that fulfills the criteria of a valid measurement tool: objectivity, reliability and validity. These favorable properties support its use as an objective outcome measure of the cosmetic result after hypospadias surgery.", "title": "" }, { "docid": "neg:1840014_5", "text": "In the broad field of evaluation, the importance of stakeholders is often acknowledged and different categories of stakeholders are identified. Far less frequent is careful attention to analysis of stakeholders' interests, needs, concerns, power, priorities, and perspectives and subsequent application of that knowledge to the design of evaluations. This article is meant to help readers understand and apply stakeholder identification and analysis techniques in the design of credible evaluations that enhance primary intended use by primary intended users. While presented using a utilization-focused-evaluation (UFE) lens, the techniques are not UFE-dependent. The article presents a range of the most relevant techniques to identify and analyze evaluation stakeholders. The techniques are arranged according to their ability to inform the process of developing and implementing an evaluation design and of making use of the evaluation's findings.", "title": "" }, { "docid": "neg:1840014_6", "text": "Non-invasive cuff-less Blood Pressure (BP) estimation from Photoplethysmogram (PPG) is a well known challenge in the field of affordable healthcare. This paper presents a set of improvements over an existing method that estimates BP using 2-element Windkessel model from PPG signal. A noisy PPG corpus is collected using fingertip pulse oximeter, from two different locations in India. Exhaustive pre-processing techniques, such as filtering, baseline and topline correction are performed on the noisy PPG signals, followed by the selection of consistent cycles. Subsequently, the most relevant PPG features and demographic features are selected through Maximal Information Coefficient (MIC) score for learning the latent parameters controlling BP. 
Experimental results reveal that overall error in estimating BP lies within 10% of a commercially available digital BP monitoring device. Also, use of alternative latent parameters that incorporate the variation in cardiac output, shows a better trend following for abnormally low and high BP.", "title": "" }, { "docid": "neg:1840014_7", "text": "We present a novel class of actor-critic algorithms for actors consisting of sets of interacting modules. We present, analyze theoretically, and empirically evaluate an update rule for each module, which requires only local information: the module’s input, output, and the TD error broadcast by a critic. Such updates are necessary when computation of compatible features becomes prohibitively difficult and are also desirable to increase the biological plausibility of reinforcement learning methods.", "title": "" }, { "docid": "neg:1840014_8", "text": "In this paper, we proposed the multiclass support vector machine (SVM) with the error-correcting output codes for the multiclass electroencephalogram (EEG) signals classification problem. The probabilistic neural network (PNN) and multilayer perceptron neural network were also tested and benchmarked for their performance on the classification of the EEG signals. Decision making was performed in two stages: feature extraction by computing the wavelet coefficients and the Lyapunov exponents and classification using the classifiers trained on the extracted features. The purpose was to determine an optimum classification scheme for this problem and also to infer clues about the extracted features. Our research demonstrated that the wavelet coefficients and the Lyapunov exponents are the features which well represent the EEG signals and the multiclass SVM and PNN trained on these features achieved high classification accuracies", "title": "" }, { "docid": "neg:1840014_9", "text": "Visualization of dynamically changing networks (graphs) is a significant challenge for researchers. Previous work has experimentally compared animation, small multiples, and other techniques, and found trade-offs between these. One potential way to avoid such trade-offs is to combine previous techniques in a hybrid visualization. We present two taxonomies of visualizations of dynamic graphs: one of non-hybrid techniques, and one of hybrid techniques. We also describe a prototype, called DiffAni, that allows a graph to be visualized as a sequence of three kinds of tiles: diff tiles that show difference maps over some time interval, animation tiles that show the evolution of the graph over some time interval, and small multiple tiles that show the graph state at an individual time slice. This sequence of tiles is ordered by time and covers all time slices in the data. An experimental evaluation of DiffAni shows that our hybrid approach has advantages over non-hybrid techniques in certain cases.", "title": "" }, { "docid": "neg:1840014_10", "text": "A dual-band antenna is developed on a flexible Liquid Crystal Polymer (LCP) substrate for simultaneous operation at 2.45 and 5.8 GHz in high frequency Radio Frequency IDentification (RFID) systems. The response of the low profile double T-shaped slot antenna is preserved when the antenna is placed on platforms such as wood and cardboard, and when bent to conform to a cylindrical plastic box. 
Furthermore, experiments show that the antenna is still operational when placed at a distance of around 5cm from a metallic surface.", "title": "" }, { "docid": "neg:1840014_11", "text": "A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discrimininatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.", "title": "" }, { "docid": "neg:1840014_12", "text": "This study examined the anatomy of the infrapatellar fat pad (IFP) in relation to knee pathology and surgical approaches. Eight embalmed knees were dissected via semicircular parapatellar incisions and each IFP was examined. Their volume, shape and constituent features were recorded. They were found in all knees and were constant in shape, consisting of a central body with medial and lateral extensions. The ligamentum mucosum was found inferior to the central body in all eight knees, while a fat tag was located superior to the central body in seven cases. Two clefts were consistently found on the posterior aspect of the IFP, a horizontal cleft below the ligamentum mucosum in six knees and a vertical cleft above, in seven cases. Our study found that the IFP is a constant structure in the knee joint, which may play a number of roles in knee joint function and pathology. Its significance in knee surgery is discussed.", "title": "" }, { "docid": "neg:1840014_13", "text": "Knowledge has widely been acknowledged as one of the most important factors for corporate competitiveness, and we have witnessed an explosion of IS/IT solutions claiming to provide support for knowledge management (KM). A relevant question to ask, though, is how systems and technology intended for information such as the intranet can be able to assist in the managing of knowledge. To understand this, we must examine the relationship between information and knowledge. Building on Polanyi’s theories, I argue that all knowledge is tacit, and what can be articulated and made tangible outside the human mind is merely information. However, information and knowledge affect one another. By adopting a multi-perspective of the intranet where information, awareness, and communication are all considered, this interaction can best be supported and the intranet can become a useful and people-inclusive KM environment. 1. From philosophy to IT Ever since the ancient Greek period, philosophers have discussed what knowledge is. Early thinkers such as Plato and Aristotle where followed by Hobbes and Locke, Kant and Hegel, and into the 20th century by the likes of Wittgenstein, Popper, and Kuhn, to name but a few of the more prominent western philosophers. 
In recent years, we have witnessed a booming interest in knowledge also from other disciplines; organisation theorists, information system developers, and economists have all been swept away by the knowledge management avalanche. It seems, though, that the interest is particularly strong within the IS/IT community, where new opportunities to develop computer systems are welcomed. A plausible question to ask then is how knowledge relates to information technology (IT). Can IT at all be used to handle 0-7695-1435-9/02 $ knowledge, and if so, what sort of knowledge? What sorts of knowledge are there? What is knowledge? It seems we have little choice but to return to these eternal questions, but belonging to the IS/IT community, we should not approach knowledge from a philosophical perspective. As observed by Alavi and Leidner, the knowledge-based theory of the firm was never built on a universal truth of what knowledge really is but on a pragmatic interest in being able to manage organisational knowledge [2]. The discussion in this paper shall therefore be aimed at addressing knowledge from an IS/IT perspective, trying to answer two overarching questions: “What does the relationship between information and knowledge look like?” and “What role does an intranet have in this relationship?” The purpose is to critically review the contemporary KM literature in order to clarify the relationships between information and knowledge that commonly and implicitly are assumed within the IS/IT community. Epistemologically, this paper shall address the difference between tacit and explicit knowledge by accounting for some of the views more commonly found in the KM literature. Some of these views shall also be questioned, and the prevailing assump tion that tacit and explicit are two forms of knowledge shall be criticised by returning to Polanyi’s original work. My interest in the tacit side of knowledge, i.e. the aspects of knowledge that is omnipresent, taken for granted, and affecting our understanding without us being aware of it, has strongly influenced the content of this paper. Ontologywise, knowledge may be seen to exist on different levels, i.e. individual, group, organisation and inter-organisational [23]. Here, my primary interest is on the group and organisational levels. However, these two levels are obviously made up of individuals and we are thus bound to examine the personal aspects of knowledge as well, though be it from a macro perspective. 17.00 (c) 2002 IEEE 1 Proceedings of the 35th Hawaii International Conference on System Sciences 2002 2. Opposite traditions – and a middle way? When examining the knowledge literature, two separate tracks can be identified: the commodity view and the community view [35]. The commodity view of or the objective approach to knowledge as some absolute and universal truth has since long been the dominating view within science. Rooted in the positivism of the mid-19th century, the commodity view is still especially strong in the natural sciences. Disciples of this tradition understand knowledge as an artefact that can be handled in discrete units and that people may possess. Knowledge is a thing for which we can gain evidence, and knowledge as such is separated from the knower [33]. Metaphors such as drilling, mining, and harvesting are used to describe how knowledge is being managed. There is also another tradition that can be labelled the community view or the constructivist approach. 
This tradition can be traced back to Locke and Hume but is in its modern form rooted in the critique of the established quantitative approach to science that emerged primarily amongst social scientists during the 1960’s, and resulted in the publication of books by Garfinkel, Bourdieu, Habermas, Berger and Luckmann, and Glaser and Strauss. These authors argued that reality (and hence also knowledge) should be understood as socially constructed. According to this tradition, it is impossible to define knowledge universally; it can only be defined in practice, in the activities of and interactions between individuals. Thus, some understand knowledge to be universal and context-independent while others conceive it as situated and based on individual experiences. Maybe it is a little bit Author(s) Data Informa", "title": "" }, { "docid": "neg:1840014_14", "text": "An Optimal fuzzy logic guidance (OFLG) law for a surface to air homing missile is introduced. The introduced approach is based on the well-known proportional navigation guidance (PNG) law. Particle Swarm Optimization (PSO) is used to optimize the of the membership functions&apos; (MFs) parameters of the proposed design. The distribution of the MFs is obtained by minimizing a nonlinear constrained multi-objective optimization problem where; control effort and miss distance are treated as competing objectives. The performance of the introduced guidance law is compared with classical fuzzy logic guidance (FLG) law as well as PNG one. The simulation results show that OFLG performs better than other guidance laws. Moreover, the introduced design is shown to perform well with the existence of noisy measurements.", "title": "" }, { "docid": "neg:1840014_15", "text": "Since leadership plays a vital role in democratic movements, understanding the nature of democratic leadership is essential. However, the definition of democratic leadership is unclear (Gastil, 1994). Also, little research has defined democratic leadership in the context of democratic movements. The leadership literature has paid no attention to democratic leadership in such movements, focusing on democratic leadership within small groups and organizations. This study proposes a framework of democratic leadership in democratic movements. The framework includes contexts, motivations, characteristics, and outcomes of democratic leadership. The study considers sacrifice, courage, symbolism, citizen participation, and vision as major characteristics in the display of democratic leadership in various political, social, and cultural contexts. Applying the framework to Nelson Mandela, Lech Walesa, and Dae Jung Kim; the study considers them as exemplary models of democratic leadership in democratic movements for achieving democracy. They have showed crucial characteristics of democratic leadership, offering lessons for democratic governance.", "title": "" }, { "docid": "neg:1840014_16", "text": "We present a novel approach to probabilistic time series forecasting that combines state space models with deep learning. By parametrizing a per-time-series linear state space model with a jointly-learned recurrent neural network, our method retains desired properties of state space models such as data efficiency and interpretability, while making use of the ability to learn complex patterns from raw data offered by deep learning approaches. 
Our method scales gracefully from regimes where little training data is available to regimes where data from large collection of time series can be leveraged to learn accurate models. We provide qualitative as well as quantitative results with the proposed method, showing that it compares favorably to the state-of-the-art.", "title": "" }, { "docid": "neg:1840014_17", "text": "Fractals have been very successful in quantifying the visual complexity exhibited by many natural patterns, and have captured the imagination of scientists and artists alike. Our research has shown that the poured patterns of the American abstract painter Jackson Pollock are also fractal. This discovery raises an intriguing possibility - are the visual characteristics of fractals responsible for the long-term appeal of Pollock's work? To address this question, we have conducted 10 years of scientific investigation of human response to fractals and here we present, for the first time, a review of this research that examines the inter-relationship between the various results. The investigations include eye tracking, visual preference, skin conductance, and EEG measurement techniques. We discuss the artistic implications of the positive perceptual and physiological responses to fractal patterns.", "title": "" }, { "docid": "neg:1840014_18", "text": "This paper presents Platener, a system that allows quickly fabricating intermediate design iterations of 3D models, a process also known as low-fidelity fabrication. Platener achieves its speed-up by extracting straight and curved plates from the 3D model and substituting them with laser cut parts of the same size and thickness. Only the regions that are of relevance to the current design iteration are executed as full-detail 3D prints. Platener connects the parts it has created by automatically inserting joints. To help fast assembly it engraves instructions. Platener allows users to customize substitution results by (1) specifying fidelity-speed tradeoffs, (2) choosing whether or not to convert curved surfaces to plates bent using heat, and (3) specifying the conversion of individual plates and joints interactively. Platener is designed to best preserve the fidelity of func-tional objects, such as casings and mechanical tools, all of which contain a large percentage of straight/rectilinear elements. Compared to other low-fab systems, such as faBrickator and WirePrint, Platener better preserves the stability and functionality of such objects: the resulting assemblies have fewer parts and the parts have the same size and thickness as in the 3D model. To validate our system, we converted 2.250 3D models downloaded from a 3D model site (Thingiverse). Platener achieves a speed-up of 10 or more for 39.5% of all objects.", "title": "" } ]
query_id: 1840015
query: Performance Modeling and Evaluation of Distributed Deep Learning Frameworks on GPUs
[ { "docid": "pos:1840015_0", "text": "Large-scale deep learning requires huge computational resources to train a multi-layer neural network. Recent systems propose using 100s to 1000s of machines to train networks with tens of layers and billions of connections. While the computation involved can be done more efficiently on GPUs than on more traditional CPU cores, training such networks on a single GPU is too slow and training on distributed GPUs can be inefficient, due to data movement overheads, GPU stalls, and limited GPU memory. This paper describes a new parameter server, called GeePS, that supports scalable deep learning across GPUs distributed among multiple machines, overcoming these obstacles. We show that GeePS enables a state-of-the-art single-node GPU implementation to scale well, such as to 13 times the number of training images processed per second on 16 machines (relative to the original optimized single-node code). Moreover, GeePS achieves a higher training throughput with just four GPU machines than that a state-of-the-art CPU-only system achieves with 108 machines.", "title": "" } ]
[ { "docid": "neg:1840015_0", "text": "The Raven's Progressive Matrices (RPM) test is a commonly used test of intelligence. The literature suggests a variety of problem-solving methods for addressing RPM problems. For a graduate-level artificial intelligence class in Fall 2014, we asked students to develop intelligent agents that could address 123 RPM-inspired problems, essentially crowdsourcing RPM problem solving. The students in the class submitted 224 agents that used a wide variety of problem-solving methods. In this paper, we first report on the aggregate results of those 224 agents on the 123 problems, then focus specifically on four of the most creative, novel, and effective agents in the class. We find that the four agents, using four very different problem-solving methods, were all able to achieve significant success. This suggests the RPM test may be amenable to a wider range of problem-solving methods than previously reported. It also suggests that human computation might be an effective strategy for collecting a wide variety of methods for creative tasks.", "title": "" }, { "docid": "neg:1840015_1", "text": "With the advancement in digitalization vast amount of Image data is uploaded and used via Internet in today’s world. With this revolution in uses of multimedia data, key problem in the area of Image processing, Computer vision and big data analytics is how to analyze, effectively process and extract useful information from such data. Traditional tactics to process such a data are extremely time and resource intensive. Studies recommend that parallel and distributed computing techniques have much more potential to process such data in efficient manner. To process such a complex task in efficient manner advancement in GPU based processing is also a candidate solution. This paper we introduce Hadoop-Mapreduce (Distributed system) and CUDA (Parallel system) based image processing. In our experiment using satellite images of different dimension we had compared performance or execution speed of canny edge detection algorithm. Performance is compared for CPU and GPU based Time Complexity.", "title": "" }, { "docid": "neg:1840015_2", "text": "This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined: unlabeled layouts that contain multiple groups of items but no group headings, labeled layouts in which items are grouped and each group has a useful heading, and a target-only layout that contains just one item. A number of plausible strategies were proposed for each layout. Each strategy was programmed into the EPIC cognitive architecture, producing models that simulate the human visual-perceptual, oculomotor, and cognitive processing required for the task. The models generate search time predictions. For unlabeled layouts, the mean layout search times are predicted by a purely random search strategy, and the more detailed positional search times are predicted by a noisy systematic strategy. The labeled layout search times are predicted by a hierarchical strategy in which first the group labels are systematically searched, and then the contents of the target group. The target-only layout search times are predicted by a strategy in which the eyes move directly to the sudden appearance of the target. The models demonstrate that human visual search performance can be explained largely in terms of the cognitive strategy HUMAN–COMPUTER INTERACTION, 2004, Volume 19, pp. 
183–223 Copyright © 2004, Lawrence Erlbaum Associates, Inc. Anthony Hornof is a computer scientist with interests in human–computer interaction, cognitive modeling, visual search, and eye tracking; he is an Assistant Professor in the Department of Computer and Information Science at the University of Oregon. that is used to coordinate the relevant perceptual and motor processes, a clear and useful visual hierarchy triggers a fundamentally different visual search strategy and effectively gives the user greater control over the visual navigation, and cognitive strategies will be an important component of a predictive visual search tool. The models provide insights pertaining to the visual-perceptual and oculomotor processes involved in visual search and contribute to the science base needed for predictive interface analysis. 184 HORNOF", "title": "" }, { "docid": "neg:1840015_3", "text": "Patients with carcinoma of the tongue including the base of the tongue who underwent total glossectomy in a period of just over ten years since January 1979 have been reviewed. Total glossectomy may be indicated as salvage surgery or as a primary procedure. The larynx may be preserved or may have to be sacrificed depending upon the site of the lesion. When the larynx is preserved the use of laryngeal suspension facilitates early rehabilitation and preserves the quality of life to a large extent. Cricopharyngeal myotomy seems unnecessary.", "title": "" }, { "docid": "neg:1840015_4", "text": "Issues concerning agriculture, countryside and farmers have been always hindering China’s development. The only solution to these three problems is agricultural modernization. However, China's agriculture is far from modernized. The introduction of cloud computing and internet of things into agricultural modernization will probably solve the problem. Based on major features of cloud computing and key techniques of internet of things, cloud computing, visualization and SOA technologies can build massive data involved in agricultural production. Internet of things and RFID technologies can help build plant factory and realize automatic control production of agriculture. Cloud computing is closely related to internet of things. A perfect combination of them can promote fast development of agricultural modernization, realize smart agriculture and effectively solve the issues concerning agriculture, countryside and farmers.", "title": "" }, { "docid": "neg:1840015_5", "text": "Online learning represents a family of machine learning methods, where a learner attempts to tackle some predictive (or any type of decision-making) task by learning from a sequence of data instances one by one at each time. The goal of online learning is to maximize the accuracy/correctness for the sequence of predictions/decisions made by the online learner given the knowledge of correct answers to previous prediction/learning tasks and possibly additional information. This is in contrast to traditional batch or offline machine learning methods that are often designed to learn a model from the entire training data set at once. Online learning has become a promising technique for learning from continuous streams of data in many real-world applications. This survey aims to provide a comprehensive survey of the online machine learning literature through a systematic review of basic ideas and key principles and a proper categorization of different algorithms and techniques. 
Generally speaking, according to the types of learning tasks and the forms of feedback information, the existing online learning works can be classified into three major categories: (i) online supervised learning where full feedback information is always available, (ii) online learning with limited feedback, and (iii) online unsupervised learning where no feedback is available. Due to space limitation, the survey will be mainly focused on the first category, but also briefly cover some basics of the other two categories. Finally, we also discuss some open issues and attempt to shed light on potential future research directions in this field.", "title": "" }, { "docid": "neg:1840015_6", "text": "We present pigeo, a Python geolocation prediction tool that predicts a location for a given text input or Twitter user. We discuss the design, implementation and application of pigeo, and empirically evaluate it. pigeo is able to geolocate informal text and is a very useful tool for users who require a free and easy-to-use, yet accurate geolocation service based on pre-trained models. Additionally, users can train their own models easily using pigeo’s API.", "title": "" }, { "docid": "neg:1840015_7", "text": "Research on ontology is becoming increasingly widespread in the computer science community, and its importance is being recognized in a multiplicity of research fields and application areas, including knowledge engineering, database design and integration, information retrieval and extraction. We shall use the generic term “information systems”, in its broadest sense, to collectively refer to these application perspectives. We argue in this paper that so-called ontologies present their own methodological and architectural peculiarities: on the methodological side, their main peculiarity is the adoption of a highly interdisciplinary approach, while on the architectural side the most interesting aspect is the centrality of the role they can play in an information system, leading to the perspective of ontology-driven information systems.", "title": "" }, { "docid": "neg:1840015_8", "text": "Bitcoin, a distributed, cryptographic, digital currency, gained a lot of media attention for being an anonymous e-cash system. But as all transactions in the network are stored publicly in the blockchain, allowing anyone to inspect and analyze them, the system does not provide real anonymity but pseudonymity. There have already been studies showing the possibility to deanonymize bitcoin users based on the transaction graph and publicly available data. Furthermore, users could be tracked by bitcoin exchanges or shops, where they have to provide personal information that can then be linked to their bitcoin addresses. Special bitcoin mixing services claim to obfuscate the origin of transactions and thereby increase the anonymity of its users. In this paper we evaluate three of these services – Bitcoin Fog, BitLaundry, and the Send Shared functionality of Blockchain.info – by analyzing the transaction graph. While Bitcoin Fog and Blockchain.info successfully mix our transaction, we are able to find a direct relation between the input and output transactions in the graph of BitLaundry.", "title": "" }, { "docid": "neg:1840015_9", "text": "Semantic mapping is the incremental process of “mapping” relevant information of the world (i.e., spatial information, temporal events, agents and actions) to a formal description supported by a reasoning engine. 
Current research focuses on learning the semantic of environments based on their spatial location, geometry and appearance. Many methods to tackle this problem have been proposed, but the lack of a uniform representation, as well as standard benchmarking suites, prevents their direct comparison. In this paper, we propose a standardization in the representation of semantic maps, by defining an easily extensible formalism to be used on top of metric maps of the environments. Based on this, we describe the procedure to build a dataset (based on real sensor data) for benchmarking semantic mapping techniques, also hypothesizing some possible evaluation metrics. Nevertheless, by providing a tool for the construction of a semantic map ground truth, we aim at the contribution of the scientific community in acquiring data for populating the dataset.", "title": "" }, { "docid": "neg:1840015_10", "text": "According to Bayesian theories in psychology and neuroscience, minds and brains are (near) optimal in solving a wide range of tasks. We challenge this view and argue that more traditional, non-Bayesian approaches are more promising. We make 3 main arguments. First, we show that the empirical evidence for Bayesian theories in psychology is weak. This weakness relates to the many arbitrary ways that priors, likelihoods, and utility functions can be altered in order to account for the data that are obtained, making the models unfalsifiable. It further relates to the fact that Bayesian theories are rarely better at predicting data compared with alternative (and simpler) non-Bayesian theories. Second, we show that the empirical evidence for Bayesian theories in neuroscience is weaker still. There are impressive mathematical analyses showing how populations of neurons could compute in a Bayesian manner but little or no evidence that they do. Third, we challenge the general scientific approach that characterizes Bayesian theorizing in cognitive science. A common premise is that theories in psychology should largely be constrained by a rational analysis of what the mind ought to do. We question this claim and argue that many of the important constraints come from biological, evolutionary, and processing (algorithmic) considerations that have no adaptive relevance to the problem per se. In our view, these factors have contributed to the development of many Bayesian \"just so\" stories in psychology and neuroscience; that is, mathematical analyses of cognition that can be used to explain almost any behavior as optimal.", "title": "" }, { "docid": "neg:1840015_11", "text": "Fraud detection is an industry where incremental gains in predictive accuracy can have large benefits for banks and customers. Banks adapt models to the novel ways in which “fraudsters” commit credit card fraud. They collect data and engineer new features in order to increase predictive power. This research compares the algorithmic impact on the predictive power across three supervised classification models: logistic regression, gradient boosted trees, and deep learning. This research also explores the benefits of creating features using domain expertise and feature engineering using an autoencoder—an unsupervised feature engineering method. These two methods of feature engineering combined with the direct mapping of the original variables create six different feature sets. Across these feature sets this research compares the aforementioned models. 
This research concludes that creating features using domain expertise offers a notable improvement in predictive power. Additionally, the autoencoder offers a way to reduce the dimensionality of the data and slightly boost predictive power.", "title": "" }, { "docid": "neg:1840015_12", "text": "Producing literature reviews of complex evidence for policymaking questions is a challenging methodological area. There are several established and emerging approaches to such reviews, but unanswered questions remain, especially around how to begin to make sense of large data sets drawn from heterogeneous sources. Drawing on Kuhn's notion of scientific paradigms, we developed a new method-meta-narrative review-for sorting and interpreting the 1024 sources identified in our exploratory searches. We took as our initial unit of analysis the unfolding 'storyline' of a research tradition over time. We mapped these storylines by using both electronic and manual tracking to trace the influence of seminal theoretical and empirical work on subsequent research within a tradition. We then drew variously on the different storylines to build up a rich picture of our field of study. We identified 13 key meta-narratives from literatures as disparate as rural sociology, clinical epidemiology, marketing and organisational studies. Researchers in different traditions had conceptualised, explained and investigated diffusion of innovations differently and had used different criteria for judging the quality of empirical work. Moreover, they told very different over-arching stories of the progress of their research. Within each tradition, accounts of research depicted human characters emplotted in a story of (in the early stages) pioneering endeavour and (later) systematic puzzle-solving, variously embellished with scientific dramas, surprises and 'twists in the plot'. By first separating out, and then drawing together, these different meta-narratives, we produced a synthesis that embraced the many complexities and ambiguities of 'diffusion of innovations' in an organisational setting. We were able to make sense of seemingly contradictory data by systematically exposing and exploring tensions between research paradigms as set out in their over-arching storylines. In some traditions, scientific revolutions were identifiable in which breakaway researchers had abandoned the prevailing paradigm and introduced a new set of concepts, theories and empirical methods. We concluded that meta-narrative review adds value to the synthesis of heterogeneous bodies of literature, in which different groups of scientists have conceptualised and investigated the 'same' problem in different ways and produced seemingly contradictory findings. Its contribution to the mixed economy of methods for the systematic review of complex evidence should be explored further.", "title": "" }, { "docid": "neg:1840015_13", "text": "When speakers talk, they gesture. The goal of this review is to investigate the contribution that these gestures make to how we communicate and think. Gesture can play a role in communication and thought at many timespans. We explore, in turn, gesture's contribution to how language is produced and understood in the moment; its contribution to how we learn language and other cognitive skills; and its contribution to how language is created over generations, over childhood, and on the spot. We find that the gestures speakers produce when they talk are integral to communication and can be harnessed in a number of ways. 
(a) Gesture reflects speakers' thoughts, often their unspoken thoughts, and thus can serve as a window onto cognition. Encouraging speakers to gesture can thus provide another route for teachers, clinicians, interviewers, etc., to better understand their communication partners. (b) Gesture can change speakers' thoughts. Encouraging gesture thus has the potential to change how students, patients, witnesses, etc., think about a problem and, as a result, alter the course of learning, therapy, or an interchange. (c) Gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation. Our hands are with us at all times and thus provide researchers and learners with an ever-present tool for understanding how we talk and think.", "title": "" }, { "docid": "neg:1840015_14", "text": "A general closed-form subharmonic stability condition is derived for the buck converter with ripple-based constant on-time control and a feedback filter. The turn-on delay is included in the analysis. Three types of filters are considered: low-pass filter (LPF), phase-boost filter (PBF), and inductor current feedback (ICF) which changes the feedback loop frequency response like a filter. With the LPF, the stability region is reduced. With the PBF or ICF, the stability region is enlarged. Stability conditions are determined both for the case of a single output capacitor and for the case of two parallel-connected output capacitors having widely different time constants. The past research results related to the feedback filters become special cases. All theoretical predictions are verified by experiments.", "title": "" }, { "docid": "neg:1840015_15", "text": "In vector space model (VSM), text representation is the task of transforming the content of a textual document into a vector in the term space so that the document could be recognized and classified by a computer or a classifier. Different terms (i.e. words, phrases, or any other indexing units used to identify the contents of a text) have different importance in a text. The term weighting methods assign appropriate weights to the terms to improve the performance of text categorization. In this study, we investigate several widely-used unsupervised (traditional) and supervised term weighting methods on benchmark data collections in combination with SVM and kNN algorithms. In consideration of the distribution of relevant documents in the collection, we propose a new simple supervised term weighting method, i.e. tf.rf, to improve the terms' discriminating power for text categorization task. From the controlled experimental results, these supervised term weighting methods have mixed performance. Specifically, our proposed supervised term weighting method, tf.rf, has a consistently better performance than other term weighting methods while other supervised term weighting methods based on information theory or statistical metric perform the worst in all experiments. On the other hand, the popularly used tf.idf method has not shown a uniformly good performance in terms of different data sets.", "title": "" }, { "docid": "neg:1840015_16", "text": "Power law distributions are an increasingly common model for computer science applications; for example, they have been used to describe file size distributions and inand out-degree distributions for the Web and Internet graphs. 
Recently, the similar lognormal distribution has also been suggested as an appropriate alternative model for file size distributions. In this paper, we briefly survey some of the history of these distributions, focusing on work in other fields. We find that several recently proposed models have antecedents in work from decades ago. We also find that lognormal and power law distributions connect quite naturally, and hence it is not surprising that lognormal distributions arise as a possible alternative to power law distributions.", "title": "" }, { "docid": "neg:1840015_17", "text": "The future of user interfaces will be dominated by hand gestures. In this paper, we explore an intuitive hand gesture based interaction for smartphones having a limited computational capability. To this end, we present an efficient algorithm for gesture recognition with First Person View (FPV), which focuses on recognizing a four swipe model (Left, Right, Up and Down) for smartphones through single monocular camera vision. This can be used with frugal AR/VR devices such as Google Cardboard1 andWearality2 in building AR/VR based automation systems for large scale deployments, by providing a touch-less interface and real-time performance. We take into account multiple cues including palm color, hand contour segmentation, and motion tracking, which effectively deals with FPV constraints put forward by a wearable. We also provide comparisons of swipe detection with the existing methods under the same limitations. We demonstrate that our method outperforms both in terms of gesture recognition accuracy and computational time.", "title": "" }, { "docid": "neg:1840015_18", "text": "We consider the problem of controlling a system with unknown, stochastic dynamics to achieve a complex, time-sensitive task. An example of this problem is controlling a noisy aerial vehicle with partially known dynamics to visit a pre-specified set of regions in any order while avoiding hazardous areas. In particular, we are interested in tasks which can be described by signal temporal logic (STL) specifications. STL is a rich logic that can be used to describe tasks involving bounds on physical parameters, continuous time bounds, and logical relationships over time and states. STL is equipped with a continuous measure called the robustness degree that measures how strongly a given sample path exhibits an STL property [4, 3]. This measure enables the use of continuous optimization problems to solve learning [7, 6] or formal synthesis problems [9] involving STL.", "title": "" }, { "docid": "neg:1840015_19", "text": "This paper reports a qualitative study of thriving older people and illustrates the findings with design fiction. Design research has been criticized as \"solutionist\" i.e. solving problems that don't exist or providing \"quick fixes\" for complex social, political and environmental problems. We respond to this critique by presenting a \"solutionist\" board game used to generate design concepts. Players are given data cards and technology dice, they move around the board by pitching concepts that would support positive aging. We argue that framing concept design as a solutionist game explicitly foregrounds play, irony and the limitations of technological intervention. Three of the game concepts are presented as design fictions in the form of advertisements for products and services that do not exist. The paper argues that design fiction can help create a space for design beyond solutionism.", "title": "" } ]
1840016
Neural Cryptanalysis of Classical Ciphers
[ { "docid": "pos:1840016_0", "text": "Recurrent neural networks (RNNs) represent the state of the art in translation, image captioning, and speech recognition. They are also capable of learning algorithmic tasks such as long addition, copying, and sorting from a set of training examples. We demonstrate that RNNs can learn decryption algorithms – the mappings from plaintext to ciphertext – for three polyalphabetic ciphers (Vigenere, Autokey, and Enigma). Most notably, we demonstrate that an RNN with a 3000-unit Long Short-Term Memory (LSTM) cell can learn the decryption function of the Enigma machine. We argue that our model learns efficient internal representations of these ciphers 1) by exploring activations of individual memory neurons and 2) by comparing memory usage across the three ciphers. To be clear, our work is not aimed at ’cracking’ the Enigma cipher. However, we do show that our model can perform elementary cryptanalysis by running known-plaintext attacks on the Vigenere and Autokey ciphers. Our results indicate that RNNs can learn algorithmic representations of black box polyalphabetic ciphers and that these representations are useful for cryptanalysis.", "title": "" }, { "docid": "pos:1840016_1", "text": "Template attack is the most common and powerful profiled side channel attack. It relies on a realistic assumption regarding the noise of the device under attack: the probability density function of the data is a multivariate Gaussian distribution. To relax this assumption, a recent line of research has investigated new profiling approaches mainly by applying machine learning techniques. The obtained results are commensurate, and in some particular cases better, compared to template attack. In this work, we propose to continue this recent line of research by applying more sophisticated profiling techniques based on deep learning. Our experimental results confirm the overwhelming advantages of the resulting new attacks when targeting both unprotected and protected cryptographic implementations.", "title": "" } ]
[ { "docid": "neg:1840016_0", "text": "We compare variations of string comparators based on the Jaro-Winkler comparator and edit distance comparator. We apply the comparators to Census data to see which are better classifiers for matches and nonmatches, first by comparing their classification abilities using a ROC curve based analysis, then by considering a direct comparison between two candidate comparators in record linkage results.", "title": "" }, { "docid": "neg:1840016_1", "text": "A mobile wireless infrastructure-less network is a collection of wireless mobile nodes dynamically forming a temporary network without the use of any preexisting network infrastructure or centralized administration. However, the battery life of these nodes is very limited, if their battery power is depleted fully, then this result in network partition, so these nodes becomes a critical spot in the network. These critical nodes can deplete their battery power earlier because of excessive load and processing for data forwarding. These unbalanced loads turn to increase the chances of nodes failure, network partition and reduce the route lifetime and route reliability of the MANETs. Due to this, energy consumption issue becomes a vital research topic in wireless infrastructure -less networks. The energy efficient routing is a most important design criterion for MANETs. This paper focuses of the routing approaches are based on the minimization of energy consum ption of individual nodes and many other ways. This paper surveys and classifies numerous energy-efficient routing mechanisms proposed for wireless infrastructure-less networks. Also presents detailed comparative study of lager number of energy efficient/power aware routing protocol in MANETs. Aim of this paper to helps the new researchers and application developers to explore an innovative idea for designing more efficient routing protocols. Keywords— Ad hoc Network Routing, Load Distribution, Energy Eff icient, Power Aware, Protocol Stack", "title": "" }, { "docid": "neg:1840016_2", "text": "We describe an algorithm for generating spherical mosaics from a collection of images acquired from a common optical center. The algorithm takes as input an arbitrary number of partially overlapping images, an adjacency map relating the images, initial estimates of the rotations relating each image to a specified base image, and approximate internal calibration information for the camera. The algorithm's output is a rotation relating each image to the base image, and revised estimates of the camera's internal parameters. Our algorithm is novel in the following respects. First, it requires no user input. (Our image capture instrumentation provides both an adjacency map for the mosaic, and an initial rotation estimate for each image.) Second, it optimizes an objective function based on a global correlation of overlapping image regions. Third, our representation of rotations significantly increases the accuracy of the optimization. Finally, our representation and use of adjacency information guarantees globally consistent rotation estimates. The algorithm has proved effective on a collection of nearly four thousand images acquired from more than eighty distinct optical centers. 
The experimental results demonstrate that the described global optimization strategy is superior to non-global aggregation of pair-wise correlation terms, and that it successfully generates high-quality mosaics despite significant error in initial rotation estimates.", "title": "" }, { "docid": "neg:1840016_3", "text": "An educational use of Pepper, a personal robot that was developed by SoftBank Robotics Corp. and Aldebaran Robotics SAS, is described. Applying the two concepts of care-receiving robot (CRR) and total physical response (TPR) into the design of an educational application using Pepper, we offer a scenario in which children learn together with Pepper at their home environments from a human teacher who gives a lesson from a remote classroom. This paper is a case report that explains the developmental process of the application that contains three educational programs that children can select in interacting with Pepper. Feedbacks and knowledge obtained from test trials are also described.", "title": "" }, { "docid": "neg:1840016_4", "text": "This is the second of two papers describing a procedure for the three dimensional nonlinear timehistory analysis of steel framed buildings. An overview of the procedure and the theory for the panel zone element and the plastic hinge beam element are presented in Part I. In this paper, the theory for an efficient new element for modeling beams and columns in steel frames called the elastofiber element is presented, along with four illustrative examples. The elastofiber beam element is divided into three segments two end nonlinear segments and an interior elastic segment. The cross-sections of the end segments are subdivided into fibers. Associated with each fiber is a nonlinear hysteretic stress-strain law for axial stress and strain. This accounts for coupling of nonlinear material behavior between bending about the major and minor axes of the cross-section and axial deformation. Examples presented include large deflection of an elastic cantilever, cyclic loading of a cantilever beam, pushover analysis of a 20-story steel moment-frame building to collapse, and strong ground motion analysis of a 2-story unsymmetric steel moment-frame building.", "title": "" }, { "docid": "neg:1840016_5", "text": "A compact dual-polarized double E-shaped patch antenna with high isolation for pico base station applications is presented in this communication. The proposed antenna employs a stacked configuration composed of two layers of substrate. Two modified E-shaped patches are printed orthogonally on both sides of the upper substrate. Two probes are used to excite the E-shaped patches, and each probe is connected to one patch separately. A circular patch is printed on the lower substrate to broaden the impedance bandwidth. Both simulated and measured results show that the proposed antenna has a port isolation higher than 30 dB over the frequency band of 2.5 GHz - 2.7 GHz, while the return loss is less than - 15 dB within the band. 
Moreover, stable radiation pattern with a peak gain of 6.8 dBi - 7.4 dBi is obtained within the band.", "title": "" }, { "docid": "neg:1840016_6", "text": "BACKGROUND\nWork-family conflict is a type of interrole conflict that occurs as a result of incompatible role pressures from the work and family domains. Work role characteristics that are associated with work demands refer to pressures arising from excessive workload and time pressures. Literature suggests that work demands such as number of hours worked, workload, shift work are positively associated with work-family conflict, which, in turn is related to poor mental health and negative organizational attitudes. The role of social support has been an issue of debate in the literature. This study examined social support both as a moderator and a main effect in the relationship among work demands, work-to-family conflict, and satisfaction with job and life.\n\n\nOBJECTIVES\nThis study examined the extent to which work demands (i.e., work overload, irregular work schedules, long hours of work, and overtime work) were related to work-to-family conflict as well as life and job satisfaction of nurses in Turkey. The role of supervisory support in the relationship among work demands, work-to-family conflict, and satisfaction with job and life was also investigated.\n\n\nDESIGN AND METHODS\nThe sample was comprised of 243 participants: 106 academic nurses (43.6%) and 137 clinical nurses (56.4%). All of the respondents were female. The research instrument was a questionnaire comprising nine parts. The variables were measured under four categories: work demands, work support (i.e., supervisory support), work-to-family conflict and its outcomes (i.e., life and job satisfaction).\n\n\nRESULTS\nThe structural equation modeling results showed that work overload and irregular work schedules were the significant predictors of work-to-family conflict and that work-to-family conflict was associated with lower job and life satisfaction. Moderated multiple regression analyses showed that social support from the supervisor did not moderate the relationships among work demands, work-to-family conflict, and satisfaction with job and life. Exploratory analyses suggested that social support could be best conceptualized as the main effect directly influencing work-to-family conflict and job satisfaction.\n\n\nCONCLUSION\nNurses' psychological well-being and organizational attitudes could be enhanced by rearranging work conditions to reduce excessive workload and irregular work schedule. Also, leadership development programs should be implemented to increase the instrumental and emotional support of the supervisors.", "title": "" }, { "docid": "neg:1840016_7", "text": "The most widely used signal in clinical practice is the ECG. ECG conveys information regarding the electrical function of the heart, by altering the shape of its constituent waves, namely the P, QRS, and T waves. Thus, the required tasks of ECG processing are the reliable recognition of these waves, and the accurate measurement of clinically important parameters measured from the temporal distribution of the ECG constituent waves. In this paper, we shall review some current trends on ECG pattern recognition. 
In particular, we shall review non-linear transformations of the ECG, the use of principal component analysis (linear and non-linear), ways to map the transformed data into n-dimensional spaces, and the use of neural networks (NN) based techniques for ECG pattern recognition and classification. The problems we shall deal with are the QRS/PVC recognition and classification, the recognition of ischemic beats and episodes, and the detection of atrial fibrillation. Finally, a generalised approach to the classification problems in n-dimensional spaces will be presented using among others NN, radial basis function networks (RBFN) and non-linear principal component analysis (NLPCA) techniques. The performance measures of the sensitivity and specificity of these algorithms will also be presented using as training and testing data sets from the MIT-BIH and the European ST-T databases.", "title": "" }, { "docid": "neg:1840016_8", "text": "We study a new variant of Arikan's successive cancellation decoder (SCD) for polar codes. We first propose a new decoding algorithm on a new decoder graph, where the various stages of the graph are permuted. We then observe that, even though the usage of the permuted graph doesn't affect the encoder, it can significantly affect the decoding performance of a given polar code. The new permuted successive cancellation decoder (PSCD) typically exhibits a performance degradation, since the polar code is optimized for the standard SCD. We then present a new polar code construction rule matched to the PSCD and show their performance in simulations. For all rates we observe that the polar code matched to a given PSCD performs the same as the original polar code with the standard SCD. We also see that a PSCD with a reversal permutation can lead to a natural decoding order, avoiding the standard bit-reversal decoding order in SCD without any loss in performance.", "title": "" }, { "docid": "neg:1840016_9", "text": "Although research on trust in an organizational context has advanced considerably in recent years, the literature has yet to produce a set of generalizable propositions that inform our understanding of the organization and coordination of work. We propose that conceptualizing trust as an organizing principle is a powerful way of integrating the diverse trust literature and distilling generalizable implications for how trust affects organizing. We develop the notion of trust as an organizing principle by specifying structuring and mobilizing as two sets of causal pathways through which trust influences several important properties of organizations. We further describe specific mechanisms within structuring and mobilizing that influence interaction patterns and organizational processes. The principal aim of the framework is to advance the literature by connecting the psychological and sociological micro-foundations of trust with the macro-bases of organizing. The paper concludes by demonstrating how the framework can be applied to yield novel insights into traditional views of organizations and to stimulate original and innovative avenues of organizational research that consider both the benefits and downsides of trust. (Trust; Organizing Principle; Structuring; Mobilizing) Introduction In the introduction to this special issue we observed that empirical research on trust was not keeping pace with theoretical developments in the field. 
We viewed this as a significant limitation and surmised that a special issue devoted to empirical research on trust would serve as a valuable vehicle for advancing the literature. In addition to the lack of empirical research, we would also make the observation that theories and evidence accumulating on trust in organizations is not well integrated and that the literature as a whole lacks coherence. At a general level, extant research provides “accumulating evidence that trust has a number of important benefits for organizations and their members” (Kramer 1999, p. 569). More specifically, Dirks and Ferrin’s (2001) review of the literature points to two distinct means through which trust generates these benefits. The dominant approach emphasizes the direct effects that trust has on important organizational phenomena such as: communication, conflict management, negotiation processes, satisfaction, and performance (both individual and unit). A second, less well studied, perspective points to the enabling effects of trust, whereby trust creates or enhances the conditions, such as positive interpretations of another’s behavior, that are conducive to obtaining organizational outcomes like cooperation and higher performance. The identification of these two perspectives provides a useful way of organizing the literature and generating insight into the mechanisms through which trust influences organizational outcomes. However, we are still left with a set of findings that have yet to be integrated on a theoretical level in a way that yields a set of generalizable propositions about the effects of trust on organizing. We believe this is due to the fact that research has, for the most part, embedded trust into existing theories. As a result, trust has been studied in a variety of different ways to address a wide range of organizational questions. This has yielded a diverse and eclectic body of knowledge about the relationship between trust and various organizational outcomes. At the same time, this approach has resulted in a somewhat fragmented view of the role of trust in an organizational context as a whole. In the remainder of this paper we begin to address the challenge of integrating the fragmented trust literature. While it is not feasible to develop a comprehensive framework that synthesizes the vast and diverse trust literature in a single paper, we draw together several key strands that relate to the organizational context. In particular, our paper aims to advance the literature by connecting the psychological and sociological microfoundations of trust with the macro-bases of organizing.
We then use that framework to identify some exemplars of possible research questions and to point to possible downsides of trust. Organizing Principles As Ouchi (1980) discusses, a fundamental purpose of organizations is to attain goals that require coordinated efforts. Interdependence and uncertainty make goal attainment more difficult and create the need for organizational solutions. The subdivision of work implies that actors must exchange information and rely on others to accomplish organizational goals without having complete control over, or being able to fully monitor, others’ behaviors. Coordinating actions is further complicated by the fact that actors cannot assume that their interests and goals are perfectly aligned. Consequently, relying on others is difficult when there is uncertainty about their intentions, motives, and competencies. Managing interdependence among individuals, units, and activities in the face of behavioral uncertainty constitutes a key organizational challenge. Organizing principles represent a way of solving the problem of interdependence and uncertainty. An organizing principle is the logic by which work is coordinated and information is gathered, disseminated, and processed within and between organizations (Zander and Kogut 1995). An organizing principle represents a heuristic for how actors interpret and represent information and how they select appropriate behaviors and routines for coordinating actions. Examples of organizing principles include: market, hierarchy, and clan (Ouchi 1980). Other have referred to these organizing principles as authority, price, and norms (Adler 2001, Bradach and Eccles 1989, Powell 1990). Each of these principles operates on the basis of distinct mechanisms that orient, enable, and constrain economic behavior. For instance, authority as an organizing principle solves the problem of coordinating action in the face of interdependence and uncertainty by reallocating decision-making rights (Simon 1957, Coleman 1990). Price-based organizing principles revolve around the idea of making coordination advantageous for each party involved by aligning incentives (Hayek 1948, Alchian and Demsetz 1972). Compliance to internalized norms and the resulting self-control of the clan form is another organizing principle that has been identified as a means of achieving coordinated action (Ouchi 1980). We propose that trust is also an organizing principle and that conceptualizing trust in this way provides a powerful means of integrating the disparate research on trust and distilling generalizable implications for how trust affects organizing. We view trust as most closely related to the clan organizing principle. By definition clans rely on trust (Ouchi 1980). However, trust can and does occur in organizational contexts outside of clans. For instance, there are a variety of organizational arrangements where cooperation in mixed-motive situations depends on trust, such as in repeated strategic alliances (Gulati 1995), buyer-supplier relationships (Dyer and Chu this issue), and temporary groups in organizations (Meyerson et al. 1996). More generally, we believe that trust frequently operates in conjunction with other organizing principles. For instance, Dirks (2000) found that while authority is important for behaviors that can be observed or controlled, trust is important when there exists performance ambiguity or behaviors that cannot be observed or controlled. 
Because most organizations have a combination of behaviors that can and cannot be observed or controlled, authority and trust co-occur. More generally, we believe that mixed or plural forms are the norm, consistent with Bradach and Eccles (1989). In some situations, however, trust may be the primary organizing principle, such as when monitoring and formal controls are difficult and costly to use. In these cases, trust represents an efficient choice. In other situations, trust may be relied upon due to social, rather than efficiency, considerations. For instance, achieving a sense of personal belonging within a collectivity (Podolny and Barron 1997) and the desire to develop and maintain rewarding social attachments (Granovetter 1985) may serve as the impetus for relying on trust as an organizing principle. Trust as an Organizing Principle At a general level trust is the willingness to accept vulnerability based on positive expectations about another’s intentions or behaviors (Mayer et al. 1995, Rousseau et al. 1998). Because trust represents a positive assumption about the motives and intentions of another party, it allows people to economize on information processing and safeguarding behaviors. By representing an expectation that others will act in a way that serves, or at least is not inimical to, one’s interests (Gambetta 1988), trust as a heuristic is a frame of reference that al", "title": "" }, { "docid": "neg:1840016_10", "text": "Medical or Health related search queries constitute a significant portion of the total number of queries searched everyday on the web. For health queries, the authenticity or authoritativeness of search results is of utmost importance besides relevance. So far, research in automatic detection of authoritative sources on the web has mainly focused on a) link structure based approaches and b) supervised approaches for predicting trustworthiness. However, the aforementioned approaches have some inherent limitations. For example, several content farm and low quality sites artificially boost their link-based authority rankings by forming a syndicate of highly interlinked domains and content which is algorithmically hard to detect. Moreover, the number of positively labeled training samples available for learning trustworthiness is also limited when compared to the size of the web. In this paper, we propose a novel unsupervised approach to detect and promote authoritative domains in health segment using click-through data. We argue that standard IR metrics such as NDCG are relevance-centric and hence are not suitable for evaluating authority. We propose a new authority-centric evaluation metric based on side-by-side judgment of results. Using real world search query sets, we evaluate our approach both quantitatively and qualitatively and show that it succeeds in significantly improving the authoritativeness of results when compared to a standard web ranking baseline.", "title": "" }, { "docid": "neg:1840016_11", "text": "Continuous modification of the protein composition at synapses is a driving force for the plastic changes of synaptic strength, and provides the fundamental molecular mechanism of synaptic plasticity and information storage in the brain. 
Studying synaptic protein turnover is not only important for understanding learning and memory, but also has direct implication for understanding pathological conditions like aging, neurodegenerative diseases, and psychiatric disorders. Proteins involved in synaptic transmission and synaptic plasticity are typically concentrated at synapses of neurons and thus appear as puncta (clusters) in immunofluorescence microscopy images. Quantitative measurement of the changes in puncta density, intensity, and sizes of specific proteins provide valuable information on their function in synaptic transmission, circuit development, synaptic plasticity, and synaptopathy. Unfortunately, puncta quantification is very labor intensive and time consuming. In this article, we describe a software tool designed for the rapid semi-automatic detection and quantification of synaptic protein puncta from 2D immunofluorescence images generated by confocal laser scanning microscopy. The software, dubbed as SynPAnal (for Synaptic Puncta Analysis), streamlines data quantification for puncta density and average intensity, thereby increases data analysis throughput compared to a manual method. SynPAnal is stand-alone software written using the JAVA programming language, and thus is portable and platform-free.", "title": "" }, { "docid": "neg:1840016_12", "text": "OBJECTIVE\nThis study reports the psychometric properties of the 24-item version of the Diabetes Knowledge Questionnaire (DKQ).\n\n\nRESEARCH DESIGN AND METHODS\nThe original 60-item DKQ was administered to 502 adult Mexican-Americans with type 2 diabetes who are part of the Starr County Diabetes Education Study. The sample was composed of 252 participants and 250 support partners. The subjects were randomly assigned to the educational and social support intervention (n = 250) or to the wait-listed control group (n = 252). A shortened 24-item version of the DKQ was derived from the original instrument after data collection was completed. Reliability was assessed by means of Cronbach's coefficient alpha. To determine validity, differentiation between the experimental and control groups was conducted at baseline and after the educational portion of the intervention.\n\n\nRESULTS\nThe 24-item version of the DKQ (DKQ-24) attained a reliability coefficient of 0.78, indicating internal consistency, and showed sensitivity to the intervention, suggesting construct validation.\n\n\nCONCLUSIONS\nThe DKQ-24 is a reliable and valid measure of diabetes-related knowledge that is relatively easy to administer to either English or Spanish speakers.", "title": "" }, { "docid": "neg:1840016_13", "text": "Combining meaning, memory, and development, the perennially popular topic of intuition can be approached in a new way. Fuzzy-trace theory integrates these topics by distinguishing between meaning-based gist representations, which support fuzzy (yet advanced) intuition, and superficial verbatim representations of information, which support precise analysis. Here, I review the counterintuitive findings that led to the development of the theory and its most recent extensions to the neuroscience of risky decision making. 
These findings include memory interference (worse verbatim memory is associated with better reasoning); nonnumerical framing (framing effects increase when numbers are deleted from decision problems); developmental decreases in gray matter and increases in brain connectivity; developmental reversals in memory, judgment, and decision making (heuristics and biases based on gist increase from childhood to adulthood, challenging conceptions of rationality); and selective attention effects that provide critical tests comparing fuzzy-trace theory, expected utility theory, and its variants (e.g., prospect theory). Surprising implications for judgment and decision making in real life are also discussed, notably, that adaptive decision making relies mainly on gist-based intuition in law, medicine, and public health.", "title": "" }, { "docid": "neg:1840016_14", "text": "We present the Mind the Gap Model (MGM), an approach for interpretable feature extraction and selection. By placing interpretability criteria directly into the model, we allow for the model to both optimize parameters related to interpretability and to directly report a global set of distinguishable dimensions to assist with further data exploration and hypothesis generation. MGM extracts distinguishing features on real-world datasets of animal features, recipes ingredients, and disease co-occurrence. It also maintains or improves performance when compared to related approaches. We perform a user study with domain experts to show the MGM’s ability to help with dataset exploration.", "title": "" }, { "docid": "neg:1840016_15", "text": "Resilience has been most frequently defined as positive adaptation despite adversity. Over the past 40 years, resilience research has gone through several stages. From an initial focus on the invulnerable or invincible child, psychologists began to recognize that much of what seems to promote resilience originates outside of the individual. This led to a search for resilience factors at the individual, family, community - and, most recently, cultural - levels. In addition to the effects that community and culture have on resilience in individuals, there is growing interest in resilience as a feature of entire communities and cultural groups. Contemporary researchers have found that resilience factors vary in different risk contexts and this has contributed to the notion that resilience is a process. In order to characterize the resilience process in a particular context, it is necessary to identify and measure the risk involved and, in this regard, perceived discrimination and historical trauma are part of the context in many Aboriginal communities. Researchers also seek to understand how particular protective factors interact with risk factors and with other protective factors to support relative resistance. For this purpose they have developed resilience models of three main types: \"compensatory,\" \"protective,\" and \"challenge\" models. 
Two additional concepts are resilient reintegration, in which a confrontation with adversity leads individuals to a new level of growth, and the notion endorsed by some Aboriginal educators that resilience is an innate quality that needs only to be properly awakened.The review suggests five areas for future research with an emphasis on youth: 1) studies to improve understanding of what makes some Aboriginal youth respond positively to risk and adversity and others not; 2) case studies providing empirical confirmation of the theory of resilient reintegration among Aboriginal youth; 3) more comparative studies on the role of culture as a resource for resilience; 4) studies to improve understanding of how Aboriginal youth, especially urban youth, who do not live in self-governed communities with strong cultural continuity can be helped to become, or remain, resilient; and 5) greater involvement of Aboriginal researchers who can bring a nonlinear world view to resilience research.", "title": "" }, { "docid": "neg:1840016_16", "text": "With the dramatic growth of the game industry over the past decade, its rapid inclusion in many sectors of today’s society, and the increased complexity of games, game development has reached a point where it is no longer humanly possible to use only manual techniques to create games. Large parts of games need to be designed, built, and tested automatically. In recent years, researchers have delved into artificial intelligence techniques to support, assist, and even drive game development. Such techniques include procedural content generation, automated narration, player modelling and adaptation, and automated game design. This research is still very young, but already the games industry is taking small steps to integrate some of these techniques in their approach to design. The goal of this seminar was to bring together researchers and industry representatives who work at the forefront of artificial intelligence (AI) and computational intelligence (CI) in games, to (1) explore and extend the possibilities of AI-driven game design, (2) to identify the most viable applications of AI-driven game design in the game industry, and (3) to investigate new approaches to AI-driven game design. To this end, the seminar included a wide range of researchers and developers, including specialists in AI/CI for abstract games, commercial video games, and serious games. Thus, it fostered a better understanding of and unified vision on AI-driven game design, using input from both scientists as well as AI specialists from industry. Seminar November 19–24, 2017 – http://www.dagstuhl.de/17471 1998 ACM Subject Classification I.2.1 Artificial Intelligence Games", "title": "" }, { "docid": "neg:1840016_17", "text": "In this paper, we introduce an approach for distributed nonlinear control of multiple hovercraft-type underactuated vehicles with bounded and unidirectional inputs. First, a bounded nonlinear controller is given for stabilization and tracking of a single vehicle, using a cascade backstepping method. Then, this controller is combined with a distributed gradient-based control for multi-vehicle formation stabilization using formation potential functions previously constructed. The vehicles are used in the Caltech Multi-Vehicle Wireless Testbed (MVWT). 
We provide simulation and experimental results for stabilization and tracking of a single vehicle, and a simulation of stabilization of a six-vehicle formation, demonstrating that in all cases the control bounds and the control objective are satisfied.", "title": "" }, { "docid": "neg:1840016_18", "text": "The defect detection on manufactures is extremely important in the optimization of industrial processes; particularly, the visual inspection plays a fundamental role. The visual inspection is often carried out by a human expert. However, new technology features have made this inspection unreliable. For this reason, many researchers have been engaged to develop automatic analysis processes of manufactures and automatic optical inspections in the industrial production of printed circuit boards. Among the defects that could arise in this industrial process, those of the solder joints are very important, because they can lead to an incorrect functioning of the board; moreover, the amount of the solder paste can give some information on the quality of the industrial process. In this paper, a neural network-based automatic optical inspection system for the diagnosis of solder joint defects on printed circuit boards assembled in surface mounting technology is presented. The diagnosis is handled as a pattern recognition problem with a neural network approach. Five types of solder joints have been classified in respect to the amount of solder paste in order to perform the diagnosis with a high recognition rate and a detailed classification able to give information on the quality of the manufacturing process. The images of the boards under test are acquired and then preprocessed to extract the region of interest for the diagnosis. Three types of feature vectors are evaluated from each region of interest, which are the images of the solder joints under test, by exploiting the properties of the wavelet transform and the geometrical characteristics of the preprocessed images. The performances of three different classifiers which are a multilayer perceptron, a linear vector quantization, and a K-nearest neighbor classifier are compared. The n-fold cross-validation has been exploited to select the best architecture for the neural classifiers, while a number of experiments have been devoted to estimating the best value of K in the K-NN. The results have proved that the MLP network fed with the GW-features has the best recognition rate. This approach allows to carry out the diagnosis burden on image processing, feature extraction, and classification algorithms, reducing the cost and the complexity of the acquisition system. In fact, the experimental results suggest that the reason for the high recognition rate in the solder joint classification is due to the proper preprocessing steps followed as well as to the information contents of the features", "title": "" }, { "docid": "neg:1840016_19", "text": "This article presents P-Sense, a participatory sensing application for air pollution monitoring and control. The paper describes in detail the system architecture and individual components of a successfully implemented application. In addition, the paper points out several other research-oriented problems that need to be addressed before these applications can be effectively implemented in practice, in a large-scale deployment. Security, privacy, data visualization and validation, and incentives are part of our work-in-progress activities", "title": "" } ]
1840017
Neural Stance Detectors for Fake News Challenge
[ { "docid": "pos:1840017_0", "text": "Natural language sentence matching is a fundamental technology for a variety of tasks. Previous approaches either match sentences from a single direction or only apply single granular (wordby-word or sentence-by-sentence) matching. In this work, we propose a bilateral multi-perspective matching (BiMPM) model. Given two sentences P and Q, our model first encodes them with a BiLSTM encoder. Next, we match the two encoded sentences in two directions P against Q and Q against P . In each matching direction, each time step of one sentence is matched against all timesteps of the other sentence from multiple perspectives. Then, another BiLSTM layer is utilized to aggregate the matching results into a fixed-length matching vector. Finally, based on the matching vector, a decision is made through a fully connected layer. We evaluate our model on three tasks: paraphrase identification, natural language inference and answer sentence selection. Experimental results on standard benchmark datasets show that our model achieves the state-of-the-art performance on all tasks.", "title": "" }, { "docid": "pos:1840017_1", "text": "Enabling a computer to understand a document so that it can answer comprehension questions is a central, yet unsolved goal of NLP. A key factor impeding its solution by machine learned systems is the limited availability of human-annotated data. Hermann et al. (2015) seek to solve this problem by creating over a million training examples by pairing CNN and Daily Mail news articles with their summarized bullet points, and show that a neural network can then be trained to give good performance on this task. In this paper, we conduct a thorough examination of this new reading comprehension task. Our primary aim is to understand what depth of language understanding is required to do well on this task. We approach this from one side by doing a careful hand-analysis of a small subset of the problems and from the other by showing that simple, carefully designed systems can obtain accuracies of 72.4% and 75.8% on these two datasets, exceeding current state-of-the-art results by over 5% and approaching what we believe is the ceiling for performance on this task.1", "title": "" } ]
[ { "docid": "neg:1840017_0", "text": "Discretization is an essential preprocessing technique used in many knowledge discovery and data mining tasks. Its main goal is to transform a set of continuous attributes into discrete ones, by associating categorical values to intervals and thus transforming quantitative data into qualitative data. In this manner, symbolic data mining algorithms can be applied over continuous data and the representation of information is simplified, making it more concise and specific. The literature provides numerous proposals of discretization and some attempts to categorize them into a taxonomy can be found. However, in previous papers, there is a lack of consensus in the definition of the properties and no formal categorization has been established yet, which may be confusing for practitioners. Furthermore, only a small set of discretizers have been widely considered, while many other methods have gone unnoticed. With the intention of alleviating these problems, this paper provides a survey of discretization methods proposed in the literature from a theoretical and empirical perspective. From the theoretical perspective, we develop a taxonomy based on the main properties pointed out in previous research, unifying the notation and including all the known methods up to date. Empirically, we conduct an experimental study in supervised classification involving the most representative and newest discretizers, different types of classifiers, and a large number of data sets. The results of their performances measured in terms of accuracy, number of intervals, and inconsistency have been verified by means of nonparametric statistical tests. Additionally, a set of discretizers are highlighted as the best performing ones.", "title": "" }, { "docid": "neg:1840017_1", "text": "New credit cards containing Europay, MasterCard and Visa (EMV) chips for enhanced security used in-store purchases rather than online purchases have been adopted considerably. EMV supposedly protects the payment cards in such a way that the computer chip in a card referred to as chip-and-pin cards generate a unique one time code each time the card is used. The one time code is designed such that if it is copied or stolen from the merchant system or from the system terminal cannot be used to create a counterfeit copy of that card or counterfeit chip of the transaction. However, in spite of this design, EMV technology is not entirely foolproof from failure. In this paper we discuss the issues, failures and fraudulent cases associated with EMV Chip-And-Card technology.", "title": "" }, { "docid": "neg:1840017_2", "text": "Although mature technologies exist for acquiring images, geometry, and normals of small objects, they remain cumbersome and time-consuming for non-experts to employ on a large scale. In an archaeological setting, a practical acquisition system for routine use on every artifact and fragment would open new possibilities for archiving, analysis, and dissemination. We present an inexpensive system for acquiring all three types of information, and associated metadata, for small objects such as fragments of wall paintings. The acquisition system requires minimal supervision, so that a single, non-expert user can scan at least 10 fragments per hour. To achieve this performance, we introduce new algorithms to robustly and automatically align range scans, register 2-D scans to 3-D geometry, and compute normals from 2-D scans. 
As an illustrative application, we present a novel 3-D matching algorithm that efficiently searches for matching fragments using the scanned geometry.", "title": "" }, { "docid": "neg:1840017_3", "text": "Over 50 million people worldwide suffer from epilepsy. Traditional diagnosis of epilepsy relies on tedious visual screening by highly trained clinicians from lengthy EEG recording that contains the presence of seizure (ictal) activities. Nowadays, there are many automatic systems that can recognize seizure-related EEG signals to help the diagnosis. However, it is very costly and inconvenient to obtain long-term EEG data with seizure activities, especially in areas short of medical resources. We demonstrate in this paper that we can use the interictal scalp EEG data, which is much easier to collect than the ictal data, to automatically diagnose whether a person is epileptic. In our automated EEG recognition system, we extract three classes of features from the EEG data and build Probabilistic Neural Networks (PNNs) fed with these features. We optimize the feature extraction parameters and combine these PNNs through a voting mechanism. As a result, our system achieves an impressive 94.07% accuracy.", "title": "" }, { "docid": "neg:1840017_4", "text": "Modern applications employ text files widely for providing data storage in a readable format for applications ranging from database systems to mobile phones. Traditional text processing tools are built around a byte-at-a-time sequential processing model that introduces significant branch and cache miss penalties. Recent work has explored an alternative, transposed representation of text, Parabix (Parallel Bit Streams), to accelerate scanning and parsing using SIMD facilities. This paper advocates and develops Parabix as a general framework and toolkit, describing the software toolchain and run-time support that allows applications to exploit modern SIMD instructions for high performance text processing. The goal is to generalize the techniques to ensure that they apply across a wide variety of applications and architectures. The toolchain enables the application developer to write constructs assuming unbounded character streams and Parabix's code translator generates code based on machine specifics (e.g., SIMD register widths). The general argument in support of Parabix technology is made by a detailed performance and energy study of XML parsing across a range of processor architectures. Parabix exploits intra-core SIMD hardware and demonstrates 2×-7× speedup and 4× improvement in energy efficiency when compared with two widely used conventional software parsers, Expat and Apache-Xerces. SIMD implementations across three generations of x86 processors are studied including the new SandyBridge. The 256-bit AVX technology in Intel SandyBridge is compared with the well established 128-bit SSE technology to analyze the benefits and challenges of 3-operand instruction formats and wider SIMD hardware. Finally, the XML program is partitioned into pipeline stages to demonstrate that thread-level parallelism enables the application to exploit SIMD units scattered across the different cores, achieving improved performance (2× on 4 cores) while maintaining single-threaded energy levels.", "title": "" }, { "docid": "neg:1840017_5", "text": "Crowdsourcing is becoming more and more important for commercial purposes. 
With the growth of crowdsourcing platforms like Amazon Mechanical Turk or Microworkers, a huge work force and a large knowledge base can be easily accessed and utilized. But due to the anonymity of the workers, they are encouraged to cheat the employers in order to maximize their income. Thus, this paper we analyze two widely used crowd-based approaches to validate the submitted work. Both approaches are evaluated with regard to their detection quality, their costs and their applicability to different types of typical crowdsourcing tasks.", "title": "" }, { "docid": "neg:1840017_6", "text": "Mediation is said to occur when a causal effect of some variable X on an outcome Y is explained by some intervening variable M. The authors recommend that with small to moderate samples, bootstrap methods (B. Efron & R. Tibshirani, 1993) be used to assess mediation. Bootstrap tests are powerful because they detect that the sampling distribution of the mediated effect is skewed away from 0. They argue that R. M. Baron and D. A. Kenny's (1986) recommendation of first testing the X --> Y association for statistical significance should not be a requirement when there is a priori belief that the effect size is small or suppression is a possibility. Empirical examples and computer setups for bootstrap analyses are provided.", "title": "" }, { "docid": "neg:1840017_7", "text": "The systematic maintenance of mining machinery and equipment is the crucial factor for the proper functioning of a mine without production process interruption. For high-quality maintenance of the technical systems in mining, it is necessary to conduct a thorough analysis of machinery and accompanying elements in order to determine the critical elements in the system which are prone to failures. The risk assessment of the failures of system parts leads to obtaining precise indicators of failures which are also excellent guidelines for maintenance services. This paper presents a model of the risk assessment of technical systems failure based on the fuzzy sets theory, fuzzy logic and min–max composition. The risk indicators, severity, occurrence and detectability are analyzed. The risk indicators are given as linguistic variables. The model presented was applied for assessing the risk level of belt conveyor elements failure which works in severe conditions in a coal mine. Moreover, this paper shows the advantages of this model when compared to a standard procedure of RPN calculating – in the FMEA method of risk", "title": "" }, { "docid": "neg:1840017_8", "text": "Android has become the most popular smartphone operating system. This rapidly increasing adoption of Android has resulted in significant increase in the number of malwares when compared with previous years. There exist lots of antimalware programs which are designed to effectively protect the users’ sensitive data in mobile systems from such attacks. In this paper, our contribution is twofold. Firstly, we have analyzed the Android malwares and their penetration techniques used for attacking the systems and antivirus programs that act against malwares to protect Android systems. We categorize many of the most recent antimalware techniques on the basis of their detection methods. We aim to provide an easy and concise view of the malware detection and protection mechanisms and deduce their benefits and limitations. 
Secondly, we have forecast Android market trends for the year up to 2018 and provide a unique hybrid security solution and take into account both the static and dynamic analysis an android application. Keywords—Android; Permissions; Signature", "title": "" }, { "docid": "neg:1840017_9", "text": "Episacral lipoma is a small, tender subcutaneous nodule primarily occurring over the posterior iliac crest. Episacral lipoma is a significant and treatable cause of acute and chronic low back pain. Episacral lipoma occurs as a result of tears in the thoracodorsal fascia and subsequent herniation of a portion of the underlying dorsal fat pad through the tear. This clinical entity is common, and recognition is simple. The presence of a painful nodule with disappearance of pain after injection with anaesthetic, is diagnostic. Medication and physical therapy may not be effective. Local injection of the nodule with a solution of anaesthetic and steroid is effective in treating the episacral lipoma. Here we describe 2 patients with painful nodules over the posterior iliac crest. One patient complained of severe lower back pain radiating to the left lower extremity and this patient subsequently underwent disc operation. The other patient had been treated for greater trochanteric pain syndrome. In both patients, symptoms appeared to be relieved by local injection of anaesthetic and steroid. Episacral lipoma should be considered during diagnostic workup and in differential diagnosis of acute and chronic low back pain.", "title": "" }, { "docid": "neg:1840017_10", "text": "From a dynamic system point of view, bat locomotion stands out among other forms of flight. During a large part of bat wingbeat cycle the moving body is not in a static equilibrium. This is in sharp contrast to what we observe in other simpler forms of flight such as insects, which stay at their static equilibrium. Encouraged by biological examinations that have revealed bats exhibit periodic and stable limit cycles, this work demonstrates that one effective approach to stabilize articulated flying robots with bat morphology is locating feasible limit cycles for these robots; then, designing controllers that retain the closed-loop system trajectories within a bounded neighborhood of the designed periodic orbits. This control design paradigm has been evaluated in practice on a recently developed bio-inspired robot called Bat Bot (B2).", "title": "" }, { "docid": "neg:1840017_11", "text": "Regular expressions have served as the dominant workhorse of practical information extraction for several years. However, there has been little work on reducing the manual effort involved in building high-quality, complex regular expressions for information extraction tasks. In this paper, we propose ReLIE, a novel transformation-based algorithm for learning such complex regular expressions. We evaluate the performance of our algorithm on multiple datasets and compare it against the CRF algorithm. We show that ReLIE, in addition to being an order of magnitude faster, outperforms CRF under conditions of limited training data and cross-domain data. Finally, we show how the accuracy of CRF can be improved by using features extracted by ReLIE.", "title": "" }, { "docid": "neg:1840017_12", "text": "Anaerobic saccharolytic bacteria thriving at high pH values were studied in a cellulose-degrading enrichment culture originating from the alkaline lake, Verkhneye Beloye (Central Asia). 
In situ hybridization of the enrichment culture with 16S rRNA-targeted probes revealed that abundant, long, thin, rod-shaped cells were related to Cytophaga. Bacteria of this type were isolated with cellobiose and five isolates were characterized. Isolates were thin, flexible, gliding rods. They formed a spherical cyst-like structure at one cell end during the late growth phase. The pH range for growth was 7.5–10.2, with an optimum around pH 8.5. Cultures produced a pinkish pigment tentatively identified as a carotenoid. Isolates did not degrade cellulose, indicating that they utilized soluble products formed by so far uncultured hydrolytic cellulose degraders. Besides cellobiose, the isolates utilized other carbohydrates, including xylose, maltose, xylan, starch, and pectin. The main organic fermentation products were propionate, acetate, and succinate. Oxygen, which was not used as electron acceptor, impaired growth. A representative isolate, strain Z-7010, with Marinilabilia salmonicolor as the closest relative, is described as a new genus and species, Alkaliflexus imshenetskii. This is the first cultivated alkaliphilic anaerobic member of the Cytophaga/Flavobacterium/Bacteroides phylum.", "title": "" }, { "docid": "neg:1840017_13", "text": "Cheating is a real problem in the Internet of Things. The fundamental question that needs to be answered is how we can trust the validity of the data being generated in the first place. The problem, however, isnt inherent in whether or not to embrace the idea of an open platform and open-source software, but to establish a methodology to verify the trustworthiness and control any access. This paper focuses on building an access control model and system based on trust computing. This is a new field of access control techniques which includes Access Control, Trust Computing, Internet of Things, network attacks, and cheating technologies. Nevertheless, the target access control systems can be very complex to manage. This paper presents an overview of the existing work on trust computing, access control models and systems in IoT. It not only summarizes the latest research progress, but also provides an understanding of the limitations and open issues of the existing work. It is expected to provide useful guidelines for future research. Access Control, Trust Management, Internet of Things Today, our world is characterized by increasing connectivity. Things in this world are increasingly being connected. Smart phones have started an era of global proliferation and rapid consumerization of smart devices. It is predicted that the next disruptive transformation will be the concept of ‘Internet of Things’ [2]. From networked computers to smart devices, and to connected people, we are now moving towards connected ‘things’. Items of daily use are being turned into smart devices as various sensors are embedded in consumer and enterprise equipment, industrial and household appliances and personal devices. Pervasive connectivity mechanisms build bridges between our clothing and vehicles. Interaction among these things/devices can happen with little or no human intervention, thereby conjuring an enormous network, namely the Internet of Things (IoT). One of the primary goals behind IoT is to sense and send data over remote locations to enable detection of significant events, and take relevant actions sooner rather than later [25]. This technological trend is being pursued actively in all areas including the medical and health care fields. 
IoT provides opportunities to dramatically improve many medical applications, such as glucose level sensing, remote health monitoring (e.g. electrocardiogram, blood pressure, body temperature, and oxygen saturation monitoring, etc), rehabilitation systems, medication management, and ambient assisted living systems. The connectivity offered by IoT extends from human-to-machine to machine-to-machine communications. The interconnected devices collect all kinds of data about patients. Intelligent and ubiquitous services can then be built upon the useful information extracted from the data. During the data aggregation, fusion, and analysis processes, user privacy and information security become major concerns for IoT services and applications. Security breaches will seriously compromise user acceptance and consumption on IoT applications in the medical and health care areas. The large scale of integration of heterogeneous devices in IoT poses a great challenge for the provision of standard security services. Many IoT devices are vulnerable to attacks since no high-level intelligence can be enabled on these passive devices [10], and security vulnerabilities in products uncovered by researchers have spread from cars [13] to garage doors [9] and to skateboards [35]. Technological utopianism surrounding IoT was very real until the emergence of the Volkswagen emissions scandal [4]. The German conglomerate admitted installing software in its diesel cars that recognizes and identifies patterns when vehicles are being tested for nitrogen oxide emissions and cuts them so that they fall within the limits prescribed by US regulators (004 g/km). Once the test is over, the car returns to its normal state: emitting nitrogen oxides (nitric oxide and nitrogen dioxide) at up to 35 times the US legal limit. The focus of IoT is not the thing itself, but the data generated by the devices and the value therein. What Volkswagen has brought to light goes far beyond protecting data and privacy, preventing intrusion, and keeping the integrity of the data. It casts doubts on the credibility of the IoT industry and its ability to secure data, reach agreement on standards, or indeed guarantee that consumer privacy rights are upheld. All in all, IoT holds tremendous potential to improve our health, make our environment safer, boost productivity and efficiency, and conserve both water and energy. IoT needs to improve its trustworthiness, however, before it can be used to solve challenging economic and environmental problems tied to our social lives. The fundamental question that needs to be answered is how we can trust the validity of the data being generated in the first place. If a node of IoT cheats, how does a system identify the cheating node and prevent a malicious attack from misbehaving nodes? This paper focuses on an access control mechanism that will only grant network access permission to trustworthy nodes. Embedding trust management into access control will improve the system's ability to discover untrustworthy participating nodes and prevent discriminatory attacks. There has been substantial research in this domain, most of which has been related to attacks like self-promotion and ballot stuffing where a node falsely promotes its importance and boosts the reputation of a malicious node (by providing good recommendations) to engage in a collusion-style attack. 
The traditional trust computation model is inefficient in differentiating a participant object in IoT, which is designed to win trust by cheating. In particular, the trust computation model will fail when a malicious node intelligently adjusts its behavior to hide its defect and obtain a higher trust value for its own gain. 1 Access Control Model and System IoT comprises the following three Access Control types: – Role-based access control (RBAC) – Credential-based access control (CBAC) — in order to access some resources and data, users require certain certificate information that falls into the following two types: 1. Attribute-Based access control (ABAC): If a user has some special attributes, it is possible to access a particular resource or piece of data. 2. Capability-Based access control (Cap-BAC): A capability is a communicable, unforgeable rights markup, which corresponds to a value that uniquely specifies certain access rights to objects owned by subjects. – Trust-based access control (TBAC) In addition, there are also combinations of the aforementioned three methods. In order to improve the security of the system, some of the access control methods include encryption and key management mechanisms.", "title": "" }, { "docid": "neg:1840017_14", "text": "Elasticity is undoubtedly one of the most striking characteristics of cloud computing. Especially in the area of high performance computing (HPC), elasticity can be used to execute irregular and CPU-intensive applications. However, the on-the-fly increase/decrease in resources is more widespread in Web systems, which have their own IaaS-level load balancer. Considering the HPC area, current approaches usually focus on batch jobs or assumptions such as previous knowledge of application phases, source code rewriting or the stop-reconfigure-and-go approach for elasticity. In this context, this article presents AutoElastic, a PaaS-level elasticity model for HPC in the cloud. Its differential approach consists of providing elasticity for high performance applications without user intervention or source code modification. The scientific contributions of AutoElastic are twofold: (i) an Aging-based approach to resource allocation and deallocation actions to avoid unnecessary virtual machine (VM) reconfigurations (thrashing) and (ii) asynchronism in creating and terminating VMs in such a way that the application does not need to wait for completing these procedures. The prototype evaluation using OpenNebula middleware showed performance gains of up to 26 percent in the execution time of an application with the AutoElastic manager. Moreover, we obtained low intrusiveness for AutoElastic when reconfigurations do not occur.", "title": "" }, { "docid": "neg:1840017_15", "text": "During natural disasters or crises, users on social media tend to easily believe contents of postings related to the events, and retweet the postings with hoping them to be reached to many other users. Unfortunately, there are malicious users who understand the tendency and post misinformation such as spam and fake messages with expecting wider propagation. To resolve the problem, in this paper we conduct a case study of 2013 Moore Tornado and Hurricane Sandy. 
Concretely, we (i) understand behaviors of these malicious users, (ii) analyze properties of spam, fake and legitimate messages, (iii) propose flat and hierarchical classification approaches, and (iv) detect both fake and spam messages with even distinguishing between them. Our experimental results show that our proposed approaches identify spam and fake messages with 96.43% accuracy and 0.961 F-measure.", "title": "" }, { "docid": "neg:1840017_16", "text": "Deep Neural Networks (DNNs) have been demonstrated to perform exceptionally well on most recognition tasks such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. This phenomenon has recently attracted a lot of attention but it has not been extensively studied on multiple, large-scale datasets and complex tasks such as semantic segmentation which often require more specialised networks with additional components such as CRFs, dilated convolutions, skip-connections and multiscale processing. In this paper, we present what to our knowledge is the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets. We analyse the effect of different network architectures, model capacity and multiscale processing, and show that many observations made on the task of classification do not always transfer to this more complex task. Furthermore, we show how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses. Our observations will aid future efforts in understanding and defending against adversarial examples. Moreover, in the shorter term, we show which segmentation models should currently be preferred in safety-critical applications due to their inherent robustness.", "title": "" }, { "docid": "neg:1840017_17", "text": "In this paper, a wideband dual polarized self-complementary connected array antenna with low radar cross section (RCS) under normal and oblique incidence is presented. First, an analytical model of the multilayer structure is proposed in order to obtain a fast and reliable predimensioning tool providing an optimized design of the infinite array. The accuracy of this model is demonstrated thanks to comparative simulations with a full wave analysis software. RCS reduction compared to a perfectly conducting flat plate of at least 10 dB has been obtained over an ultrawide bandwidth of nearly 7:1 at normal incidence and 5:1 (3.8 to 19 GHz) at 60° in both polarizations. These performances are confirmed by finite element tearing and interconnecting computations of finite arrays of different sizes. Finally, the realization of a $28 \\times 28$ cell prototype and measurement results are detailed.", "title": "" }, { "docid": "neg:1840017_18", "text": "Fog/edge computing has been proposed to be integrated with Internet of Things (IoT) to enable computing services devices deployed at network edge, aiming to improve the user’s experience and resilience of the services in case of failures. With the advantage of distributed architecture and close to end-users, fog/edge computing can provide faster response and greater quality of service for IoT applications. Thus, fog/edge computing-based IoT becomes future infrastructure on IoT development. 
To develop fog/edge computing-based IoT infrastructure, the architecture, enabling techniques, and issues related to IoT should be investigated first, and then the integration of fog/edge computing and IoT should be explored. To this end, this paper conducts a comprehensive overview of IoT with respect to system architecture, enabling technologies, security and privacy issues, and present the integration of fog/edge computing and IoT, and applications. Particularly, this paper first explores the relationship between cyber-physical systems and IoT, both of which play important roles in realizing an intelligent cyber-physical world. Then, existing architectures, enabling technologies, and security and privacy issues in IoT are presented to enhance the understanding of the state of the art IoT development. To investigate the fog/edge computing-based IoT, this paper also investigate the relationship between IoT and fog/edge computing, and discuss issues in fog/edge computing-based IoT. Finally, several applications, including the smart grid, smart transportation, and smart cities, are presented to demonstrate how fog/edge computing-based IoT to be implemented in real-world applications.", "title": "" }, { "docid": "neg:1840017_19", "text": "This paper presents an analysis of FastSLAM - a Rao-Blackwellised particle filter formulation of simultaneous localisation and mapping. It shows that the algorithm degenerates with time, regardless of the number of particles used or the density of landmarks within the environment, and would always produce optimistic estimates of uncertainty in the long-term. In essence, FastSLAM behaves like a non-optimal local search algorithm; in the short-term it may produce consistent uncertainty estimates but, in the long-term, it is unable to adequately explore the state-space to be a reasonable Bayesian estimator. However, the number of particles and landmarks does affect the accuracy of the estimated mean and, given sufficient particles, FastSLAM can produce good non-stochastic estimates in practice. FastSLAM also has several practical advantages, particularly with regard to data association, and would probably work well in combination with other versions of stochastic SLAM, such as EKF-based SLAM", "title": "" } ]
1840018
Learning Decision Trees Using the Area Under the ROC Curve
[ { "docid": "pos:1840018_0", "text": "In this paper, we address the problem of retrospectively pruning decision trees induced from data, according to a top-down approach. This problem has received considerable attention in the areas of pattern recognition and machine learning, and many distinct methods have been proposed in literature. We make a comparative study of six well-known pruning methods with the aim of understanding their theoretical foundations, their computational complexity, and the strengths and weaknesses of their formulation. Comments on the characteristics of each method are empirically supported. In particular, a wide experimentation performed on several data sets leads us to opposite conclusions on the predictive accuracy of simplified trees from some drawn in the literature. We attribute this divergence to differences in experimental designs. Finally, we prove and make use of a property of the reduced error pruning method to obtain an objective evaluation of the tendency to overprune/underprune observed in each method. Index Terms—Decision trees, top-down induction of decision trees, simplification of decision trees, pruning and grafting operators, optimal pruning, comparative studies.", "title": "" }, { "docid": "pos:1840018_1", "text": "The area under the ROC curve, or the equivalent Gini index, is a widely used measure of performance of supervised classification rules. It has the attractive property that it side-steps the need to specify the costs of the different kinds of misclassification. However, the simple form is only applicable to the case of two classes. We extend the definition to the case of more than two classes by averaging pairwise comparisons. This measure reduces to the standard form in the two class case. We compare its properties with the standard measure of proportion correct and an alternative definition of proportion correct based on pairwise comparison of classes for a simple artificial case and illustrate its application on eight data sets. On the data sets we examined, the measures produced similar, but not identical results, reflecting the different aspects of performance that they were measuring. Like the area under the ROC curve, the measure we propose is useful in those many situations where it is impossible to give costs for the different kinds of misclassification.", "title": "" } ]
[ { "docid": "neg:1840018_0", "text": "Today’s smartphone users face a security dilemma: many apps they install operate on privacy-sensitive data, although they might originate from developers whose trustworthiness is hard to judge. Researchers have addressed the problem with more and more sophisticated static and dynamic analysis tools as an aid to assess how apps use private user data. Those tools, however, rely on the manual configuration of lists of sources of sensitive data as well as sinks which might leak data to untrusted observers. Such lists are hard to come by. We thus propose SUSI, a novel machine-learning guided approach for identifying sources and sinks directly from the code of any Android API. Given a training set of hand-annotated sources and sinks, SUSI identifies other sources and sinks in the entire API. To provide more fine-grained information, SUSI further categorizes the sources (e.g., unique identifier, location information, etc.) and sinks (e.g., network, file, etc.). For Android 4.2, SUSI identifies hundreds of sources and sinks with over 92% accuracy, many of which are missed by current information-flow tracking tools. An evaluation of about 11,000 malware samples confirms that many of these sources and sinks are indeed used. We furthermore show that SUSI can reliably classify sources and sinks even in new, previously unseen Android versions and components like Google Glass or", "title": "" }, { "docid": "neg:1840018_1", "text": "We present a method for human pose tracking that learns explicitly about the dynamic effects of human motion on joint appearance. In contrast to previous techniques which employ generic tools such as dense optical flow or spatiotemporal smoothness constraints to pass pose inference cues between frames, our system instead learns to predict joint displacements from the previous frame to the current frame based on the possibly changing appearance of relevant pixels surrounding the corresponding joints in the previous frame. This explicit learning of pose deformations is formulated by incorporating concepts from human pose estimation into an optical flow-like framework. With this approach, state-of-the-art performance is achieved on standard benchmarks for various pose tracking tasks including 3D body pose tracking in RGB video, 3D hand pose tracking in depth sequences, and 3D hand gesture tracking in RGB video.", "title": "" }, { "docid": "neg:1840018_2", "text": "The task of measuring sentence similarity is defined as determining how similar the meanings of two sentences are. Computing sentence similarity is not a trivial task, due to the variability of natural language expressions. Measuring semantic similarity of sentences is closely related to semantic similarity between words. It makes a relationship between a word and the sentence through their meanings. The intention is to enhance the concepts of semantics over the syntactic measures that are able to categorize the pair of sentences effectively. Semantic similarity plays a vital role in Natural language processing, Informational Retrieval, Text Mining, Q & A systems, text-related research and application area. Traditional similarity measures are based on the syntactic features and other path based measures. In this project, we evaluated and tested three different semantic similarity approaches like cosine similarity, path based approach (wu – palmer and shortest path based), and feature based approach. 
Our proposed approaches exploits preprocessing of pair of sentences which identifies the bag of words and then applying the similarity measures like cosine similarity, path based similarity measures. In our approach the main contributions are comparison of existing similarity measures and feature based measure based on Wordnet. In feature based approach we perform the tagging and lemmatization and generates the similarity score based on the nouns and verbs. We evaluate our project output by comparing the existing measures based on different thresholds and comparison between three approaches. Finally we conclude that feature based measure generates better semantic score.", "title": "" }, { "docid": "neg:1840018_3", "text": "How do we find patterns in author-keyword associations, evolving over time? Or in data cubes (tensors), with product-branchcustomer sales information? And more generally, how to summarize high-order data cubes (tensors)? How to incrementally update these patterns over time? Matrix decompositions, like principal component analysis (PCA) and variants, are invaluable tools for mining, dimensionality reduction, feature selection, rule identification in numerous settings like streaming data, text, graphs, social networks, and many more settings. However, they have only two orders (i.e., matrices, like author and keyword in the previous example).\n We propose to envision such higher-order data as tensors, and tap the vast literature on the topic. However, these methods do not necessarily scale up, let alone operate on semi-infinite streams. Thus, we introduce a general framework, incremental tensor analysis (ITA), which efficiently computes a compact summary for high-order and high-dimensional data, and also reveals the hidden correlations. Three variants of ITA are presented: (1) dynamic tensor analysis (DTA); (2) streaming tensor analysis (STA); and (3) window-based tensor analysis (WTA). In paricular, we explore several fundamental design trade-offs such as space efficiency, computational cost, approximation accuracy, time dependency, and model complexity.\n We implement all our methods and apply them in several real settings, such as network anomaly detection, multiway latent semantic indexing on citation networks, and correlation study on sensor measurements. Our empirical studies show that the proposed methods are fast and accurate and that they find interesting patterns and outliers on the real datasets.", "title": "" }, { "docid": "neg:1840018_4", "text": "In the last years, the advent of unmanned aerial vehicles (UAVs) for civilian remote sensing purposes has generated a lot of interest because of the various new applications they can offer. One of them is represented by the automatic detection and counting of cars. In this paper, we propose a novel car detection method. It starts with a feature extraction process based on scalar invariant feature transform (SIFT) thanks to which a set of keypoints is identified in the considered image and opportunely described. Successively, the process discriminates between keypoints assigned to cars and those associated with all remaining objects by means of a support vector machine (SVM) classifier. Experimental results have been conducted on a real UAV scene. They show how the proposed method allows providing interesting detection performances.", "title": "" }, { "docid": "neg:1840018_5", "text": "A GUI skeleton is the starting point for implementing a UI design image. 
To obtain a GUI skeleton from a UI design image, developers have to visually understand UI elements and their spatial layout in the image, and then translate this understanding into proper GUI components and their compositions. Automating this visual understanding and translation would be beneficial for bootstraping mobile GUI implementation, but it is a challenging task due to the diversity of UI designs and the complexity of GUI skeletons to generate. Existing tools are rigid as they depend on heuristically-designed visual understanding and GUI generation rules. In this paper, we present a neural machine translator that combines recent advances in computer vision and machine translation for translating a UI design image into a GUI skeleton. Our translator learns to extract visual features in UI images, encode these features' spatial layouts, and generate GUI skeletons in a unified neural network framework, without requiring manual rule development. For training our translator, we develop an automated GUI exploration method to automatically collect large-scale UI data from real-world applications. We carry out extensive experiments to evaluate the accuracy, generality and usefulness of our approach.", "title": "" }, { "docid": "neg:1840018_6", "text": "Training workshops and professional meetings are important tools for capacity building and professional development. These social events provide professionals and educators a platform where they can discuss and exchange constructive ideas, and receive feedback. In particular, competition-based training workshops where participants compete on solving similar and common challenging problems are effective tools for stimulating students’ learning and aspirations. This paper reports the results of a two-day training workshop where memory and disk forensics were taught using a competition-based security educational tool. The workshop included training sessions for professionals, educators, and students to learn features of Tracer FIRE, a competition-based digital forensics and assessment tool, developed by Sandia National Laboratories. The results indicate that competitionbased training can be very effective in stimulating students’ motivation to learn. However, extra caution should be taken into account when delivering these types of training workshops. Keywords-component; cyber security, digital forenciscs, partcipatory training workshop, competition-based learning,", "title": "" }, { "docid": "neg:1840018_7", "text": "Botnet is one of the major threats on the Internet for committing cybercrimes, such as DDoS attacks, stealing sensitive information, spreading spams, etc. It is a challenging issue to detect modern botnets that are continuously improving for evading detection. In this paper, we propose a machine learning based botnet detection system that is shown to be effective in identifying P2P botnets. Our approach extracts convolutional version of effective flow-based features, and trains a classification model by using a feed-forward artificial neural network. The experimental results show that the accuracy of detection using the convolutional features is better than the ones using the traditional features. It can achieve 94.7% of detection accuracy and 2.2% of false positive rate on the known P2P botnet datasets. Furthermore, our system provides an additional confidence testing for enhancing performance of botnet detection. It further classifies the network traffic of insufficient confidence in the neural network. 
The experiment shows that this stage can increase the detection accuracy up to 98.6% and decrease the false positive rate up to 0.5%.", "title": "" }, { "docid": "neg:1840018_8", "text": "Type I membrane oscillators such as the Connor model (Connor et al. 1977) and the Morris-Lecar model (Morris and Lecar 1981) admit very low frequency oscillations near the critical applied current. Hansel et al. (1995) have numerically shown that synchrony is difficult to achieve with these models and that the phase resetting curve is strictly positive. We use singular perturbation methods and averaging to show that this is a general property of Type I membrane models. We show in a limited sense that so called Type II resetting occurs with models that obtain rhythmicity via a Hopf bifurcation. We also show the differences between synapses that act rapidly and those that act slowly and derive a canonical form for the phase interactions.", "title": "" }, { "docid": "neg:1840018_9", "text": "Although there is interest in the educational potential of online multiplayer games and virtual worlds, there is still little evidence to explain specifically what and how people learn from these environments. This paper addresses this issue by exploring the experiences of couples that play World of Warcraft together. Learning outcomes were identified (involving the management of ludic, social and material resources) along with learning processes, which followed Wenger’s model of participation in Communities of Practice. Comparing this with existing literature suggests that productive comparisons can be drawn with the experiences of distance education students and the social pressures that affect their participation. Introduction Although there is great interest in the potential that computer games have in educational settings (eg, McFarlane, Sparrowhawk & Heald, 2002), and their relevance to learning more generally (eg, Gee, 2003), there has been relatively little in the way of detailed accounts of what is actually learnt when people play (Squire, 2002), and still less that relates such learning to formal education. In this paper, we describe a study that explores how people learn when they play the massively multiplayer online role-playing game (MMORPG), World of Warcraft. Detailed, qualitative research was undertaken with couples to explore their play, adopting a social perspective on learning. The paper concludes with a discussion that relates this to formal curricula and considers the implications for distance learning. Background Researchers have long been interested in games and learning. There is, for example, a tradition of work within psychology exploring what makes games motivating, and relating this to learning (eg, Malone & Lepper, 1987). Games have been recently featured in mainstream educational policy (eg, DfES, 2005), and it has been suggested (eg, Gee, 2003) that they provide a model that should inform educational practice more generally. However, research exploring how games can be used in formal education suggests that the potential value of games to support learning is not so easy to realise. McFarlane et al (2002, p. 
16), for example, argued that ‘the greatest obstacle to integrating games into the curriculum is the mismatch between the skills and knowledge developed in games, and those recognised explicitly within the school system’. Mitchell and Savill-Smith (2004) noted that although games have been used to support various kinds of learning (eg, recall of content, computer literacy, strategic skills), such uses were often problematic, being complicated by the need to integrate games into existing educational contexts. Furthermore, games specifically designed to be educational were ‘typically disliked’ (p. 44) as well as being expensive to produce. Until recently, research on the use of games in education tended to focus on ‘stand alone’ or single player games. Such games can, to some extent, be assessed in terms of their content coverage or instructional design processes, and evaluated for their ‘fit’ with a given curriculum (eg, Kirriemuir, 2002). Gaming, however, is generally a social activity, and this is even more apparent when we move from a consideration of single player games to a focus on multiplayer, online games. Viewing games from a social perspective opens the possibility of understanding learning as a social achievement, not just a process of content acquisition or skills development (Squire, 2002). In this study, we focus on a particular genre of online, multiplayer game: an MMORPG. MMORPGs incorporate structural elements drawn from table-top role-playing games (Dungeons & Dragons being the classic example). Play takes place in an expansive and persistent graphically rendered world. Players form teams and guilds, undertake group missions, meet in banks and auction houses, chat, congregate in virtual cities and engage in different modes of play, which involve various forms of collaboration and competition. As Squire noted (2002), socially situated accounts of actual learning in games (as opposed to what they might, potentially, help people to learn) have been lacking, partly because the topic is so complex. How, indeed, should the ‘game’ be understood—is it limited to the rules, or the player’s interactions with these rules? Does it include other players, and all possible interactions, and extend to out-of-game related activities and associated materials such as fan forums? Such questions have methodological implications, and hint at the ambiguities that educators working with virtual worlds might face (Carr, Oliver & Burn, 2008). Work in this area is beginning to emerge, particularly in relation to the learning and mentoring that takes place within player ‘guilds’ and online clans (see Galarneau, 2005; Steinkuehler, 2005). However, it is interesting to note that the research emerging from a digital game studies perspective, including much of the work cited thus far, is rarely utilised by educators researching the pedagogic potentials of virtual worlds such as Second Life. This study is informed by and attempts to speak to both of these communities. Methodology The purpose of this study was to explore how people learn in such virtual worlds in general. It was decided that focusing on a MMORPG such as World of Warcraft would be practical and offer a rich opportunity to study learning. MMORPGs are games; they have rules and goals, and particular forms of progression. 
Expertise in a virtual world such as Second Life is more dispersed, because the range of activities is that much greater (encompassing building, playing, scripting, creating machinima or socialising, for instance). Each of these activities would involve particular forms of expertise. The ‘curriculum’ proposed by World of Warcraft is more specified. It was important to approach learning practices in this game without divorcing such phenomena from the real-world contexts in which play takes place. In order to study players’ accounts of learning and the links between their play and other aspects of their social lives, we sought participants who would interact with each other both in the context of the game and outside of it. To this end, we recruited couples that play together in the virtual environment of World of Warcraft, while sharing real space. This decision was taken to manage the potential complexity of studying social settings: couples were the simplest stable social formation that we could identify who would interact both in the context of the game and outside of this too. Interviews were conducted with five couples. These were theoretically sampled, to maximise diversity in players’ accounts (as with any theoretically sampled study, this means that no claims can be made about prevalence or typicality). Players were recruited through online guilds and real-world social networks. The first two sets of participants were sampled for convenience (two heterosexual couples); the rest were invited to participate in order to broaden this sample (one couple was chosen because they shared a single account, one where a partner had chosen to stop playing and one mother–son pairing). All participants were adults, and conventional ethical procedures to ensure informed consent were followed, as specified in the British Educational Research Association guidelines. The couples were interviewed in the game world at a location of their choosing. The interviews, which were semi-structured, were chat-logged and each lasted 60–90 minutes. The resulting transcripts were split into self-contained units (typically a single statement, or a question and answer, or a short exchange) and each was categorised thematically. The initial categories were then jointly reviewed in order to consolidate and refine them, cross-checking them with the source transcripts to ensure their relevance and coherence. At this stage, the categories included references to topics such as who started first, self-assessments of competence, forms of help, guilds, affect, domestic space and assets, ‘alts’ (multiple characters) and so on. These were then reviewed to develop a single category that might provide an overview or explanation of the process. It should be noted that although this approach was informed by ‘grounded theory’ processes as described in Glaser and Strauss (1967), it does not share their positivistic stance on the status of the model that has been developed. Instead, it accords more closely with the position taken by Charmaz (2000), who recognises the central role of the researcher in shaping the data collected and making sense of it. What is produced therefore is seen as a socially constructed model, based on personal narratives, rather than an objective account of an independent reality. 
Reviewing the categories that emerged in this case led to ‘management of resources’ being selected as a general marker of learning. As players moved towards greater competence, they identified and leveraged an increasingly complex array of in-game resources, while also negotiating real-world resources and demands. To consider this framework in greater detail, ‘management of resources’ was subdivided into three categories: ludic (concerning the skills, knowledge and practices of game play), social and material (concerning physical resources such as the embodied setting for play) (see Carr & Oliver, 2008). Using this explanation of learning, the transcripts were re-reviewed in order to", "title": "" }, { "docid": "neg:1840018_10", "text": "In this position paper, we initiate a systematic treatment of reaching consensus in a permissionless network. We prove several simple but hopefully insightful lower bounds that demonstrate exactly why reaching consensus in a permission-less setting is fundamentally more difficult than the classical, permissioned setting. We then present a simplified proof of Nakamoto's blockchain which we recommend for pedagogical purposes. Finally, we survey recent results including how to avoid well-known painpoints in permissionless consensus, and how to apply core ideas behind blockchains to solve consensus in the classical, permissioned setting and meanwhile achieve new properties that are not attained by classical approaches.", "title": "" }, { "docid": "neg:1840018_11", "text": "The abstraction of a shared memory is of growing importance in distributed computing systems. Traditional memory consistency ensures that all processes agree on a common order of all operations on memory. Unfortunately, providing these guarantees entails access latencies that prevent scaling to large systems. This paper weakens such guarantees by definingcausal memory, an abstraction that ensures that processes in a system agree on the relative ordering of operations that arecausally related. Because causal memory isweakly consistent, it admits more executions, and hence more concurrency, than either atomic or sequentially consistent memories. This paper provides a formal definition of causal memory and gives an implementation for message-passing systems. In addition, it describes a practical class of programs that, if developed for a strongly consistent memory, run correctly with causal memory.", "title": "" }, { "docid": "neg:1840018_12", "text": "Single-image haze-removal is challenging due to limited information contained in one single image. Previous solutions largely rely on handcrafted priors to compensate for this deficiency. Recent convolutional neural network (CNN) models have been used to learn haze-related priors but they ultimately work as advanced image filters. In this paper we propose a novel semantic approach towards single image haze removal. Unlike existing methods, we infer color priors based on extracted semantic features. We argue that semantic context can be exploited to give informative cues for (a) learning color prior on clean image and (b) estimating ambient illumination. This design allowed our model to recover clean images from challenging cases with strong ambiguity, e.g. saturated illumination color and sky regions in image. 
In experiments, we validate our approach upon synthetic and real hazy images, where our method showed superior performance over state-of-the-art approaches, suggesting semantic information facilitates the haze removal task.", "title": "" }, { "docid": "neg:1840018_13", "text": "The problem of secure data processing by means of a neural network (NN) is addressed. Secure processing refers to the possibility that the NN owner does not get any knowledge about the processed data since they are provided to him in encrypted format. At the same time, the NN itself is protected, given that its owner may not be willing to disclose the knowledge embedded within it. The considered level of protection ensures that the data provided to the network and the network weights and activation functions are kept secret. Particular attention is given to prevent any disclosure of information that could bring a malevolent user to get access to the NN secrets by properly inputting fake data to any point of the proposed protocol. With respect to previous works in this field, the interaction between the user and the NN owner is kept to a minimum with no resort to multiparty computation protocols.", "title": "" }, { "docid": "neg:1840018_14", "text": "A planar W-band monopulse antenna array is designed based on the substrate integrated waveguide (SIW) technology. The sum-difference comparator, 16-way divider and 32 × 32 slot array antenna are all integrated on a single dielectric substrate in the compact layout through the low-cost PCB process. Such a substrate integrated monopulse array is able to operate over 93 ~ 96 GHz with narrow-beam and high-gain. The maximal gain is measured to be 25.8 dBi, while the maximal null-depth is measured to be - 43.7 dB. This SIW monopulse antenna not only has advantages of low-cost, light, easy-fabrication, etc., but also has good performance validated by measurements. It presents an excellent candidate for W-band directional-finding systems.", "title": "" }, { "docid": "neg:1840018_15", "text": "In 2004, the US Center for Disease Control (CDC) published a paper showing that there is no link between the age at which a child is vaccinated with MMR and the vaccinated children's risk of a subsequent diagnosis of autism. One of the authors, William Thompson, has now revealed that statistically significant information was deliberately omitted from the paper. Thompson first told Dr S Hooker, a researcher on autism, about the manipulation of the data. Hooker analysed the raw data from the CDC study afresh. He confirmed that the risk of autism among African American children vaccinated before the age of 2 years was 340% that of those vaccinated later.", "title": "" }, { "docid": "neg:1840018_16", "text": "The short length of the estrous cycle of rats makes them ideal for investigation of changes occurring during the reproductive cycle. The estrous cycle lasts four days and is characterized as: proestrus, estrus, metestrus and diestrus, which may be determined according to the cell types observed in the vaginal smear. Since the collection of vaginal secretion and the use of stained material generally takes some time, the aim of the present work was to provide researchers with some helpful considerations about the determination of the rat estrous cycle phases in a fast and practical way. Vaginal secretion of thirty female rats was collected every morning during a month and unstained native material was observed using the microscope without the aid of the condenser lens. 
Using the 10 x objective lens, it was easier to analyze the proportion among the three cellular types, which are present in the vaginal smear. Using the 40 x objective lens, it is easier to recognize each one of these cellular types. The collection of vaginal lavage from the animals, the observation of the material, in the microscope, and the determination of the estrous cycle phase of all the thirty female rats took 15-20 minutes.", "title": "" }, { "docid": "neg:1840018_17", "text": "Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.", "title": "" }, { "docid": "neg:1840018_18", "text": "We investigate the task of 2D articulated human pose estimation in unconstrained still images. This is extremely challenging because of variation in pose, anatomy, clothing, and imaging conditions. Current methods use simple models of body part appearance and plausible configurations due to limitations of available training data and constraints on computational expense. We show that such models severely limit accuracy. Building on the successful pictorial structure model (PSM) we propose richer models of both appearance and pose, using state-of-the-art discriminative classifiers without introducing unacceptable computational expense. We introduce a new annotated database of challenging consumer images, an order of magnitude larger than currently available datasets, and demonstrate over 50% relative improvement in pose estimation accuracy over a state-of-the-art method.", "title": "" } ]
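Aside: the HTM spatial pooler passage quoted above (docid neg:1840018_17) describes an online algorithm that converts binary inputs into sparse distributed representations through competitive Hebbian learning and homeostatic excitability control. The sketch below is only a rough, self-contained illustration of that idea and not the reference HTM implementation; the column count, the k-winners-take-all competition, the permanence thresholds, the boosting rule and every numeric parameter are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_COLS, K = 64, 128, 8              # input bits, columns, winners per step (sparsity)
PERM_INC, PERM_DEC, CONNECTED = 0.05, 0.02, 0.5

# Every column keeps a permanence value towards every input bit.
permanences = rng.uniform(0.3, 0.7, size=(N_COLS, N_IN))
boost = np.ones(N_COLS)                   # homeostatic boosting factors
active_duty = np.zeros(N_COLS)            # running estimate of how often each column wins

def spatial_pooler_step(x, learn=True):
    """One pooling step: x is a binary input vector; returns the active column indices."""
    connected = (permanences >= CONNECTED).astype(float)
    overlap = boost * (connected @ x)     # boosted overlap score per column
    active = np.argsort(overlap)[-K:]     # k-winners-take-all competition
    if learn:
        # Hebbian-style update: winners strengthen synapses to active input bits
        # and weaken synapses to inactive bits.
        permanences[active] += np.where(x > 0, PERM_INC, -PERM_DEC)
        np.clip(permanences, 0.0, 1.0, out=permanences)
        # Homeostatic control: columns that win less often than the target
        # duty cycle (K / N_COLS) receive a larger boost.
        active_duty[:] = 0.99 * active_duty
        active_duty[active] += 0.01
        boost[:] = np.exp(-10.0 * (active_duty - K / N_COLS))
    return active

# Feed a noisy stream of two underlying patterns and let the SDRs stabilise.
patterns = (rng.random((2, N_IN)) < 0.3).astype(float)
for t in range(200):
    flip = rng.random(N_IN) < 0.05        # ~5% random bit flips as noise
    x = np.abs(patterns[t % 2] - flip.astype(float))
    spatial_pooler_step(x)
print("active columns for pattern 0:", sorted(spatial_pooler_step(patterns[0], learn=False)))
```

In this toy setup the same small set of columns ends up winning for each input pattern despite the noise, which is the sparse, stable code the passage refers to; topology, potential pools and synapse trimming from the full system are omitted here.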
1840019
Building Neuromorphic Circuits with Memristive Devices
[ { "docid": "pos:1840019_0", "text": "Hybrid reconfigurable logic circuits were fabricated by integrating memristor-based crossbars onto a foundry-built CMOS (complementary metal-oxide-semiconductor) platform using nanoimprint lithography, as well as materials and processes that were compatible with the CMOS. Titanium dioxide thin-film memristors served as the configuration bits and switches in a data routing network and were connected to gate-level CMOS components that acted as logic elements, in a manner similar to a field programmable gate array. We analyzed the chips using a purpose-built testing system, and demonstrated the ability to configure individual devices, use them to wire up various logic gates and a flip-flop, and then reconfigure devices.", "title": "" } ]
[ { "docid": "neg:1840019_0", "text": "State-of-the-art 3D shape classification and retrieval algorithms, hereinafter referred to as shape analysis, are often based on comparing signatures or descriptors that capture the main geometric and topological properties of 3D objects. None of the existing descriptors, however, achieve best performance on all shape classes. In this article, we explore, for the first time, the usage of covariance matrices of descriptors, instead of the descriptors themselves, in 3D shape analysis. Unlike histogram -based techniques, covariance-based 3D shape analysis enables the fusion and encoding of different types of features and modalities into a compact representation. Covariance matrices, however, are elements of the non-linear manifold of symmetric positive definite (SPD) matrices and thus \\BBL2 metrics are not suitable for their comparison and clustering. In this article, we study geodesic distances on the Riemannian manifold of SPD matrices and use them as metrics for 3D shape matching and recognition. We then: (1) introduce the concepts of bag of covariance (BoC) matrices and spatially-sensitive BoC as a generalization to the Riemannian manifold of SPD matrices of the traditional bag of features framework, and (2) generalize the standard kernel methods for supervised classification of 3D shapes to the space of covariance matrices. We evaluate the performance of the proposed BoC matrices framework and covariance -based kernel methods and demonstrate their superiority compared to their descriptor-based counterparts in various 3D shape matching, retrieval, and classification setups.", "title": "" }, { "docid": "neg:1840019_1", "text": "Article history: Received 8 July 2016 Received in revised form 15 November 2016 Accepted 29 December 2016 Available online 25 January 2017 As part of the post-2015 United Nations sustainable development agenda, the world has its first urban sustainable development goal (USDG) “to make cities and human settlements inclusive, safe, resilient and sustainable”. This paper provides an overview of the USDG and explores some of the difficulties around using this goal as a tool for improving cities. We argue that challenges emerge around selecting the indicators in the first place and also around the practical use of these indicators once selected. Three main practical problems of indicator use include 1) the poor availability of standardized, open and comparable data 2) the lack of strong data collection institutions at the city scale to support monitoring for the USDG and 3) “localization” the uptake and context specific application of the goal by diverse actors in widely different cities. Adding to the complexity, the USDG conversation is taking place at the same time as the proliferation of a bewildering array of indicator systems at different scales. Prompted by technological change, debates on the “data revolution” and “smart city” also have direct bearing on the USDG. We argue that despite these many complexities and challenges, the USDG framework has the potential to encourage and guide needed reforms in our cities but only if anchored in local institutions and initiatives informed by open, inclusive and contextually sensitive data collection and monitoring. © 2017 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840019_2", "text": "We perform sensitivity analyses on a mathematical model of malaria transmission to determine the relative importance of model parameters to disease transmission and prevalence. 
We compile two sets of baseline parameter values: one for areas of high transmission and one for low transmission. We compute sensitivity indices of the reproductive number (which measures initial disease transmission) and the endemic equilibrium point (which measures disease prevalence) to the parameters at the baseline values. We find that in areas of low transmission, the reproductive number and the equilibrium proportion of infectious humans are most sensitive to the mosquito biting rate. In areas of high transmission, the reproductive number is again most sensitive to the mosquito biting rate, but the equilibrium proportion of infectious humans is most sensitive to the human recovery rate. This suggests strategies that target the mosquito biting rate (such as the use of insecticide-treated bed nets and indoor residual spraying) and those that target the human recovery rate (such as the prompt diagnosis and treatment of infectious individuals) can be successful in controlling malaria.", "title": "" }, { "docid": "neg:1840019_3", "text": "In 2005 the Commission for Africa noted that ‘Tackling HIV and AIDS requires a holistic response that recognises the wider cultural and social context’ (p. 197). Cultural factors that range from beliefs and values regarding courtship, sexual networking, contraceptive use, perspectives on sexual orientation, explanatory models for disease and misfortune and norms for gender and marital relations have all been shown to be factors in the various ways that HIV/AIDS has impacted on African societies (UNESCO, 2002). Increasingly the centrality of culture is being recognised as important to HIV/AIDS prevention, treatment, care and support. With culture having both positive and negative influences on health behaviour, international donors and policy makers are beginning to acknowledge the need for cultural approaches to the AIDS crisis (Nguyen et al., 2008). The development of cultural approaches to HIV/AIDS presents two major challenges for South Africa. First, the multi-cultural nature of the country means that there is no single sociocultural context in which the HIV/AIDS epidemic is occurring. South Africa is home to a rich tapestry of racial, ethnic, religious and linguistic groups. As a result of colonial history and more recent migration, indigenous Africans have come to live alongside large populations of people with European, Asian and mixed descent, all of whom could lay claim to distinctive cultural practices and spiritual beliefs. Whilst all South Africans are affected by the spread of HIV, the burden of the disease lies with the majority black African population (see Shisana et al., 2005; UNAIDS, 2007). Therefore, this chapter will focus on some sociocultural aspects of life within the majority black African population of South Africa, most of whom speak languages that are classified within the broad linguistic grouping of Bantu languages. This large family of linguistically related ethnic groups span across southern Africa and comprise the bulk of the African people who reside in South Africa today (Hammond-Tooke, 1974). A second challenge involves the legitimacy of the culture concept. Whilst race was used in apartheid as the rationale for discrimination, notions of culture and cultural differences were legitimised by segregating the country into various ‘homelands’. 
Within the homelands, the majority black South Africans could presumably", "title": "" }, { "docid": "neg:1840019_4", "text": "Virtually everyone would agree that a primary, yet insufficiently met, goal of schooling is to enable students to think critically. In layperson’s terms, critical thinking consists of seeing both sides of an issue, being open to new evidence that disconfirms your ideas, reasoning dispassionately, demanding that claims be backed by evidence, deducing and inferring conclusions from available facts, solving problems, and so forth. Then too, there are specific types of critical thinking that are characteristic of different subject matter: That’s what we mean when we refer to “thinking like a scientist” or “thinking like a historian.” This proper and commonsensical goal has very often been translated into calls to teach “critical thinking skills” and “higher-order thinking skills”—and into generic calls for teaching students to make better judgments, reason more logically, and so forth. In a recent survey of human resource officials and in testimony delivered just a few months ago before the Senate Finance Committee, business leaders have repeatedly exhorted schools to do a better job of teaching students to think critically. And they are not alone. Organizations and initiatives involved in education reform, such as the National Center on Education and the Economy, the American Diploma Project, and the Aspen Institute, have pointed out the need for students to think and/or reason critically. The College Board recently revamped the SAT to better assess students’ critical thinking. And ACT, Inc. offers a test of critical thinking for college students. These calls are not new. In 1983, A Nation At Risk, a report by the National Commission on Excellence in Education, found that many 17-year-olds did not possess the “‘higher-order’ intellectual skills” this country needed. It claimed that nearly 40 percent could not draw inferences from written material and only one-fifth could write a persuasive essay. Following the release of A Nation At Risk, programs designed to teach students to think critically across the curriculum became extremely popular. By 1990, most states had initiatives designed to encourage educators to teach critical thinking, and one of the most widely used programs, Tactics for Thinking, sold 70,000 teacher guides. But, for reasons I’ll explain, the programs were not very effective—and today we still lament students’ lack of critical thinking. After more than 20 years of lamentation, exhortation, and little improvement, maybe it’s time to ask a fundamental question: Can critical thinking actually be taught? Decades of cognitive research point to a disappointing answer: not really. People who have sought to teach critical thinking have assumed that it is a skill, like riding a bicycle, and that, like other skills, once you learn it, you can apply it in any situation. Research from cognitive science shows that thinking is not that sort of skill. The processes of thinking are intertwined with the content of thought (that is, domain knowledge). Thus, if you remind a student to “look at an issue from multiple perspectives” often enough, he will learn that he ought to do so, but if he doesn’t know much about Critical Thinking", "title": "" }, { "docid": "neg:1840019_5", "text": "Reliable detection and avoidance of obstacles is a crucial prerequisite for autonomously navigating robots as both guarantee safety and mobility.
To ensure safe mobility, the obstacle detection needs to run online, thereby taking limited resources of autonomous systems into account. At the same time, robust obstacle detection is highly important. Here, a too conservative approach might restrict the mobility of the robot, while a more reckless one might harm the robot or the environment it is operating in. In this paper, we present a terrain-adaptive approach to obstacle detection that relies on 3D-Lidar data and combines computationally cheap and fast geometric features, like step height and steepness, which are updated with the frequency of the lidar sensor, with semantic terrain information, which is updated with at lower frequency. We provide experiments in which we evaluate our approach on a real robot on an autonomous run over several kilometers containing different terrain types. The experiments demonstrate that our approach is suitable for autonomous systems that have to navigate reliable on different terrain types including concrete, dirt roads and grass.", "title": "" }, { "docid": "neg:1840019_6", "text": "Personality plays an important role in the way people manage the images they convey in self-presentations and employment interviews, trying to affect the other\"s first impressions and increase effectiveness. This paper addresses the automatically detection of the Big Five personality traits from short (30-120 seconds) self-presentations, by investigating the effectiveness of 29 simple acoustic and visual non-verbal features. Our results show that Conscientiousness and Emotional Stability/Neuroticism are the best recognizable traits. The lower accuracy levels for Extraversion and Agreeableness are explained through the interaction between situational characteristics and the differential activation of the behavioral dispositions underlying those traits.", "title": "" }, { "docid": "neg:1840019_7", "text": "With the rapid growth of social media, massive misinformation is also spreading widely on social media, e.g., Weibo and Twitter, and brings negative effects to human life. Today, automatic misinformation identification has drawn attention from academic and industrial communities. Whereas an event on social media usually consists of multiple microblogs, current methods are mainly constructed based on global statistical features. However, information on social media is full of noise, which should be alleviated. Moreover, most of the microblogs about an event have little contribution to the identification of misinformation, where useful information can be easily overwhelmed by useless information. Thus, it is important to mine significant microblogs for constructing a reliable misinformation identification method. In this article, we propose an attention-based approach for identification of misinformation (AIM). Based on the attention mechanism, AIM can select microblogs with the largest attention values for misinformation identification. The attention mechanism in AIM contains two parts: content attention and dynamic attention. Content attention is the calculated-based textual features of each microblog. Dynamic attention is related to the time interval between the posting time of a microblog and the beginning of the event. 
To evaluate AIM, we conduct a series of experiments on the Weibo and Twitter datasets, and the experimental results show that the proposed AIM model outperforms the state-of-the-art methods.", "title": "" }, { "docid": "neg:1840019_8", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "neg:1840019_9", "text": "Sudoku is a very popular puzzle which consists of placing several numbers in a squared grid according to some simple rules. In this paper, we present a Sudoku solving technique named Boolean Sudoku Solver (BSS) using only simple Boolean algebras. Use of Boolean algebra increases the execution speed of the Sudoku solver. Simulation results show that our method returns the solution of the Sudoku in minimum number of iterations and outperforms the existing popular approaches.", "title": "" }, { "docid": "neg:1840019_10", "text": "This study examines the accuracy of 54 online dating photographs posted by heterosexual daters. We report data on (a1) online daters’ self-reported accuracy, (b) independent judges’ perceptions of accuracy, and (c) inconsistencies in the profile photograph identified by trained coders. While online daters rated their photos as relatively accurate, independent judges rated approximately 1/3 of the photographs as not accurate. Female photographs were judged as less accurate than male photographs, and were more likely to be older, to be retouched or taken by a professional photographer, and to contain inconsistencies, including changes in hair style and skin quality. The findings are discussed in terms of the tensions experienced by online daters to (a) enhance their physical attractiveness and (b) present a photograph that would not be judged deceptive in subsequent face-to-face meetings. The paper extends the theoretical concept of selective self-presentation to online photographs, and discusses issues of self-deception and social desirability bias.", "title": "" }, { "docid": "neg:1840019_11", "text": "Program translation is an important tool to migrate legacy code in one language into an ecosystem built in a different language. In this work, we are the first to employ deep neural networks toward tackling this problem. We observe that program translation is a modular procedure, in which a sub-tree of the source tree is translated into the corresponding target sub-tree at each step. To capture this intuition, we design a tree-to-tree neural network to translate a source tree into a target one. Meanwhile, we develop an attention mechanism for the tree-to-tree model, so that when the decoder expands one non-terminal in the target tree, the attention mechanism locates the corresponding sub-tree in the source tree to guide the expansion of the decoder. We evaluate the program translation capability of our tree-to-tree model against several state-of-the-art approaches. Compared against other neural translation models, we observe that our approach is consistently better than the baselines with a margin of up to 15 points. 
Further, our approach can improve the previous state-of-the-art program translation approaches by a margin of 20 points on the translation of real-world projects.", "title": "" }, { "docid": "neg:1840019_12", "text": "Model repositories play a central role in the model driven development of complex software-intensive systems by offering means to persist and manipulate models obtained from heterogeneous languages and tools. Complex models can be assembled by interconnecting model fragments by hard links, i.e., regular references, where the target end points to external resources using storage-specific identifiers. This approach, in certain application scenarios, may prove to be a too rigid and error prone way of interlinking models. As a flexible alternative, we propose to combine derived features with advanced incremental model queries as means for soft interlinking of model elements residing in different model resources. These soft links can be calculated on-demand with graceful handling for temporarily unresolved references. In the background, the links are maintained efficiently and flexibly by using incremental model query evaluation. The approach is applicable to modeling environments or even property graphs for representing query results as first-class relations, which also allows the chaining of soft links that is useful for modular applications. The approach is evaluated using the Eclipse Modeling Framework (EMF) and EMF-IncQuery in two complex industrial case studies. The first case study is motivated by a knowledge management project from the financial domain, involving a complex interlinked structure of concept and business process models. The second case study is set in the avionics domain with strict traceability requirements enforced by certification standards (DO-178b). It consists of multiple domain models describing the allocation scenario of software functions to hardware components.", "title": "" }, { "docid": "neg:1840019_13", "text": "In this paper, we study the stochastic gradient descent method in analyzing nonconvex statistical optimization problems from a diffusion approximation point of view. Using the theory of large deviation of random dynamical system, we prove in the small stepsize regime and the presence of omnidirectional noise the following: starting from a local minimizer (resp. saddle point) the SGD iteration escapes in a number of iteration that is exponentially (resp. linearly) dependent on the inverse stepsize. We take the deep neural network as an example to study this phenomenon. Based on a new analysis of the mixing rate of multidimensional Ornstein-Uhlenbeck processes, our theory substantiate a very recent empirical results by Keskar et al. (2016), suggesting that large batch sizes in training deep learning for synchronous optimization leads to poor generalization error.", "title": "" }, { "docid": "neg:1840019_14", "text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. 
Books and internet are the recommended media to help you improving your quality and performance.", "title": "" }, { "docid": "neg:1840019_15", "text": "Developing manufacturing simulation models usually requires experts with knowledge of multiple areas including manufacturing, modeling, and simulation software. The expertise requirements increase for virtual factory models that include representations of manufacturing at multiple resolution levels. This paper reports on an initial effort to automatically generate virtual factory models using manufacturing configuration data in standard formats as the primary input. The execution of the virtual factory generates time series data in standard formats mimicking a real factory. Steps are described for auto-generation of model components in a software environment primarily oriented for model development via a graphic user interface. Advantages and limitations of the approach and the software environment used are discussed. The paper concludes with a discussion of challenges in verification and validation of the virtual factory prototype model with its multiple hierarchical models and future directions.", "title": "" }, { "docid": "neg:1840019_16", "text": "Ebola virus disease (EVD) distinguishes its feature as high infectivity and mortality. Thus, it is urgent for governments to draw up emergency plans against Ebola. However, it is hard to predict the possible epidemic situations in practice. Luckily, in recent years, computational experiments based on artificial society appeared, providing a new approach to study the propagation of EVD and analyze the corresponding interventions. Therefore, the rationality of artificial society is the key to the accuracy and reliability of experiment results. Individuals' behaviors along with travel mode directly affect the propagation among individuals. Firstly, artificial Beijing is reconstructed based on geodemographics and machine learning is involved to optimize individuals' behaviors. Meanwhile, Ebola course model and propagation model are built, according to the parameters in West Africa. Subsequently, propagation mechanism of EVD is analyzed, epidemic scenario is predicted, and corresponding interventions are presented. Finally, by simulating the emergency responses of Chinese government, the conclusion is finally drawn that Ebola is impossible to outbreak in large scale in the city of Beijing.", "title": "" }, { "docid": "neg:1840019_17", "text": "Nanorobotics is the technology of creating machines or robots of the size of few hundred nanometres and below consisting of components of nanoscale or molecular size. There is an all around development in nanotechnology towards realization of nanorobots in the last two decades. In the present work, the compilation of advancement in nanotechnology in context to nanorobots is done. The challenges and issues in movement of a nanorobot and innovations present in nature to overcome the difficulties in moving at nano-size regimes are discussed. The efficiency aspect in context to artificial nanorobot is also presented.", "title": "" }, { "docid": "neg:1840019_18", "text": "There are plenty of classification methods that perform well when training and testing data are drawn from the same distribution. However, in real applications, this condition may be violated, which causes degradation of classification accuracy. Domain adaptation is an effective approach to address this problem. 
In this paper, we propose a general domain adaptation framework from the perspective of prediction reweighting, from which a novel approach is derived. Different from the major domain adaptation methods, our idea is to reweight predictions of the training classifier on testing data according to their signed distance to the domain separator, which is a classifier that distinguishes training data (from source domain) and testing data (from target domain). We then propagate the labels of target instances with larger weights to ones with smaller weights by introducing a manifold regularization method. It can be proved that our reweighting scheme effectively brings the source and target domains closer to each other in an appropriate sense, such that classification in target domain becomes easier. The proposed method can be implemented efficiently by a simple two-stage algorithm, and the target classifier has a closed-form solution. The effectiveness of our approach is verified by the experiments on artificial datasets and two standard benchmarks, a visual object recognition task and a cross-domain sentiment analysis of text. Experimental results demonstrate that our method is competitive with the state-of-the-art domain adaptation algorithms.", "title": "" } ]
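Aside: the passage quoted directly above (docid neg:1840019_18, ending here) describes domain adaptation by prediction reweighting: train a classifier on labelled source data, train a separate domain separator that distinguishes source from target instances, weight the source classifier's predictions on target points by their signed distance to that separator, and then propagate labels from high-weight to low-weight target points. The sketch below is only an illustrative approximation of that pipeline on synthetic data; the sigmoid weighting, the 0.7 quantile cut-off, and the use of scikit-learn's LabelSpreading as a stand-in for the paper's manifold regularization step are assumptions of this example, not details taken from the passage.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
n = 300

# Toy covariate shift: same labelling rule, shifted target inputs.
Xs = rng.normal(0.0, 1.0, (n, 2))
ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(int)
Xt = rng.normal(0.5, 1.0, (n, 2))
yt = (Xt[:, 0] + Xt[:, 1] > 0).astype(int)   # held out, used only for evaluation

# 1) Training classifier fitted on labelled source data.
clf = LogisticRegression().fit(Xs, ys)

# 2) Domain separator: distinguishes source (label 0) from target (label 1) instances.
dom = LogisticRegression().fit(np.vstack([Xs, Xt]), np.r_[np.zeros(n), np.ones(n)])

# 3) Reweight target predictions by signed distance to the separator:
#    "source-like" targets (negative distance) get weights near 1.
signed_dist = dom.decision_function(Xt)
weights = 1.0 / (1.0 + np.exp(signed_dist))   # hypothetical monotone mapping

# 4) Keep the source classifier's predictions only for high-weight targets and
#    spread those labels to the remaining targets (stand-in for the manifold
#    regularization step described in the passage).
pseudo = clf.predict(Xt)
labels = np.where(weights > np.quantile(weights, 0.7), pseudo, -1)  # -1 marks "unlabelled"
prop = LabelSpreading(kernel="knn", n_neighbors=7).fit(Xt, labels)
print("target accuracy:", (prop.transduction_ == yt).mean())
```

The point of the weighting step is that target instances the separator cannot tell apart from source instances are exactly the ones on which the source classifier can be trusted, which is the intuition the passage states.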
1840020
How to Measure Motivation : A Guide for the Experimental Social Psychologist
[ { "docid": "pos:1840020_0", "text": "Six studies explore the role of goal shielding in self-regulation by examining how the activation of focal goals to which the individual is committed inhibits the accessibility of alternative goals. Consistent evidence was found for such goal shielding, and a number of its moderators were identified: Individuals' level of commitment to the focal goal, their degree of anxiety and depression, their need for cognitive closure, and differences in their goal-related tenacity. Moreover, inhibition of alternative goals was found to be more pronounced when they serve the same overarching purpose as the focal goal, but lessened when the alternative goals facilitate focal goal attainment. Finally, goal shielding was shown to have beneficial consequences for goal pursuit and attainment.", "title": "" } ]
[ { "docid": "neg:1840020_0", "text": "We study reinforcement learning of chat-bots with recurrent neural network architectures when the rewards are noisy and expensive to obtain. For instance, a chat-bot used in automated customer service support can be scored by quality assurance agents, but this process can be expensive, time consuming and noisy. Previous reinforcement learning work for natural language processing uses onpolicy updates and/or is designed for on-line learning settings. We demonstrate empirically that such strategies are not appropriate for this setting and develop an off-policy batch policy gradient method (BPG). We demonstrate the efficacy of our method via a series of synthetic experiments and an Amazon Mechanical Turk experiment on a restaurant recommendations dataset.", "title": "" }, { "docid": "neg:1840020_1", "text": "This paper presents some of the unique verification, validation, and certification challenges that must be addressed during the development of adaptive system software for use in safety-critical aerospace applications. The paper first discusses the challenges imposed by the current regulatory guidelines for aviation software. Next, a number of individual technologies being researched by NASA and others are discussed that focus on various aspects of the software challenges. These technologies include the formal methods of model checking, compositional verification, static analysis, program synthesis, and runtime analysis. Then the paper presents some validation challenges for adaptive control, including proving convergence over long durations, guaranteeing controller stability, using new tools to compute statistical error bounds, identifying problems in fault-tolerant software, and testing in the presence of adaptation. These specific challenges are presented in the context of a software validation effort in testing the Integrated Flight Control System (IFCS) neural control software at the Dryden Flight Research Center. Lastly, the challenges to develop technologies to help prevent aircraft system failures, detect and identify failures that do occur, and provide enhanced guidance and control capability to prevent and recover from vehicle loss of control are briefly cited in connection with ongoing work at the NASA Langley Research Center.", "title": "" }, { "docid": "neg:1840020_2", "text": "Exploration of ANNs for the economic purposes is described and empirically examined with the foreign exchange market data. For the experiments, panel data of the exchange rates (USD/EUR, JPN/USD, USD/ GBP) are examined and optimized to be used for time-series predictions with neural networks. In this stage the input selection, in which the processing steps to prepare the raw data to a suitable input for the models are investigated. The best neural network is found with the best forecasting abilities, based on a certain performance measure. A visual graphs on the experiments data set is presented after processing steps, to illustrate that particular results. The out-of-sample results are compared with training ones. & 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840020_3", "text": "In an Information technology world, the ability to effectively process massive datasets has become integral to a broad range of scientific and other academic disciplines. We are living in an era of data deluge and as a result, the term “Big Data” is appearing in many contexts. 
It ranges from meteorology, genomics, complex physics simulations, biological and environmental research, finance and business to healthcare. Big Data refers to data streams of higher velocity and higher variety. The infrastructure required to support the acquisition of Big Data must deliver low, predictable latency in both capturing data and in executing short, simple queries. To be able to handle very high transaction volumes, often in a distributed environment; and support flexible, dynamic data structures. Data processing is considerably more challenging than simply locating, identifying, understanding, and citing data. For effective large-scale analysis all of this has to happen in a completely automated manner. This requires differences in data structure and semantics to be expressed in forms that are computer understandable, and then “robotically” resolvable. There is a strong body of work in data integration, mapping and transformations. However, considerable additional work is required to achieve automated error-free difference resolution. This paper proposes a framework on recent research for the Data Mining using Big Data.", "title": "" }, { "docid": "neg:1840020_4", "text": "One major goal of vision is to infer physical models of objects, surfaces, and their layout from sensors. In this paper, we aim to interpret indoor scenes from one RGBD image. Our representation encodes the layout of walls, which must conform to a Manhattan structure but is otherwise flexible, and the layout and extent of objects, modeled with CAD-like 3D shapes. We represent both the visible and occluded portions of the scene, producing a complete 3D parse. Such a scene interpretation is useful for robotics and visual reasoning, but difficult to produce due to the wellknown challenge of segmentation, the high degree of occlusion, and the diversity of objects in indoor scene. We take a data-driven approach, generating sets of potential object regions, matching to regions in training images, and transferring and aligning associated 3D models while encouraging fit to observations and overall consistency. We demonstrate encouraging results on the NYU v2 dataset and highlight a variety of interesting directions for future work.", "title": "" }, { "docid": "neg:1840020_5", "text": "An electrical-balance duplexer uses series connected step-down transformers to enhance linearity and power handling capability by reducing the voltage swing across nonlinear components. Wideband, dual-notch Tx-to-Rx isolation is demonstrated experimentally with a planar inverted-F antenna. The 0.18μm CMOS prototype achieves >50dB isolation for 220MHz aggregated bandwidth or >40dB dual-notch isolation for 160MHz bandwidth, +49dBm Tx-path IIP3 and -48dBc ACLR1 for +27dBm at the antenna.", "title": "" }, { "docid": "neg:1840020_6", "text": "● Volume of pages makes efficient WWW navigation difficult ● Aim: To analyse users' navigation history to generate tools that increase navigational efficiency – ie. Predictive server prefetching ● Provides a mathematical foundation to several concepts", "title": "" }, { "docid": "neg:1840020_7", "text": "This paper considers the scenario that multiple data owners wish to apply a machine learning method over the combined dataset of all owners to obtain the best possible learning output but do not want to share the local datasets owing to privacy concerns. 
We design systems for the scenario that the stochastic gradient descent (SGD) algorithm is used as the machine learning method because SGD (or its variants) is at the heart of recent deep learning techniques over neural networks. Our systems differ from existing systems in the following features: (1) any activation function can be used, meaning that no privacy-preserving-friendly approximation is required; (2) gradients computed by SGD are not shared but the weight parameters are shared instead; and (3) robustness against colluding parties even in the extreme case that only one honest party exists. We prove that our systems, while privacy-preserving, achieve the same learning accuracy as SGD and hence retain the merit of deep learning with respect to accuracy. Finally, we conduct several experiments using benchmark datasets, and show that our systems outperform previous systems in terms of learning accuracies. keywords: privacy preservation, stochastic gradient descent, distributed trainers, neural networks.", "title": "" }, { "docid": "neg:1840020_8", "text": "Artefact evaluation is regarded as being crucial for Design Science Research (DSR) in order to rigorously prove an artefact’s relevance for practice. The availability of guidelines for structuring DSR processes notwithstanding, the current body of knowledge provides only rudimentary means for a design researcher to select and justify appropriate artefact evaluation strategies in a given situation. This paper proposes patterns that could be used to articulate and justify artefact evaluation strategies within DSR projects. These patterns have been synthesised from prior DSR literature concerned with evaluation strategies. They distinguish both ex ante as well as ex post evaluations and reflect current DSR approaches and evaluation criteria.", "title": "" }, { "docid": "neg:1840020_9", "text": "It is common for cloud users to require clusters of inter-connected virtual machines (VMs) in a geo-distributed IaaS cloud, to run their services. Compared to isolated VMs, key challenges on dynamic virtual cluster (VC) provisioning (computation + communication resources) lie in two folds: (1) optimal placement of VCs and inter-VM traffic routing involve NP-hard problems, which are non-trivial to solve offline, not to mention if an online efficient algorithm is sought; (2) an efficient pricing mechanism is missing, which charges a market-driven price for each VC as a whole upon request, while maximizing system efficiency or provider revenue over the entire span. This paper proposes efficient online auction mechanisms to address the above challenges. We first design SWMOA, a novel online algorithm for dynamic VC provisioning and pricing, achieving truthfulness, individual rationality, computation efficiency, and $(1+2\log \mu)$-competitiveness in social welfare, where $\mu$ is related to the problem size.
Next, applying a randomized reduction technique, we convert the social welfare maximizing auction into a revenue maximizing online auction, PRMOA, achieving $O(\log \mu)$-competitiveness in provider revenue, as well as truthfulness, individual rationality and computation efficiency. We investigate auction design in different cases of resource cost functions in the system. We validate the efficacy of the mechanisms through solid theoretical analysis and trace-driven simulations.", "title": "" }, { "docid": "neg:1840020_10", "text": "Due to the increasingly aging population, there is a rising demand for assistive living technologies for the elderly to ensure their health and well-being. The elderly are mostly chronic patients who require frequent check-ups of multiple vital signs, some of which (e.g., blood pressure and blood glucose) vary greatly according to the daily activities that the elderly are involved in. Therefore, the development of novel wearable intelligent systems to effectively monitor the vital signs continuously over a 24 hour period is in some cases crucial for understanding the progression of chronic symptoms in the elderly. In this paper, recent development of Wearable Intelligent Systems for e-Health (WISEs) is reviewed, including breakthrough technologies and technical challenges that remain to be solved. A novel application of wearable technologies for transient cardiovascular monitoring during water drinking is also reported. In particular, our latest results found that heart rate increased by 9 bpm (P < 0.001) and pulse transit time was reduced by 5 ms (P < 0.001), indicating a possible rise in blood pressure, during swallowing. In addition to monitoring physiological conditions during daily activities, it is anticipated that WISEs will have a number of other potentially viable applications, including the real-time risk prediction of sudden cardiovascular events and deaths. Category: Smart and intelligent computing", "title": "" }, { "docid": "neg:1840020_11", "text": "From the automated text processing point of view, natural language is very redundant in the sense that many different words share a common or similar meaning. For computer this can be hard to understand without some background knowledge. Latent Semantic Indexing (LSI) is a technique that helps in extracting some of this background knowledge from corpus of text documents. This can be also viewed as extraction of hidden semantic concepts from text documents. On the other hand visualization can be very helpful in data analysis, for instance, for finding main topics that appear in larger sets of documents. Extraction of main concepts from documents using techniques such as LSI, can make the results of visualizations more useful. For example, given a set of descriptions of European Research projects (6FP) one can find main areas that these projects cover including semantic web, e-learning, security, etc. In this paper we describe a method for visualization of document corpus based on LSI, the system implementing it and give results of using the system on several datasets.", "title": "" }, { "docid": "neg:1840020_12", "text": "Current cloud providers use fixed-price based mechanisms to allocate Virtual Machine (VM) instances to their users.
The fixed-price based mechanisms do not provide an efficient allocation of resources and do not maximize the revenue of the cloud providers. A better alternative would be to use combinatorial auction-based resource allocation mechanisms. In this PhD dissertation we will design, study and implement combinatorial auction-based mechanisms for efficient provisioning and allocation of VM instances in cloud computing environments. We present our preliminary results consisting of three combinatorial auction-based mechanisms for VM provisioning and allocation. We also present an efficient bidding algorithm that can be used by the cloud users to decide on how to bid for their requested bundles of VM instances.", "title": "" }, { "docid": "neg:1840020_13", "text": "The lack of assessment tools to analyze serious games and insufficient knowledge on their impact on players is a recurring critique in the field of game and media studies, education science and psychology. Although initial empirical studies on serious games usage deliver discussable results, numerous questions remain unacknowledged. In particular, questions regarding the quality of their formal conceptual design in relation to their purpose mostly stay uncharted. In the majority of cases the designers' good intentions justify incoherence and insufficiencies in their design. In addition, serious games are mainly assessed in terms of the quality of their content, not in terms of their intention-based design. This paper argues that analyzing a game's formal conceptual design, its elements, and their relation to each other based on the game's purpose is a constructive first step in assessing serious games. By outlining the background of the Serious Game Design Assessment Framework and exemplifying its use, a constructive structure to examine purpose-based games is introduced. To demonstrate how to assess the formal conceptual design of serious games we applied the SGDA Framework to the online games \"Sweatshop\" (2011) and \"ICED\" (2008).", "title": "" }, { "docid": "neg:1840020_14", "text": "This article describes the rationale, development, and validation of the Scale for Suicide Ideation (SSI), a 19-item clinical research instrument designed to quantify and assess suicidal intention. The scale was found to have high internal consistency and moderately high correlations with clinical ratings of suicidal risk and self-administered measures of self-harm. Furthermore, it was sensitive to changes in levels of depression and hopelessness over time. Its construct validity was supported by two studies by different investigators testing the relationship between hopelessness, depression, and suicidal ideation and by a study demonstrating a significant relationship between high level of suicidal ideation and \"dichotomous\" attitudes about life and related concepts on a semantic differential test. Factor analysis yielded three meaningful factors: active suicidal desire, specific plans for suicide, and passive suicidal desire.", "title": "" }, { "docid": "neg:1840020_15", "text": "The emerging ambient persuasive technology looks very promising for many areas of personal and ubiquitous computing. Persuasive applications aim at changing human attitudes or behavior through the power of software designs. 
This theory-creating article suggests the concept of a behavior change support system (BCSS), whether web-based, mobile, ubiquitous, or more traditional information system to be treated as the core of research into persuasion, influence, nudge, and coercion. This article provides a foundation for studying BCSSs, in which the key constructs are the O/C matrix and the PSD model. It will (1) introduce the archetypes of behavior change via BCSSs, (2) describe the design process for building persuasive BCSSs, and (3) exemplify research into BCSSs through the domain of health interventions. Recognizing the themes put forward in this article will help leverage the full potential of computing for producing behavioral changes.", "title": "" }, { "docid": "neg:1840020_16", "text": "A hand injury can have great impact on a person's daily life. However, the current manual evaluations of hand functions are imprecise and inconvenient. In this research, a data glove embedded with 6-axis inertial sensors is proposed. With the proposed angle calculating algorithm, accurate bending angles are measured to estimate the real-time movements of hands. This proposed system can provide physicians with an efficient tool to evaluate the recovery of patients and improve the quality of hand rehabilitation.", "title": "" }, { "docid": "neg:1840020_17", "text": "Annotating data is a common bottleneck in building text classifiers. This is particularly problematic in social media domains, where data drift requires frequent retraining to maintain high accuracy. In this paper, we propose and evaluate a text classification method for Twitter data whose only required human input is a single keyword per class. The algorithm proceeds by identifying exemplar Twitter accounts that are representative of each class by analyzing Twitter Lists (human-curated collections of related Twitter accounts). A classifier is then fit to the exemplar accounts and used to predict labels of new tweets and users. We develop domain adaptation methods to address the noise and selection bias inherent to this approach, which we find to be critical to classification accuracy. Across a diverse set of tasks (topic, gender, and political affiliation classification), we find that the resulting classifier is competitive with a fully supervised baseline, achieving superior accuracy on four of six datasets despite using no manually labeled data.", "title": "" }, { "docid": "neg:1840020_18", "text": "Mycosis fungoides (MF), a low-grade lymphoproliferative disorder, is the most common type of cutaneous T-cell lymphoma. Typically, neoplastic T cells localize to the skin and produce patches, plaques, tumours or erythroderma. Diagnosis of MF can be difficult due to highly variable presentations and the sometimes nonspecific nature of histological findings. Molecular biology has improved the diagnostic accuracy. Nevertheless, clinical experience is of substantial importance as MF can resemble a wide variety of skin diseases. We performed a literature review and found that MF can mimic >50 different clinical entities. We present a structured framework of clinical variations of classical, unusual and distinct forms of MF. 
Distinct subforms such as ichthyotic MF, adnexotropic (including syringotropic and folliculotropic) MF, MF with follicular mucinosis, granulomatous MF with granulomatous slack skin and papuloerythroderma of Ofuji are delineated in more detail.", "title": "" }, { "docid": "neg:1840020_19", "text": "The terminology Internet of Things (IoT) refers to a future where every day physical objects are connected by the Internet in one form or the other, but outside the traditional desktop realm. The successful emergence of the IoT vision, however, will require computing to extend past traditional scenarios involving portables and smart-phones to the connection of everyday physical objects and the integration of intelligence with the environment. Subsequently, this will lead to the development of new computing features and challenges. The main purpose of this paper, therefore, is to investigate the features, challenges, and weaknesses that will come about, as the IoT becomes reality with the connection of more and more physical objects. Specifically, the study seeks to assess emergent challenges due to denial of service attacks, eavesdropping, node capture in the IoT infrastructure, and physical security of the sensors. We conducted a literature review about IoT, their features, challenges, and vulnerabilities. The methodology paradigm used was qualitative in nature with an exploratory research design, while data was collected using the desk research method. We found that, in the distributed form of architecture in IoT, attackers could hijack unsecured network devices converting them into bots to attack third parties. Moreover, attackers could target communication channels and extract data from the information flow. Finally, the perceptual layer in distributed IoT architecture is also found to be vulnerable to node capture attacks, including physical capture, brute force attack, DDoS attacks, and node privacy leaks.", "title": "" } ]
1840021
Centering Theory in Spanish: Coding Manual
[ { "docid": "pos:1840021_0", "text": "Most existing anaphora resolution algorithms are designed to account only for anaphors with NP-antecedents. This paper describes an algorithm for the resolution of discourse deictic anaphors, which constitute a large percentage of anaphors in spoken dialogues. The success of the resolution is dependent on the classification of all pronouns and demonstratives into individual, discourse deictic and vague anaphora. Finally, the empirical results of the application of the algorithm to a corpus of spoken dialogues are presented.", "title": "" } ]
[ { "docid": "neg:1840021_0", "text": "High-resolution screen printing of pristine graphene is introduced for the rapid fabrication of conductive lines on flexible substrates. Well-defined silicon stencils and viscosity-controlled inks facilitate the preparation of high-quality graphene patterns as narrow as 40 μm. This strategy provides an efficient method to produce highly flexible graphene electrodes for printed electronics.", "title": "" }, { "docid": "neg:1840021_1", "text": "Smart route planning gathers increasing interest as cities become crowded and jammed. We present a system for individual trip planning that incorporates future traffic hazards in routing. Future traffic conditions are computed by a Spatio-Temporal Random Field based on a stream of sensor readings. In addition, our approach estimates traffic flow in areas with low sensor coverage using a Gaussian Process Regression. The conditioning of spatial regression on intermediate predictions of a discrete probabilistic graphical model allows to incorporate historical data, streamed online data and a rich dependency structure at the same time. We demonstrate the system and test model assumptions with a real-world use-case from Dublin city, Ireland.", "title": "" }, { "docid": "neg:1840021_2", "text": "In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the first of two conference papers describing the derivation of these algorithms, connection with the related literature, extensions of the original framework, and new empirical evidence. In particular, the present paper outlines the derivation of AMP from standard sum-product belief propagation, and its extension in several directions. We also discuss relations with formal calculations based on statistical mechanics methods.", "title": "" }, { "docid": "neg:1840021_3", "text": "Abstract This paper describes the functionality of MEAD, a comprehensive, public domain, open source, multidocument multilingual summarization environment that has been thus far downloaded by more than 500 organizations. MEAD has been used in a variety of summarization applications ranging from summarization for mobile devices to Web page summarization within a search engine and to novelty detection.", "title": "" }, { "docid": "neg:1840021_4", "text": "Model transformation by example is a novel approach in model-driven software engineering to derive model transformation rules from an initial prototypical set of interrelated source and target models, which describe critical cases of the model transformation problem in a purely declarative way. In the current paper, we automate this approach using inductive logic programming (Muggleton and Raedt in J Logic Program 19-20:629–679, 1994) which aims at the inductive construction of first-order clausal theories from examples and background knowledge.", "title": "" }, { "docid": "neg:1840021_5", "text": "Low-field extremity magnetic resonance imaging (lfMRI) is currently commercially available and has been used clinically to evaluate rheumatoid arthritis (RA). However, one disadvantage of this new modality is that the field of view (FOV) is too small to assess hand and wrist joints simultaneously. 
Thus, we have developed a new lfMRI system, compacTscan, with a FOV that is large enough to simultaneously assess the entire wrist to proximal interphalangeal joint area. In this work, we examined its clinical value compared to conventional 1.5 tesla (T) MRI. The comparison involved evaluating three RA patients by both 0.3 T compacTscan and 1.5 T MRI on the same day. Bone erosion, bone edema, and synovitis were estimated by our new compact MRI scoring system (cMRIS) and the kappa coefficient was calculated on a joint-by-joint basis. We evaluated a total of 69 regions. Bone erosion was detected in 49 regions by compacTscan and in 48 regions by 1.5 T MRI, while the total erosion score was 77 for compacTscan and 76.5 for 1.5 T MRI. These findings point to excellent agreement between the two techniques (kappa = 0.833). Bone edema was detected in 14 regions by compacTscan and in 19 by 1.5 T MRI, and the total edema score was 36.25 by compacTscan and 47.5 by 1.5 T MRI. Pseudo-negative findings were noted in 5 regions. However, there was still good agreement between the techniques (kappa = 0.640). Total number of evaluated joints was 33. Synovitis was detected in 13 joints by compacTscan and 14 joints by 1.5 T MRI, while the total synovitis score was 30 by compacTscan and 32 by 1.5 T MRI. Thus, although 1 pseudo-positive and 2 pseudo-negative findings resulted from the joint evaluations, there was again excellent agreement between the techniques (kappa = 0.827). Overall, the data obtained by our compacTscan system showed high agreement with those obtained by conventional 1.5 T MRI with regard to diagnosis and the scoring of bone erosion, edema, and synovitis. We conclude that compacTscan is useful for diagnosis and estimation of disease activity in patients with RA.", "title": "" }, { "docid": "neg:1840021_6", "text": "The knowledge base is a machine-readable set of knowledge. More and more multi-domain and large-scale knowledge bases have emerged in recent years, and they play an essential role in many information systems and semantic annotation tasks. However we do not have a perfect knowledge base yet and maybe we will never have a perfect one, because all the knowledge bases have limited coverage while new knowledge continues to emerge. Therefore populating and enriching the existing knowledge base become important tasks. Traditional knowledge base population task usually leverages the information embedded in the unstructured free text. Recently researchers found that massive structured tables on the Web are high-quality relational data and easier to be utilized than the unstructured text. Our goal of this paper is to enrich the knowledge base using Wikipedia tables. Here, knowledge means binary relations between entities and we focus on the relations in some specific domains. There are two basic types of information can be used in this task: the existing relation instances and the connection between types and relations. We firstly propose two basic probabilistic models based on two types of information respectively. Then we propose a light-weight aggregated model to combine the advantages of basic models. The experimental results show that our method is an effective approach to enriching the knowledge base with both high precision and recall.", "title": "" }, { "docid": "neg:1840021_7", "text": "Convolutional neural networks (CNN) are limited by the lack of capability to handle geometric information due to the fixed grid kernel structure. 
The availability of depth data enables progress in RGB-D semantic segmentation with CNNs. State-of-the-art methods either use depth as additional images or process spatial information in 3D volumes or point clouds. These methods suffer from high computation and memory cost. To address these issues, we present Depth-aware CNN by introducing two intuitive, flexible and effective operations: depth-aware convolution and depth-aware average pooling. By leveraging depth similarity between pixels in the process of information propagation, geometry is seamlessly incorporated into CNN. Without introducing any additional parameters, both operators can be easily integrated into existing CNNs. Extensive experiments and ablation studies on challenging RGB-D semantic segmentation benchmarks validate the effectiveness and flexibility of our approach.", "title": "" }, { "docid": "neg:1840021_8", "text": "Alcohol consumption is highly prevalent in university students. Early detection in future health professionals is important: their consumption might not only influence their own health but may determine how they deal with the implementation of preventive strategies in the future. The aim of this paper is to detect the prevalence of risky alcohol consumption in first- and last-degree year students and to compare their drinking patterns.Risky drinking in pharmacy students (n=434) was assessed and measured with the AUDIT questionnaire (Alcohol Use Disorders Identification Test). A comparative analysis between college students from the first and fifth years of the degree in pharmacy, and that of a group of professors was carried to see differences in their alcohol intake patterns.Risky drinking was detected in 31.3% of students. The highest prevalence of risky drinkers, and the total score of the AUDIT test was found in students in their first academic year. Students in the first academic level taking morning classes had a two-fold risk of risky drinking (OR=1.9 (IC 95%1.1-3.1)) compared with students in the fifth level. The frequency of alcohol consumption increases with the academic level, whereas the number of alcohol beverages per drinking occasion falls.Risky drinking is high during the first year of university. As alcohol consumption might decrease with age, it is important to design preventive strategies that will strengthen this tendency.", "title": "" }, { "docid": "neg:1840021_9", "text": "Our research team has spent the last few years studying the cognitive processes involved in simultaneous interpreting. The results of this research have shown that professional interpreters develop specific ways of using their working memory, due to their work in simultaneous interpreting; this allows them to perform the processes of linguistic input, lexical and semantic access, reformulation and production of the segment translated both simultaneously and under temporal pressure (Bajo, Padilla & Padilla, 1998). This research led to our interest in the processes involved in the tasks of mediation in general. We understand that linguistic and cultural mediation involves not only translation but also the different forms of interpreting: consecutive and simultaneous. Our general objective in this project is to outline a cognitive theory of translation and interpreting and find empirical support for it. 
From the field of translation and interpreting there have been some attempts to create global and partial theories of the processes of mediation (Gerver, 1976; Moser-Mercer, 1997; Gile, 1997), but most of these attempts lack empirical support. On the other hand, from the field of psycholinguistics there have been some attempts to make an empirical study of the tasks of translation (De Groot, 1993; Sánchez-Casas Davis and GarcíaAlbea, 1992) and interpreting (McDonald and Carpenter, 1981), but these have always been partial, concentrating on very specific aspects of translation and interpreting. The specific objectives of this project are:", "title": "" }, { "docid": "neg:1840021_10", "text": "Control strategies for these contaminants will require a better understanding of how they move around the globe.", "title": "" }, { "docid": "neg:1840021_11", "text": "Spatial pyramid matching (SPM) based pooling has been the dominant choice for state-of-art image classification systems. In contrast, we propose a novel object-centric spatial pooling (OCP) approach, following the intuition that knowing the location of the object of interest can be useful for image classification. OCP consists of two steps: (1) inferring the location of the objects, and (2) using the location information to pool foreground and background features separately to form the image-level representation. Step (1) is particularly challenging in a typical classification setting where precise object location annotations are not available during training. To address this challenge, we propose a framework that learns object detectors using only image-level class labels, or so-called weak labels. We validate our approach on the challenging PASCAL07 dataset. Our learned detectors are comparable in accuracy with stateof-the-art weakly supervised detection methods. More importantly, the resulting OCP approach significantly outperforms SPM-based pooling in image classification.", "title": "" }, { "docid": "neg:1840021_12", "text": "Internet of Things (IoT) is reshaping our daily lives by bridging the gaps between physical and digital world. To enable ubiquitous sensing, seamless connection and real-time processing for IoT applications, fog computing is considered as a key component in a heterogeneous IoT architecture, which deploys storage and computing resources to network edges. However, the fog-based IoT architecture can lead to various security and privacy risks, such as compromised fog nodes that may impede developments of IoT by attacking the data collection and gathering period. In this paper, we propose a novel privacy-preserving and reliable scheme for the fog-based IoT to address the data privacy and reliability challenges of the selective data aggregation service. Specifically, homomorphic proxy re-encryption and proxy re-authenticator techniques are respectively utilized to deal with the data privacy and reliability issues of the service, which supports data aggregation over selective data types for any type-driven applications. We define a new threat model to formalize the non-collusive and collusive attacks of compromised fog nodes, and it is demonstrated that the proposed scheme can prevent both non-collusive and collusive attacks in our model. 
In addition, performance evaluations show the efficiency of the scheme in terms of computational costs and communication overheads.", "title": "" }, { "docid": "neg:1840021_13", "text": "This paper presents the tuning of the structure and parameters of a neural network using an improved genetic algorithm (GA). It is also shown that the improved GA performs better than the standard GA based on some benchmark test functions. A neural network with switches introduced to its links is proposed. By doing this, the proposed neural network can learn both the input-output relationships of an application and the network structure using the improved GA. The number of hidden nodes is chosen manually by increasing it from a small number until the learning performance in terms of fitness value is good enough. Application examples on sunspot forecasting and associative memory are given to show the merits of the improved GA and the proposed neural network.", "title": "" }, { "docid": "neg:1840021_14", "text": "A decision is a commitment to a proposition or plan of action based on information and values associated with the possible outcomes. The process operates in a flexible timeframe that is free from the immediacy of evidence acquisition and the real time demands of action itself. Thus, it involves deliberation, planning, and strategizing. This Perspective focuses on perceptual decision making in nonhuman primates and the discovery of neural mechanisms that support accuracy, speed, and confidence in a decision. We suggest that these mechanisms expose principles of cognitive function in general, and we speculate about the challenges and directions before the field.", "title": "" }, { "docid": "neg:1840021_15", "text": "This paper investigates recently proposed approaches for defending against adversarial examples and evaluating adversarial robustness. We motivate adversarial risk as an objective for achieving models robust to worst-case inputs. We then frame commonly used attacks and evaluation metrics as defining a tractable surrogate objective to the true adversarial risk. This suggests that models may optimize this surrogate rather than the true adversarial risk. We formalize this notion as obscurity to an adversary, and develop tools and heuristics for identifying obscured models and designing transparent models. We demonstrate that this is a significant problem in practice by repurposing gradient-free optimization techniques into adversarial attacks, which we use to decrease the accuracy of several recently proposed defenses to near zero. Our hope is that our formulations and results will help researchers to develop more powerful defenses.", "title": "" }, { "docid": "neg:1840021_16", "text": "There is a strong need for advanced control methods in battery management systems, especially in the plug-in hybrid and electric vehicles sector, due to cost and safety issues of new high-power battery packs and high-energy cell design. Limitations in computational speed and available memory require the use of very simple battery models and basic control algorithms, which in turn result in suboptimal utilization of the battery. This work investigates the possible use of optimal control strategies for charging. We focus on the minimum time charging problem, where different constraints on internal battery states are considered. 
Based on features of the open-loop optimal charging solution, we propose a simple one-step predictive controller, which is shown to recover the time-optimal solution, while being feasible for real-time computations. We present simulation results suggesting a decrease in charging time by 50% compared to the conventional constant-current / constant-voltage method for lithium-ion batteries.", "title": "" }, { "docid": "neg:1840021_17", "text": "E-transactions have become promising and very convenient due to the worldwide usage of the internet. Consumer reviews of various products are increasing rapidly in number. These large numbers of reviews are beneficial to manufacturers and consumers alike. It is a daunting task for a potential consumer to read all reviews before making a good purchasing decision. It is therefore beneficial to mine the consumer reviews of popular products available on various product review sites. The first step is to perform sentiment analysis to determine the polarity of a review. On the basis of polarity, we can then classify the review. A comparison among the different WEKA classifiers is presented in the form of charts and graphs.", "title": "" }, { "docid": "neg:1840021_18", "text": "This paper studies physical layer security in a wireless ad hoc network with numerous legitimate transmitter–receiver pairs and eavesdroppers. A hybrid full-duplex (FD)/half-duplex receiver deployment strategy is proposed to secure legitimate transmissions, by letting a fraction of legitimate receivers work in the FD mode sending jamming signals to confuse eavesdroppers upon their information receptions, and letting the other receivers work in the half-duplex mode just receiving their desired signals. The objective of this paper is to properly choose the fraction of FD receivers for achieving the optimal network security performance. Both accurate expressions and tractable approximations for the connection outage probability and the secrecy outage probability of an arbitrary legitimate link are derived, based on which the area secure link number, network-wide secrecy throughput, and network-wide secrecy energy efficiency are optimized, respectively. Various insights into the optimal fraction are further developed, and its closed-form expressions are also derived under perfect self-interference cancellation or in a dense network. It is concluded that the fraction of FD receivers triggers a non-trivial tradeoff between reliability and secrecy, and the proposed strategy can significantly enhance the network security performance.", "title": "" } ]
1840022
Design of Secure and Lightweight Authentication Protocol for Wearable Devices Environment
[ { "docid": "pos:1840022_0", "text": "Despite two decades of intensive research, it remains a challenge to design a practical anonymous two-factor authentication scheme, for the designers are confronted with an impressive list of security requirements (e.g., resistance to smart card loss attack) and desirable attributes (e.g., local password update). Numerous solutions have been proposed, yet most of them are soon found either unable to satisfy some critical security requirements or short of a few important features. To overcome this unsatisfactory situation, researchers often work around it in hopes of a new proposal (but no one has succeeded so far), while paying little attention to the fundamental question: whether or not there are inherent limitations that prevent us from designing an “ideal” scheme that satisfies all the desirable goals? In this work, we aim to provide a definite answer to this question. We first revisit two foremost proposals, i.e. Tsai et al.'s scheme and Li's scheme, revealing some subtleties and challenges in designing such schemes. Then, we systematically explore the inherent conflicts and unavoidable trade-offs among the design criteria. Our results indicate that, under the current widely accepted adversarial model, certain goals are beyond attainment. This also suggests a negative answer to the open problem left by Huang et al. in 2014. To the best of our knowledge, the present study takes the first step towards understanding the underlying evaluation metric for anonymous two-factor authentication, which we believe will facilitate better design of anonymous two-factor protocols that offer acceptable trade-offs among usability, security and privacy.", "title": "" } ]
[ { "docid": "neg:1840022_0", "text": "Between November 1998 and December 1999, trained medical record abstractors visited the Micronesian jurisdictions of Chuuk, Kosrae, Pohnpei, and Yap (the four states of the Federated States of Micronesia), as well as the Republic of Palau (Belau), the Republic of Kiribati, the Republic of the Marshall Islands (RMI), and the Republic of Nauru to review all available medical records in order to describe the epidemiology of cancer in Micronesia. Annualized age-adjusted, site-specific cancer period prevalence rates for individual jurisdictions were calculated. Site-specific cancer occurrence in Micronesia follows a pattern characteristic of developing nations. At the same time, cancers associated with developed countries are also impacting these populations. Recommended are jurisdiction-specific plans that outline the steps and resources needed to establish or improve local cancer registries; expand cancer awareness and screening activities; and improve diagnostic and treatment capacity.", "title": "" }, { "docid": "neg:1840022_1", "text": "To support the development of the Internet of Things (IoT), the IETF has proposed IPv6 standards that work under stringent low-power and low-cost constraints. However, the behavior and performance of the proposed standards have not been fully understood, especially the RPL routing protocol lying at the heart of the protocol stack. In this work, we make an in-depth study of a popular implementation of RPL (the routing protocol for low-power and lossy networks) to provide insights and guidelines for the adoption of these standards. Specifically, we use the Contiki operating system and COOJA simulator to evaluate the behavior of the ContikiRPL implementation. We analyze the performance for different networking settings. Different from previous studies, our work is the first effort spanning the whole life cycle of wireless sensor networks, including both the network construction process and the functioning stage. The metrics evaluated include signaling overhead, latency, energy consumption, and so on, which are vital to the overall performance of a wireless sensor network. Furthermore, based on our observations, we provide a few suggestions for RPL-based WSNs. This study can also serve as a basis for future enhancement of the proposed standards.", "title": "" }, { "docid": "neg:1840022_2", "text": "In this paper, a new ensemble forecasting model for short-term load forecasting (STLF) is proposed based on extreme learning machine (ELM). Four important improvements are used to support the ELM for increased forecasting performance. First, a novel wavelet-based ensemble scheme is carried out to generate the individual ELM-based forecasters. Second, a hybrid learning algorithm blending ELM and the Levenberg-Marquardt method is proposed to improve the learning accuracy of neural networks. Third, a feature selection method based on the conditional mutual information is developed to select a compact set of input variables for the forecasting model. Fourth, to realize an accurate ensemble forecast, partial least squares regression is utilized as a combining approach to aggregate the individual forecasts. 
Numerical testing shows that the proposed method can obtain better forecasting results in comparison with other standard and state-of-the-art methods.", "title": "" }, { "docid": "neg:1840022_3", "text": "With the explosion of online communication and publication, texts become obtainable via forums, chat messages, blogs, book reviews and movie reviews. Usually, these texts are very short and noisy, without sufficient statistical signals and enough information for a good semantic analysis. Traditional natural language processing methods such as Bag-of-Words (BOW) based probabilistic latent semantic models fail to achieve high performance due to the short text environment. Recent research has focused on the correlations between words, i.e., term dependencies, which could be helpful for mining latent semantics hidden in short texts and help people to understand them. The long short-term memory (LSTM) network can capture term dependencies and is able to remember information for long periods of time. LSTM has been widely used and has obtained promising results in a variety of problems involving the understanding of latent semantics of texts. At the same time, by analyzing the texts, we find that a number of keywords contribute greatly to the semantics of the texts. In this paper, we establish a keyword vocabulary and propose an LSTM-based model that is sensitive to the words in the vocabulary; hence, the keywords leverage the semantics of the full document. The proposed model is evaluated in a short-text sentiment analysis task on two datasets: IMDB and SemEval-2016. Experimental results demonstrate that our model outperforms the baseline LSTM by 1%~2% in terms of accuracy and is effective with significant performance enhancement over several non-recurrent neural network latent semantic models (especially in dealing with short texts). We also incorporate the idea into a variant of LSTM named the gated recurrent unit (GRU) model and achieve good performance, which proves that our method is general enough to improve different deep learning models.", "title": "" }, { "docid": "neg:1840022_4", "text": "With the development of microprocessors, power electronic converters and electric motor drives, the electric power steering (EPS) system, which uses an electric motor, came into use a few years ago. Electric power steering systems have many advantages over traditional hydraulic power steering systems in engine efficiency, space efficiency, and environmental compatibility. This paper deals with the design and optimization of an interior permanent magnet (IPM) motor for a power steering application. The Simulated Annealing method is used for optimization. After optimization and determination of the motor parameters, an IPM motor and drive with the mechanical parts of the EPS system are simulated and a performance evaluation of the system is carried out.", "title": "" }, { "docid": "neg:1840022_5", "text": "In this paper, the authors undertake a study of cyber warfare reviewing theories, law, policies, actual incidents and the dilemma of anonymity. Starting with the United Kingdom perspective on cyber warfare, the authors then consider the United States' views including the perspective of its military on the law of war and its general inapplicability to cyber conflict. Consideration is then given to the work of the United Nations' group of cyber security specialists and diplomats who as of July 2010 have agreed upon a set of recommendations to the United Nations Secretary General for negotiations on an international computer security treaty. 
An examination of the use of a nation's cybercrime law to prosecute violations that occur over the Internet indicates the inherent limits caused by the jurisdictional limits of domestic law to address cross-border cybercrime scenarios. Actual incidents from Estonia (2007), Georgia (2008), Republic of Korea (2009), Japan (2010), ongoing attacks on the United States as well as other incidents and reports on ongoing attacks are considered as well. Despite the increasing sophistication of such cyber attacks, it is evident that these attacks were met with a limited use of law and policy to combat them that can only be characterised as a response posture defined by restraint. Recommendations are then examined for overcoming the attribution problem. The paper then considers when cyber attacks rise to the level of an act of war by reference to the work of scholars such as Schmitt and Wingfield. Further evaluation of the special impact that non-state actors may have and some theories on how to deal with the problem of asymmetric players are considered. Discussion and possible solutions are offered. A conclusion is offered drawing some guidance from the writings of the Chinese philosopher Sun Tzu. Finally, an appendix providing a technical overview of the problem of attribution and the dilemma of anonymity in cyberspace is provided. 1. The United Kingdom Perspective \"If I went and bombed a power station in France, that would be an act of war. If I went on to the net and took out a power station, is that an act of war? One", "title": "" }, { "docid": "neg:1840022_6", "text": "Identifying the lineage path of neural cells is critical for understanding the development of the brain. Accurate neural cell detection is a crucial step in obtaining a reliable delineation of cell lineage. To solve this task, in this paper we present an efficient neural cell detection method based on the SSD (single shot multibox detector) neural network model. Our method adapts the original SSD architecture and removes the unnecessary blocks, leading to a light-weight model. Moreover, we formulate the cell detection as a binary regression problem, which makes our model much simpler. Experimental results demonstrate that, with only a small training set, our method is able to accurately capture the neural cells under severe shape deformation in a fast way.", "title": "" }, { "docid": "neg:1840022_7", "text": "LTE 4G cellular networks are gradually being adopted by all major operators in the world and are expected to rule the cellular landscape at least for the current decade. They will also form the starting point for further progress beyond the current generation of mobile cellular networks, charting a path towards fifth generation mobile networks. The lack of an open cellular ecosystem has limited applied research in this field within the boundaries of vendor and operator R&D groups. Furthermore, several new approaches and technologies are being considered as potential elements making up such a future mobile network, including cloudification of the radio network, radio network programmability and APIs following SDN principles, native support of machine-type communication, and massive MIMO. 
Research on these technologies requires realistic and flexible experimentation platforms that offer a wide range of experimentation modes from real-world experimentation to controlled and scalable evaluations while at the same time retaining backward compatibility with current generation systems.\n In this work, we present OpenAirInterface (OAI) as a suitably flexible platform towards open LTE ecosystem and playground [1]. We will demonstrate an example of the use of OAI to deploy a low-cost open LTE network using commodity hardware with standard LTE-compatible devices. We also show the reconfigurability features of the platform.", "title": "" }, { "docid": "neg:1840022_8", "text": "The development of complex distributed systems demands for the creation of suitable architectural styles (or paradigms) and related run-time infrastructures. An emerging style that is receiving increasing attention is based on the notion of event. In an event-based architecture, distributed software components interact by generating and consuming events. An event is the occurrence of some state change in a component of a software system, made visible to the external world. The occurrence of an event in a component is asynchronously notified to any other component that has declared some interest in it. This paradigm (usually called “publish/subscribe” from the names of the two basic operations that regulate the communication) holds the promise of supporting a flexible and effective interaction among highly reconfigurable, distributed software components. In the past two years, we have developed an object-oriented infrastructure called JEDI (Java Event-based Distributed Infrastructure). JEDI supports the development and operation of event-based systems and has been used to implement a significant example of distributed system, namely, the OPSS workflow management system (WFMS). The paper illustrates JEDI main features and how we have used them to implement OPSS. Moreover, the paper provides an initial evaluation of our experiences in using the event-based architectural style and a classification of some of the event-based infrastructures presented in the literature.", "title": "" }, { "docid": "neg:1840022_9", "text": "Generating adversarial examples is an intriguing problem and an important way of understanding the working mechanism of deep neural networks. Most existing approaches generated perturbations in the image space, i.e., each pixel can be modified independently. However, in this paper we pay special attention to the subset of adversarial examples that are physically authentic – those corresponding to actual changes in 3D physical properties (like surface normals, illumination condition, etc.). These adversaries arguably pose a more serious concern, as they demonstrate the possibility of causing neural network failure by small perturbations of real-world 3D objects and scenes. In the contexts of object classification and visual question answering, we augment state-of-the-art deep neural networks that receive 2D input images with a rendering module (either differentiable or not) in front, so that a 3D scene (in the physical space) is rendered into a 2D image (in the image space), and then mapped to a prediction (in the output space). The adversarial perturbations can now go beyond the image space, and have clear meanings in the 3D physical world. 
Through extensive experiments, we found that a vast majority of image-space adversaries cannot be explained by adjusting parameters in the physical space, i.e., they are usually physically inauthentic. But it is still possible to successfully attack beyond the image space on the physical space (such that authenticity is enforced), though this is more difficult than image-space attacks, reflected in lower success rates and heavier perturbations required.", "title": "" }, { "docid": "neg:1840022_10", "text": "Cloud computing is a novel perspective for large scale distributed computing and parallel processing. It provides computing as a utility service on a pay per use basis. The performance and efficiency of cloud computing services always depends upon the performance of the user tasks submitted to the cloud system. Scheduling of the user tasks plays significant role in improving performance of the cloud services. Task scheduling is one of the main types of scheduling performed. This paper presents a detailed study of various task scheduling methods existing for the cloud environment. A brief analysis of various scheduling parameters considered in these methods is also discussed in this paper.", "title": "" }, { "docid": "neg:1840022_11", "text": "Concepts of basal ganglia organization have changed markedly over the past decade, due to significant advances in our understanding of the anatomy, physiology and pharmacology of these structures. Independent evidence from each of these fields has reinforced a growing perception that the functional architecture of the basal ganglia is essentially parallel in nature, regardless of the perspective from which these structures are viewed. This represents a significant departure from earlier concepts of basal ganglia organization, which generally emphasized the serial aspects of their connectivity. Current evidence suggests that the basal ganglia are organized into several structurally and functionally distinct 'circuits' that link cortex, basal ganglia and thalamus, with each circuit focused on a different portion of the frontal lobe. In this review, Garrett Alexander and Michael Crutcher, using the basal ganglia 'motor' circuit as the principal example, discuss recent evidence indicating that a parallel functional architecture may also be characteristic of the organization within each individual circuit.", "title": "" }, { "docid": "neg:1840022_12", "text": "The Wasserstein distance and its variations, e.g., the sliced-Wasserstein (SW) distance, have recently drawn attention from the machine learning community. The SW distance, specifically, was shown to have similar properties to the Wasserstein distance, while being much simpler to compute, and is therefore used in various applications including generative modeling and general supervised/unsupervised learning. In this paper, we first clarify the mathematical connection between the SW distance and the Radon transform. We then utilize the generalized Radon transform to define a new family of distances for probability measures, which we call generalized slicedWasserstein (GSW) distances. We also show that, similar to the SW distance, the GSW distance can be extended to a maximum GSW (max-GSW) distance. We then provide the conditions under which GSW and max-GSW distances are indeed distances. 
Finally, we compare the numerical performance of the proposed distances on several generative modeling tasks, including SW flows and SW auto-encoders.", "title": "" }, { "docid": "neg:1840022_13", "text": "We consider the problem of object figure-ground segmentation when the object categories are not available during training (i.e. zero-shot). During training, we learn standard segmentation models for a handful of object categories (called “source objects”) using existing semantic segmentation datasets. During testing, we are given images of objects (called “target objects”) that are unseen during training. Our goal is to segment the target objects from the background. Our method learns to transfer the knowledge from the source objects to the target objects. Our experimental results demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "neg:1840022_14", "text": "OBJECTIVE\nTo determine whether acetyl-L-carnitine (ALC), a metabolite necessary for energy metabolism and essential fatty acid anabolism, might help attention-deficit/hyperactivity disorder (ADHD). Trials in Down's syndrome, migraine, and Alzheimer's disease showed benefit for attention. A preliminary trial in ADHD using L-carnitine reported significant benefit.\n\n\nMETHOD\nA multi-site 16-week pilot study randomized 112 children (83 boys, 29 girls) age 5-12 with systematically diagnosed ADHD to placebo or ALC in weight-based doses from 500 to 1500 mg b.i.d. The 2001 revisions of the Conners' parent and teacher scales (including DSM-IV ADHD symptoms) were administered at baseline, 8, 12, and 16 weeks. Analyses were ANOVA of change from baseline to 16 weeks with treatment, center, and treatment-by-center interaction as independent variables.\n\n\nRESULTS\nThe primary intent-to-treat analysis, of 9 DSM-IV teacher-rated inattentive symptoms, was not significant. However, secondary analyses were interesting. There was significant (p = 0.02) moderation by subtype: superiority of ALC over placebo in the inattentive type, with an opposite tendency in combined type. There was also a geographic effect (p = 0.047). Side effects were negligible; electrocardiograms, lab work, and physical exam unremarkable.\n\n\nCONCLUSION\nALC appears safe, but with no effect on the overall ADHD population (especially combined type). It deserves further exploration for possible benefit specifically in the inattentive type.", "title": "" }, { "docid": "neg:1840022_15", "text": "BACKGROUND\nOn December 8th, 2015, World Health Organization published a priority list of eight pathogens expected to cause severe outbreaks in the near future. To better understand global research trends and characteristics of publications on these emerging pathogens, we carried out this bibliometric study hoping to contribute to global awareness and preparedness toward this topic.\n\n\nMETHOD\nScopus database was searched for the following pathogens/infectious diseases: Ebola, Marburg, Lassa, Rift valley, Crimean-Congo, Nipah, Middle Eastern Respiratory Syndrome (MERS), and Severe Respiratory Acute Syndrome (SARS). Retrieved articles were analyzed to obtain standard bibliometric indicators.\n\n\nRESULTS\nA total of 8619 journal articles were retrieved. Authors from 154 different countries contributed to publishing these articles. Two peaks of publications, an early one for SARS and a late one for Ebola, were observed. 
Retrieved articles received a total of 221,606 citations with a mean ± standard deviation of 25.7 ± 65.4 citations per article and an h-index of 173. International collaboration was as high as 86.9%. The Centers for Disease Control and Prevention had the highest share (344; 5.0%) followed by the University of Hong Kong with 305 (4.5%). The top leading journal was Journal of Virology with 572 (6.6%) articles while Feldmann, Heinz R. was the most productive researcher with 197 (2.3%) articles. China ranked first on SARS, Turkey ranked first on Crimean-Congo fever, while the United States of America ranked first on the remaining six diseases. Of retrieved articles, 472 (5.5%) were on vaccine - related research with Ebola vaccine being most studied.\n\n\nCONCLUSION\nNumber of publications on studied pathogens showed sudden dramatic rise in the past two decades representing severe global outbreaks. Contribution of a large number of different countries and the relatively high h-index are indicative of how international collaboration can create common health agenda among distant different countries.", "title": "" }, { "docid": "neg:1840022_16", "text": "The hippocampal CA3 region is classically viewed as a homogeneous autoassociative network critical for associative memory and pattern completion. However, recent evidence has demonstrated a striking heterogeneity along the transverse, or proximodistal, axis of CA3 in spatial encoding and memory. Here we report the presence of striking proximodistal gradients in intrinsic membrane properties and synaptic connectivity for dorsal CA3. A decreasing gradient of mossy fiber synaptic strength along the proximodistal axis is mirrored by an increasing gradient of direct synaptic excitation from entorhinal cortex. Furthermore, we uncovered a nonuniform pattern of reactivation of fear memory traces, with the most robust reactivation during memory retrieval occurring in mid-CA3 (CA3b), the region showing the strongest net recurrent excitation. Our results suggest that heterogeneity in both intrinsic properties and synaptic connectivity may contribute to the distinct spatial encoding and behavioral role of CA3 subregions along the proximodistal axis.", "title": "" }, { "docid": "neg:1840022_17", "text": "Scripts define knowledge about how everyday scenarios (such as going to a restaurant) are expected to unfold. One of the challenges to learning scripts is the hierarchical nature of the knowledge. For example, a suspect arrested might plead innocent or guilty, and a very different track of events is then expected to happen. To capture this type of information, we propose an autoencoder model with a latent space defined by a hierarchy of categorical variables. We utilize a recently proposed vector quantization based approach, which allows continuous embeddings to be associated with each latent variable value. This permits the decoder to softly decide what portions of the latent hierarchy to condition on by attending over the value embeddings for a given setting. Our model effectively encodes and generates scripts, outperforming a recent language modeling-based method on several standard tasks, and allowing the autoencoder model to achieve substantially lower perplexity scores compared to the previous language modelingbased method.", "title": "" }, { "docid": "neg:1840022_18", "text": "White grubs (larvae of Coleoptera: Scarabaeidae) are abundant in below-ground systems and can cause considerable damage to a wide variety of crops by feeding on roots. 
White grub populations may be controlled by natural enemies, but the predator guild of the European species is barely known. Trophic interactions within soil food webs are difficult to study with conventional methods. Therefore, a polymerase chain reaction (PCR)-based approach was developed to investigate, for the first time, a soil insect predator-prey system. Can, however, highly sensitive detection methods identify carrion prey in predators, as has been shown for fresh prey? Fresh Melolontha melolontha (L.) larvae and 1- to 9-day-old carcasses were presented to Poecilus versicolor Sturm larvae. Mitochondrial cytochrome oxidase subunit I fragments of the prey, 175, 327 and 387 bp long, were detectable in 50% of the predators 32 h after feeding. Detectability decreased to 18% when a 585 bp sequence was amplified. Meal size and digestion capacity of individual predators had no influence on prey detection. Although prey consumption was negatively correlated with cadaver age, carrion prey could be detected by PCR as efficiently as fresh prey irrespective of carrion age. This is the first proof that PCR-based techniques are highly efficient and sensitive, both in fresh and carrion prey detection. Thus, if active predation has to be distinguished from scavenging, then additional approaches are needed to interpret the picture of prey choice derived by highly sensitive detection methods.", "title": "" }, { "docid": "neg:1840022_19", "text": "The Universal Serial Bus (USB) is an extremely popular interface standard for computer peripheral connections and is widely used in consumer Mass Storage Devices (MSDs). While current consumer USB MSDs provide relatively high transmission speed and are convenient to carry, the use of USB MSDs has been prohibited in many commercial and everyday environments primarily due to security concerns. Security protocols have been previously proposed and a recent approach for the USB MSDs is to utilize multi-factor authentication. This paper proposes significant enhancements to the three-factor control protocol that now makes it secure under many types of attacks including the password guessing attack, the denial-of-service attack, and the replay attack. The proposed solution is presented with a rigorous security analysis and practical computational cost analysis to demonstrate the usefulness of this new security protocol for consumer USB MSDs.", "title": "" } ]
1840023
Design of a Wideband Planar Printed Quasi-Yagi Antenna Using Stepped Connection Structure
[ { "docid": "pos:1840023_0", "text": "In this letter, we present a novel coplanar waveguide fed quasi-Yagi antenna with broad bandwidth. The uniqueness of this design is due to its simple feed selection and despite this, its achievable bandwidth. The 10 dB return loss bandwidth of the antenna is 44% covering X-band. The antenna is realized on a high dielectric constant substrate and is compatible with microstrip circuitry and active devices. The gain of the antenna is 7.4 dBi, the front-to-back ratio is 15 dB and the nominal efficiency of the radiator is 95%.", "title": "" }, { "docid": "pos:1840023_1", "text": "We present a modified quasi-Yagi antenna for use in WLAN access points. The antenna uses a new microstrip-to-coplanar strip (CPS) transition, consisting of a tapered microstrip input, T-junction, conventional 50-ohm microstrip line, and three artificial transmission line (ATL) sections. The design concept, mode conversion scheme, and simulated and experimental S-parameters of the transition are discussed first. It features a compact size, and a 3dB-insertion loss bandwidth of 78.6%. Based on the transition, a modified quasi-Yagi antenna is demonstrated. In addition to the new transition, the antenna consists of a CPS feed line, a meandered dipole, and a parasitic element. The meandered dipole can substantially increase to the front-to-back ratio of the antenna without sacrificing the operating bandwidth. The parasitic element is placed in close proximity to the driven element to improve impedance bandwidth and radiation characteristics. The antenna exhibits excellent end-fire radiation with a front-to-back ratio of greater than 15 dB. It features a moderate gain around 4 dBi, and a fractional bandwidth of 38.3%. We carefully investigate the concept, methodology, and experimental results of the proposed antenna.", "title": "" } ]
[ { "docid": "neg:1840023_0", "text": "This short communication describes a case of diprosopiasis in Trachemys scripta scripta imported from Florida (USA) and farmed for about 4 months by a private owner in Palermo, Sicily, Italy. The water turtle showed the morphological and radiological features characterizing such deformity. This communication aims to advance the knowledge of the reptile's congenital anomalies and suggests the need for more detailed investigations to better understand its pathogenesis.", "title": "" }, { "docid": "neg:1840023_1", "text": "Due to the growing number of vehicles on the roads worldwide, road traffic accidents are currently recognized as a major public safety problem. In this context, connected vehicles are considered as the key enabling technology to improve road safety and to foster the emergence of next generation cooperative intelligent transport systems (ITS). Through the use of wireless communication technologies, the deployment of ITS will enable vehicles to autonomously communicate with other nearby vehicles and roadside infrastructures and will open the door for a wide range of novel road safety and driver assistive applications. However, connecting wireless-enabled vehicles to external entities can make ITS applications vulnerable to various security threats, thus impacting the safety of drivers. This article reviews the current research challenges and opportunities related to the development of secure and safe ITS applications. It first explores the architecture and main characteristics of ITS systems and surveys the key enabling standards and projects. Then, various ITS security threats are analyzed and classified, along with their corresponding cryptographic countermeasures. Finally, a detailed ITS safety application case study is analyzed and evaluated in light of the European ETSI TC ITS standard. An experimental test-bed is presented, and several elliptic curve digital signature algorithms (ECDSA) are benchmarked for signing and verifying ITS safety messages. To conclude, lessons learned, open research challenges and opportunities are discussed.", "title": "" }, { "docid": "neg:1840023_2", "text": "Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms. We take the principled view of distributionally robust optimization, which guarantees performance under adversarial input perturbations. By considering a Lagrangian penalty formulation of perturbation of the underlying data distribution in a Wasserstein ball, we provide a training procedure that augments model parameter updates with worst-case perturbations of training data. For smooth losses, our procedure provably achieves moderate levels of robustness with little computational or statistical cost relative to empirical risk minimization. Furthermore, our statistical guarantees allow us to efficiently certify robustness for the population loss. We match or outperform heuristic approaches on supervised and reinforcement learning tasks.", "title": "" }, { "docid": "neg:1840023_3", "text": "Future power networks will be characterized by safe and reliable functionality against physical and cyber attacks. This paper proposes a unified framework and advanced monitoring procedures to detect and identify network components malfunction or measurements corruption caused by an omniscient adversary. 
We model a power system under cyber-physical attack as a linear time-invariant descriptor system with unknown inputs. Our attack model generalizes the prototypical stealth, (dynamic) false-data injection and replay attacks. We characterize the fundamental limitations of both static and dynamic procedures for attack detection and identification. Additionally, we design provably-correct (dynamic) detection and identification procedures based on tools from geometric control theory. Finally, we illustrate the effectiveness of our method through a comparison with existing (static) detection algorithms, and through a numerical study.", "title": "" }, { "docid": "neg:1840023_4", "text": "This paper introduces adaptor grammars, a class of probabilistic models of language that generalize probabilistic context-free grammars (PCFGs). Adaptor grammars augment the probabilistic rules of PCFGs with “adaptors” that can induce dependencies among successive uses. With a particular choice of adaptor, based on the Pitman-Yor process, nonparametric Bayesian models of language using Dirichlet processes and hierarchical Dirichlet processes can be written as simple grammars. We present a general-purpose inference algorithm for adaptor grammars, making it easy to define and use such models, and illustrate how several existing nonparametric Bayesian models can be expressed within this framework.", "title": "" }, { "docid": "neg:1840023_5", "text": "Therabot is a robotic therapy support system designed to supplement a therapist and to provide support to patients diagnosed with conditions associated with trauma and adverse events. The system takes on the form factor of a floppy-eared dog which fits in a person's lap and is designed for patients to provide support and encouragement for home therapy exercises and in counseling.", "title": "" }, { "docid": "neg:1840023_6", "text": "The demands on visual recognition systems do not end with the complexity offered by current large-scale image datasets, such as ImageNet. In consequence, we need curious and continuously learning algorithms that actively acquire knowledge about semantic concepts which are present in available unlabeled data. As a step towards this goal, we show how to perform continuous active learning and exploration, where an algorithm actively selects relevant batches of unlabeled examples for annotation. These examples could either belong to already known or to yet undiscovered classes. Our algorithm is based on a new generalization of the Expected Model Output Change principle for deep architectures and is especially tailored to deep neural networks. Furthermore, we show easy-to-implement approximations that yield efficient techniques for active selection. Empirical experiments show that our method outperforms currently used heuristics.", "title": "" }, { "docid": "neg:1840023_7", "text": "In recent years, much research has been conducted on image super-resolution (SR). To the best of our knowledge, however, few SR methods were concerned with compressed images. The SR of compressed images is a challenging task due to the complicated compression artifacts, while many images suffer from them in practice. The intuitive solution for this difficult task is to decouple it into two sequential but independent subproblems, i.e., compression artifacts reduction (CAR) and SR. Nevertheless, some useful details may be removed in CAR stage, which is contrary to the goal of SR and makes the SR stage more challenging. 
In this paper, an end-to-end trainable deep convolutional neural network is designed to perform SR on compressed images (CISRDCNN), which reduces compression artifacts and improves image resolution jointly. Experiments on compressed images produced by JPEG (we take JPEG as an example in this paper) demonstrate that the proposed CISRDCNN yields state-of-the-art SR performance on commonly used test images and image sets. The results of CISRDCNN on real low-quality web images are also very impressive, with obvious quality enhancement. Further, we explore the application of the proposed SR method in low bit-rate image coding, leading to better rate-distortion performance than JPEG.", "title": "" }, { "docid": "neg:1840023_8", "text": "Evolutionary computation methods have been successfully applied to neural networks for two decades, but those methods do not scale well to modern deep neural networks due to their complicated architectures and large quantities of connection weights. In this paper, we propose a new method using genetic algorithms for evolving the architectures and connection weight initialization values of a deep convolutional neural network to address image classification problems. In the proposed algorithm, an efficient variable-length gene encoding strategy is designed to represent the different building blocks and the unpredictable optimal depth in convolutional neural networks. In addition, a new representation scheme is developed for effectively initializing the connection weights of deep convolutional neural networks, which is expected to avoid networks getting stuck in local minima, a major issue typically encountered in backward gradient-based optimization. Furthermore, a novel fitness evaluation method is proposed to speed up the heuristic search with substantially fewer computational resources. The proposed algorithm is examined and compared with 22 existing algorithms on nine widely used image classification tasks, including the state-of-the-art methods. 
The experimental results demonstrate the remarkable superiority of the proposed algorithm over the state-of-the-art algorithms in terms of classification error rate and the number of parameters (weights).", "title": "" }, { "docid": "neg:1840023_9", "text": "INTRODUCTION\nResearch indicated that: (i) vaginal orgasm (induced by penile-vaginal intercourse [PVI] without concurrent clitoral masturbation) consistency (vaginal orgasm consistency [VOC]; percentage of PVI occasions resulting in vaginal orgasm) is associated with mental attention to vaginal sensations during PVI, preference for a longer penis, and indices of psychological and physiological functioning, and (ii) clitoral, distal vaginal, and deep vaginal/cervical stimulation project via different peripheral nerves to different brain regions.\n\n\nAIMS\nThe aim of this study is to examine the association of VOC with: (i) sexual arousability perceived from deep vaginal stimulation (compared with middle and shallow vaginal stimulation and clitoral stimulation), and (ii) whether vaginal stimulation was present during the woman's first masturbation.\n\n\nMETHODS\nA sample of 75 Czech women (aged 18-36), provided details of recent VOC, site of genital stimulation during first masturbation, and their recent sexual arousability from the four genital sites.\n\n\nMAIN OUTCOME MEASURES\nThe association of VOC with: (i) sexual arousability perceived from the four genital sites and (ii) involvement of vaginal stimulation in first-ever masturbation.\n\n\nRESULTS\nVOC was associated with greater sexual arousability from deep vaginal stimulation but not with sexual arousability from other genital sites. VOC was also associated with women's first masturbation incorporating (or being exclusively) vaginal stimulation.\n\n\nCONCLUSIONS\nThe findings suggest (i) stimulating the vagina during early life masturbation might indicate individual readiness for developing greater vaginal responsiveness, leading to adult greater VOC, and (ii) current sensitivity of deep vaginal and cervical regions is associated with VOC, which might be due to some combination of different neurophysiological projections of the deep regions and their greater responsiveness to penile stimulation.", "title": "" }, { "docid": "neg:1840023_10", "text": "Zero-shot learning (ZSL) highly depends on a good semantic embedding to connect the seen and unseen classes. Recently, distributed word embeddings (DWE) pre-trained from large text corpus have become a popular choice to draw such a connection. Compared with human defined attributes, DWEs are more scalable and easier to obtain. However, they are designed to reflect semantic similarity rather than visual similarity and thus using them in ZSL often leads to inferior performance. To overcome this visual-semantic discrepancy, this work proposes an objective function to re-align the distributed word embeddings with visual information by learning a neural network to map it into a new representation called visually aligned word embedding (VAWE). Thus the neighbourhood structure of VAWEs becomes similar to that in the visual domain. Note that in this work we do not design a ZSL method that projects the visual features and semantic embeddings onto a shared space but just impose a requirement on the structure of the mapped word embeddings. This strategy allows the learned VAWE to generalize to various ZSL methods and visual features. 
As evaluated via four state-of-the-art ZSL methods on four benchmark datasets, the VAWE exhibit consistent performance improvement.", "title": "" }, { "docid": "neg:1840023_11", "text": "In this paper we review classification algorithms used to design brain-computer interface (BCI) systems based on electroencephalography (EEG). We briefly present the commonly employed algorithms and describe their critical properties. Based on the literature, we compare them in terms of performance and provide guidelines to choose the suitable classification algorithm(s) for a specific BCI.", "title": "" }, { "docid": "neg:1840023_12", "text": "In diesem Kapitel wird Kognitive Modellierung als ein interdisziplinäres Forschungsgebiet vorgestellt, das sich mit der Entwicklung von computerimplementierbaren Modellen beschäftigt, in denen wesentliche Eigenschaften des Wissens und der Informationsverarbeitung beim Menschen abgebildet sind. Nach einem allgemeinen Überblick über Zielsetzungen, Methoden und Vorgehensweisen, die sich auf den Gebieten der kognitiven Psychologie und der Künstlichen Intelligenz entwickelt haben, sowie der Darstellung eines Theorierahmens werden vier Modelle detaillierter besprochen: In einem I>crnmodcll, das in einem Intelligenten Tutoriellen System Anwendung findet und in einem Performanz-Modell der MenschComputer-Interaktion wird menschliches Handlungswissen beschrieben. Die beiden anderen Modelle zum Textverstehen und zur flexiblen Gedächtnisorganisation beziehen sich demgegenüber vor allem auf den Aufbau und Abruf deklarativen Wissens. Abschließend werden die vorgestellten Modelle in die historische Entwicklung eingeordnet. Möglichkeiten und Grenzen der Kognitiven Modellierung werden hinsichtlich interessant erscheinender Weiterentwicklungen diskutiert. 1. Einleitung und Überblick Das Gebiet der Künstlichen Intelligenz wird meist unter Bezugnahme auf ursprünglich nur beim Menschen beobachtetes Verhalten definiert. So wird die Künstliche Intelligenz oder KI als die Erforschung von jenen Verhaltensabläufen verstanden, deren Planung und Durchführung Intelligenz erfordert. Der Begriff Intelligenz wird dabei unter Bezugnahme auf den Menschen vage abgegrenzt |Siekmann_83,Winston_84]. Da auch Teilbereiche der Psychologie, vor allem die Kognitive Psychologie, Intelligenz und Denken untersuchen, könnte man vermuten, daß die KI-Forschung als die jüngere Wissenschaft direkt auf älteren psychologischen Erkenntnissen aufbauen würde. Obwohl K I und kognitive Psychologie einen ähnlichen Gegenstandsbereich erforschen, gibt es jedoch auch vielschichtige Unterschiede zwischen beiden Disziplinen. Daraus läßt sich möglicherweise erklären, daß die beiden Fächer bislang nicht in dem Maß interagiert haben, wie dies wünschenswert wäre. 1.1 Unterschiede zwischen KI und Kognitiver Psychologie Auch wenn keine klare Grenze zwischen den beiden Gebieten gezogen werden kann, so müssen wir doch feststellen, daß K I nicht gleich Kognitiver Psychologie ist. Wichtige Unterschiede bestehen in den primären Forschungszielen und Methoden, sowie in der Interpretation von Computermodellen (computational models). Zielsetzungen und Methoden Während die K I eine Modellierung von Kompetenzen anstrebt, erforscht die Psychologie die Performanz des Menschen. • Die K I sucht nach Verfahren, die zu einem intelligenten Verhalten eines Computers fuhren. Beispielsweise sollte ein Computer natürliche Sprache verstehen, neue Begriffe lernen können oder Expertenverhalten zeigen oder unterstützen. 
AI thus tries to develop intelligent systems and, in doing so, uncovers possible principles of intelligence by specifying data structures and algorithms from which intelligent behaviour can be expected. What matters here is that an intelligent performance in the sense of a Turing test is delivered: for a set of specified inputs (e.g. spoken language), an implementation of the algorithm should deliver, within a reasonable time, a processing performance comparable to that of a human. Viewed superficially, the observed system output of human and computer would thus be indistinguishable from one another [Turing_63]. Whether the structures, processes and heuristics used in the computer resemble those of humans plays no primary role in AI. • Cognitive psychology, by contrast, investigates rather the internal cognitive processing of humans. In a psychological theory, therefore, the procedure used in the model should also correspond to the heuristics that humans use. For example, a chess program does not become a psychologically adequate model by reaching the playing strength of human master players. Rather, in a psychological model the processing of human and program should also agree (cf. [deGroot_66]). Empirical and targeted experimental investigations of human cognition are therefore of great importance for psychological research. In AI, the development and implementation of models is in the foreground. Cognitive psychology, on the other hand, emphasizes the importance of the empirical evaluation of models in order to secure precise, generally valid statements. Because of these different emphases and the resulting differences in research methods, it is often difficult for the researchers of one discipline to make use of the scientific progress of the other discipline [Miller_78]. Interpretation of computational models AI emerged from computer science. As in computer science, scientific insight in AI consists in designing and producing new systems, such as computer hardware and software, by means of engineering methods. The exact description of a system created in this way is in principle unproblematic for the computer scientist, since he has developed the system himself and is therefore thoroughly informed about its components and modes of operation. Herein lies a difference from the empirical sciences such as physics or psychology. The empirical scientist must investigate domains whose laws he can never establish with final certainty. He must therefore form theories or models of the object under investigation, which can then be tested empirically. However, no matter how large the number of experiments, the correctness of a model can never be proven [Popper_66]. A simple example can illustrate this difference. • A hardware specialist who has built a personal computer knows that the statement \"The computer is equipped with 640 KB of main memory\" is correct, because he equipped it in exactly that way. This is thus an established fact that requires no further verification. • The claim of a psychologist that human short-term or working memory has a capacity of about 7 units or chunks, however, has a completely different status. 
This is by no means a factual claim about the size of areas in the human brain. \"Working memory\" is used here as a theoretical term of a model. The statement about the capacity of working memory means that, as experience shows, models that assume such a capacity limitation can describe human behaviour well. This does not rule out, however, that a further experiment may demonstrate shortcomings or the incorrectness of the model. In the empirical sciences, theoretical concepts such as working memory are used within computer models for the abstracted and integrative description of empirical findings. In this way, behaviours observable in humans can be predicted. From the perspective of computer science, however, exactly the same terms denote actual components of a device or program. These different views of the same models forbid an uncritical and superficial transfer of information between AI and cognitive psychology. From the integration of the goals and viewpoints, however, particularly promising opportunities for gaining insight into intelligence also arise. Since theoretical as well as empirical investigations contribute to the understanding of intelligence, the methods and findings of both disciplines can complement and enrich each other (much as mathematics and physics do in the area of theoretical physics). 1.2 Synthesis of AI and cognitive psychology Within the cognitive sciences, many disciplines (e.g. AI, psychology, linguistics, anthropology ...) contribute findings about information-processing systems. Cognitive modelling, as a subfield of both AI and cognitive psychology, is concerned with the development of computer-implementable models that capture essential properties of human knowledge and information processing. Cognitive modelling thus strives for a synthesis of AI and psychological research. A computer model becomes a cognitive model when entities of the model are mapped onto psychological observations and findings. Since such a model also claims to predict human behaviour, cognitive models can be further developed on the basis of empirical investigations. The question of whether an AI model is to be regarded as a cognitive model cannot simply be answered with yes or no; rather, it is answered by specifying a mapping from aspects of human information processing to properties of the computer model.", "title": "" }, { "docid": "neg:1840023_13", "text": "We report 3 cases of renal toxicity associated with use of the antiviral agent tenofovir. Renal failure, proximal tubular dysfunction, and nephrogenic diabetes insipidus were observed, and, in 2 cases, renal biopsy revealed severe tubular necrosis with characteristic nuclear changes. Patients receiving tenofovir must be monitored closely for early signs of tubulopathy (glycosuria, acidosis, mild increase in the plasma creatinine level, and proteinuria).", "title": "" }, { "docid": "neg:1840023_14", "text": "The now taken-for-granted notion that data lead to information, which leads to knowledge, which in turn leads to wisdom was first specified in detail by R. L. Ackoff in 1988. 
The Data-Information-KnowledgeWisdom hierarchy is based on filtration, reduction, and transformation. Besides being causal and hierarchical, the scheme is pyramidal, in that data are plentiful while wisdom is almost nonexistent. Ackoff’s formula linking these terms together this way permits us to ask what the opposite of knowledge is and whether analogous principles of hierarchy, process, and pyramiding apply to it. The inversion of the DataInformation-Knowledge-Wisdom hierarchy produces a series of opposing terms (including misinformation, error, ignorance, and stupidity) but not exactly a chain or a pyramid. Examining the connections between these phenomena contributes to our understanding of the contours and limits of knowledge. This presentation will revisit the Data-Information-Knowledge-Wisdom hierarchy linking these concepts together as stages of a single developmental process, with the aim of building a taxonomy for a postulated opposite of knowledge, which I will call ‘nonknowledge’. Concepts of data, information, knowledge, and wisdom are the building blocks of library and information science. Discussions and definitions of these terms pervade the literature from introductory textbooks to theoretical research articles (see Zins, 2007). Expressions linking some of these concepts predate the development of information science as a field of study (Sharma 2008). But the first to put all the terms into a single formula was Russell Lincoln Ackoff, in 1989. Ackoff posited a hierarchy at the top of which lay wisdom, and below that understanding, knowledge, information, and data, in that order. Furthermore, he wrote that “each of these includes the categories that fall below it,” and estimated that “on average about forty percent of the human mind consists of data, thirty percent information, twenty percent knowledge, ten percent understanding, and virtually no wisdom” (Ackoff, 1989, 3). This phraseology allows us to view his model as a pyramid, and indeed it has been likened to one ever since (Rowley, 2007; see figure 1). (‘Understanding’ is omitted, since subsequent formulations have not picked up on it.) Ackoff was a management consultant and former professor of management science at the Wharton School specializing in operations research and organizational theory. His article formulating what is now commonly called the Data-InformationKnowledge-Wisdom hierarchy (or DIKW for short) was first given in 1988 as a presidential address to the International Society for General Systems Research. This background may help explain his approach. Data in his terms are the product of observations, and are of no value until they are processed into a usable form to become information. Information is contained in answers to questions. Knowledge, the next layer, further refines information by making “possible the transformation of information into instructions. It makes control of a system possible” (Ackoff, 1989, 4), and that enables one to make it work efficiently. A managerial rather than scholarly perspective runs through Ackoff’s entire hierarchy, so that “understanding” for him", "title": "" }, { "docid": "neg:1840023_15", "text": "Three important stages within automated 3D object reconstruction via multi-image convergent photogrammetry are image pre-processing, interest point detection for feature-based matching and triangular mesh generation. This paper investigates approaches to each of these. 
The Wallis filter is initially examined as a candidate image pre-processor to enhance the performance of the FAST interest point operator. The FAST algorithm is then evaluated as a potential means to enhance the speed, robustness and accuracy of interest point detection for subsequent feature-based matching. Finally, the Poisson Surface Reconstruction algorithm for wireframe mesh generation of objects with potentially complex 3D surface geometry is evaluated. The outcomes of the investigation indicate that the Wallis filter, FAST interest operator and Poisson Surface Reconstruction algorithms present distinct benefits in the context of automated image-based object reconstruction. The reported investigation has advanced the development of an automatic procedure for high-accuracy point cloud generation in multi-image networks, where robust orientation and 3D point determination has enabled surface measurement and visualization to be implemented within a single software system.", "title": "" }, { "docid": "neg:1840023_16", "text": "We study the problem of how to distribute the training of large-scale deep learning models in the parallel computing environment. We propose a new distributed stochastic optimization method called Elastic Averaging SGD (EASGD). We analyze the convergence rate of the EASGD method in the synchronous scenario and compare its stability condition with the existing ADMM method in the round-robin scheme. An asynchronous and momentum variant of the EASGD method is applied to train deep convolutional neural networks for image classification on the CIFAR and ImageNet datasets. Our approach accelerates the training and furthermore achieves better test accuracy. It also requires a much smaller amount of communication than other common baseline approaches such as the DOWNPOUR method. We then investigate the limit in speedup of the initial and the asymptotic phase of the mini-batch SGD, the momentum SGD, and the EASGD methods. We find that the spread of the input data distribution has a big impact on their initial convergence rate and stability region. We also find a surprising connection between the momentum SGD and the EASGD method with a negative moving average rate. A non-convex case is also studied to understand when EASGD can get trapped by a saddle point. Finally, we scale up the EASGD method by using a tree structured network topology. We show empirically its advantage and challenge. We also establish a connection between the EASGD and the DOWNPOUR method with the classical Jacobi and the Gauss-Seidel method, thus unifying a class of distributed stochastic optimization methods.", "title": "" }, { "docid": "neg:1840023_17", "text": "It has long been conjectured that hypothesis spaces suitable for data that is compositional in nature, such as text or images, may be more efficiently represented with deep hierarchical architectures than with shallow ones. Despite the vast empirical evidence, formal arguments to date are limited and do not capture the kind of networks used in practice. Using tensor factorization, we derive a universal hypothesis space implemented by an arithmetic circuit over functions applied to local data structures (e.g. image patches). The resulting networks first pass the input through a representation layer, and then proceed with a sequence of layers comprising sum followed by product-pooling, where sum corresponds to the widely used convolution operator. 
The hierarchical structure of networks is born from factorizations of tensors based on the linear weights of the arithmetic circuits. We show that a shallow network corresponds to a rank-1 decomposition, whereas a deep network corresponds to a Hierarchical Tucker (HT) decomposition. Log-space computation for numerical stability transforms the networks into SimNets.", "title": "" }, { "docid": "neg:1840023_18", "text": "This paper presents a method for decomposing long, complex consumer health questions. Our approach largely decomposes questions using their syntactic structure, recognizing independent questions embedded in clauses, as well as coordinations and exemplifying phrases. Additionally, we identify elements specific to disease-related consumer health questions, such as the focus disease and background information. To achieve this, our approach combines rank-and-filter machine learning methods with rule-based methods. Our results demonstrate significant improvements over the heuristic methods typically employed for question decomposition that rely only on the syntactic parse tree.", "title": "" }, { "docid": "neg:1840023_19", "text": "The R language, from the point of view of language design and implementation, is a unique combination of various programming language concepts. It has functional characteristics like lazy evaluation of arguments, but also allows expressions to have arbitrary side effects. Many runtime data structures, for example variable scopes and functions, are accessible and can be modified while a program executes. Several different object models allow for structured programming, but the object models can interact in surprising ways with each other and with the base operations of R. \n R works well in practice, but it is complex, and it is a challenge for language developers trying to improve on the current state-of-the-art, which is the reference implementation -- GNU R. The goal of this work is to demonstrate that, given the right approach and the right set of tools, it is possible to create an implementation of the R language that provides significantly better performance while keeping compatibility with the original implementation. \n In this paper we describe novel optimizations backed up by aggressive speculation techniques and implemented within FastR, an alternative R language implementation, utilizing Truffle -- a JVM-based language development framework developed at Oracle Labs. We also provide experimental evidence demonstrating effectiveness of these optimizations in comparison with GNU R, as well as Renjin and TERR implementations of the R language.", "title": "" } ]
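Editorial aside (not part of the dataset record above): one of the negative passages in this record summarizes Elastic Averaging SGD (EASGD), in which local workers are elastically pulled toward a shared center variable. The minimal sketch below illustrates that update on a toy quadratic objective; the per-worker objectives, step size and elastic coefficient are illustrative assumptions, not values or code from the cited paper.

```python
# Illustrative sketch only: synchronous EASGD-style updates on toy quadratics.
# Worker objectives f_i(x) = 0.5 * ||x - c_i||^2 are assumptions for the demo.
import numpy as np

def easgd(centers, rounds=200, eta=0.1, rho=0.5):
    """Each worker takes a local gradient step while being elastically
    pulled toward a shared center variable x_tilde."""
    p = len(centers)                      # number of workers
    d = centers[0].shape[0]
    x = [np.zeros(d) for _ in range(p)]   # local variables x_i
    x_tilde = np.zeros(d)                 # center (consensus) variable
    alpha = eta * rho                     # elastic coefficient
    for _ in range(rounds):
        diffs = [xi - x_tilde for xi in x]
        for i in range(p):
            grad_i = x[i] - centers[i]    # gradient of 0.5*||x - c_i||^2
            x[i] = x[i] - eta * grad_i - alpha * diffs[i]
        x_tilde = x_tilde + alpha * sum(diffs)  # center moves toward workers
    return x, x_tilde

workers, center = easgd([np.array([1.0, 0.0]), np.array([0.0, 1.0])])
print(center)  # approaches the average minimizer of the worker objectives
```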
1840024
Screening for Depression Patients in Family Medicine
[ { "docid": "pos:1840024_0", "text": "PHILADELPHIA The difficulties inherent in obtaining consistent and adequate diagnoses for the purposes of research and therapy have been pointed out by a number of authors. Pasamanick12 in a recent article viewed the low interclinician agreement on diagnosis as an indictment of the present state of psychiatry and called for \"the development of objective, measurable and verifiable criteria of classification based not on personal or parochial considerations, buton behavioral and other objectively measurable manifestations.\" Attempts by other investigators to subject clinical observations and judgments to objective measurement have resulted in a wide variety of psychiatric rating ~ c a l e s . ~ J ~ These have been well summarized in a review article by Lorr l1 on \"Rating Scales and Check Lists for the E v a 1 u a t i o n of Psychopathology.\" In the area of psychological testing, a variety of paper-andpencil tests have been devised for the purpose of measuring specific personality traits; for example, the Depression-Elation Test, devised by Jasper in 1930. This report describes the development of an instrument designed to measure the behavioral manifestations of depression. In the planning of the research design of a project aimed at testing certain psychoanalytic formulations of depression, the necessity for establishing an appropriate system for identifying depression was recognized. Because of the reports on the low degree of interclinician agreement on diagnosis,13 we could not depend on the clinical diagnosis, but had to formulate a method of defining depression that would be reliable and valid. The available instruments were not considered adequate for our purposes. The Minnesota Multiphasic Personality Inventory, for example, was not specifically designed", "title": "" } ]
[ { "docid": "neg:1840024_0", "text": "Oblivious RAM (ORAM) protocols are powerful techniques that hide a client’s data as well as access patterns from untrusted service providers. We present an oblivious cloud storage system, ObliviSync, that specifically targets one of the most widely-used personal cloud storage paradigms: synchronization and backup services, popular examples of which are Dropbox, iCloud Drive, and Google Drive. This setting provides a unique opportunity because the above privacy properties can be achieved with a simpler form of ORAM called write-only ORAM, which allows for dramatically increased efficiency compared to related work. Our solution is asymptotically optimal and practically efficient, with a small constant overhead of approximately 4x compared with non-private file storage, depending only on the total data size and parameters chosen according to the usage rate, and not on the number or size of individual files. Our construction also offers protection against timing-channel attacks, which has not been previously considered in ORAM protocols. We built and evaluated a full implementation of ObliviSync that supports multiple simultaneous read-only clients and a single concurrent read/write client whose edits automatically and seamlessly propagate to the readers. We show that our system functions under high work loads, with realistic file size distributions, and with small additional latency (as compared to a baseline encrypted file system) when paired with Dropbox as the synchronization service.", "title": "" }, { "docid": "neg:1840024_1", "text": "Different modalities of magnetic resonance imaging (MRI) can indicate tumor-induced tissue changes from different perspectives, thus benefit brain tumor segmentation when they are considered together. Meanwhile, it is always interesting to examine the diagnosis potential from single modality, considering the cost of acquiring multi-modality images. Clinically, T1-weighted MRI is the most commonly used MR imaging modality, although it may not be the best option for contouring brain tumor. In this paper, we investigate whether synthesizing FLAIR images from T1 could help improve brain tumor segmentation from the single modality of T1. This is achieved by designing a 3D conditional Generative Adversarial Network (cGAN) for FLAIR image synthesis and a local adaptive fusion method to better depict the details of the synthesized FLAIR images. The proposed method can effectively handle the segmentation task of brain tumors that vary in appearance, size and location across samples.", "title": "" }, { "docid": "neg:1840024_2", "text": "This paper presents an adaptation of Lesk’s dictionary– based word sense disambiguation algorithm. Rather than using a standard dictionary as the source of glosses for our approach, the lexical database WordNet is employed. This provides a rich hierarchy of semantic relations that our algorithm can exploit. This method is evaluated using the English lexical sample data from the Senseval-2 word sense disambiguation exercise, and attains an overall accuracy of 32%. This represents a significant improvement over the 16% and 23% accuracy attained by variations of the Lesk algorithm used as benchmarks during the Senseval-2 comparative exercise among word sense disambiguation", "title": "" }, { "docid": "neg:1840024_3", "text": "Spell checking is a well-known task in Natural Language Processing. 
Nowadays, spell checkers are an important component of a number of computer software such as web browsers, word processors and others. Spelling error detection and correction is the process that will check the spelling of words in a document, and in occurrence of any error, list out the correct spelling in the form of suggestions. This survey paper covers different spelling error detection and correction techniques in various languages. KeywordsNLP, Spell Checker, Spelling Errors, Error detection techniques, Error correction techniques.", "title": "" }, { "docid": "neg:1840024_4", "text": "Vaccine hesitancy reflects concerns about the decision to vaccinate oneself or one's children. There is a broad range of factors contributing to vaccine hesitancy, including the compulsory nature of vaccines, their coincidental temporal relationships to adverse health outcomes, unfamiliarity with vaccine-preventable diseases, and lack of trust in corporations and public health agencies. Although vaccination is a norm in the U.S. and the majority of parents vaccinate their children, many do so amid concerns. The proportion of parents claiming non-medical exemptions to school immunization requirements has been increasing over the past decade. Vaccine refusal has been associated with outbreaks of invasive Haemophilus influenzae type b disease, varicella, pneumococcal disease, measles, and pertussis, resulting in the unnecessary suffering of young children and waste of limited public health resources. Vaccine hesitancy is an extremely important issue that needs to be addressed because effective control of vaccine-preventable diseases generally requires indefinite maintenance of extremely high rates of timely vaccination. The multifactorial and complex causes of vaccine hesitancy require a broad range of approaches on the individual, provider, health system, and national levels. These include standardized measurement tools to quantify and locate clustering of vaccine hesitancy and better understand issues of trust; rapid, independent, and transparent review of an enhanced and appropriately funded vaccine safety system; adequate reimbursement for vaccine risk communication in doctors' offices; and individually tailored messages for parents who have vaccine concerns, especially first-time pregnant women. The potential of vaccines to prevent illness and save lives has never been greater. Yet, that potential is directly dependent on parental acceptance of vaccines, which requires confidence in vaccines, healthcare providers who recommend and administer vaccines, and the systems to make sure vaccines are safe.", "title": "" }, { "docid": "neg:1840024_5", "text": "This paper demonstrates a reliable navigation of a mobile robot in outdoor environment. We fuse differential GPS and odometry data using the framework of extended Kalman filter to localize a mobile robot. And also, we propose an algorithm to detect curbs through the laser range finder. An important feature of road environment is the existence of curbs. The mobile robot builds the map of the curbs of roads and the map is used for tracking and localization. The navigation system for the mobile robot consists of a mobile robot and a control station. The mobile robot sends the image data from a camera to the control station. The control station receives and displays the image data and the teleoperator commands the mobile robot based on the image data. 
Since the image data does not contain enough data for reliable navigation, a hybrid strategy for reliable mobile robot in outdoor environment is suggested. When the mobile robot is faced with unexpected obstacles or the situation that, if it follows the command, it can happen to collide, it sends a warning message to the teleoperator and changes the mode from teleoperated to autonomous to avoid the obstacles by itself. After avoiding the obstacles or the collision situation, the mode of the mobile robot is returned to teleoperated mode. We have been able to confirm that the appropriate change of navigation mode can help the teleoperator perform reliable navigation in outdoor environment through experiments in the road.", "title": "" }, { "docid": "neg:1840024_6", "text": "User acceptance of artificial intelligence agents might depend on their ability to explain their reasoning, which requires adding an interpretability layer that facilitates users to understand their behavior. This paper focuses on adding an interpretable layer on top of Semantic Textual Similarity (STS), which measures the degree of semantic equivalence between two sentences. The interpretability layer is formalized as the alignment between pairs of segments across the two sentences, where the relation between the segments is labeled with a relation type and a similarity score. We present a publicly available dataset of sentence pairs annotated following the formalization. We then develop a system trained on this dataset which, given a sentence pair, explains what is similar and different, in the form of graded and typed segment alignments. When evaluated on the dataset, the system performs better than an informed baseline, showing that the dataset and task are well-defined and feasible. Most importantly, two user studies show how the system output can be used to automatically produce explanations in natural language. Users performed better when having access to the explanations, providing preliminary evidence that our dataset and method to automatically produce explanations is useful in real applications.", "title": "" }, { "docid": "neg:1840024_7", "text": "In the current Web scenario a video browsing tool that produces on-the-fly storyboards is more and more a need. Video summary techniques can be helpful but, due to their long processing time, they are usually unsuitable for on-the-fly usage. Therefore, it is common to produce storyboards in advance, penalizing users customization. The lack of customization is more and more critical, as users have different demands and might access the Web with several different networking and device technologies. In this paper we propose STIMO, a summarization technique designed to produce on-the-fly video storyboards. STIMO produces still and moving storyboards and allows advanced users customization (e.g., users can select the storyboard length and the maximum time they are willing to wait to get the storyboard). STIMO is based on a fast clustering algorithm that selects the most representative video contents using HSV frame color distribution. Experimental results show that STIMO produces storyboards with good quality and in a time that makes on-the-fly usage possible.", "title": "" }, { "docid": "neg:1840024_8", "text": "The combination of limited individual information and costly information acquisition in markets for experience goods leads us to believe that significant peer effects drive demand in these markets. 
In this paper we model the effects of peers on the demand patterns of products in the market experience goods microfunding. By analyzing data from an online crowdfunding platform from 2006 to 2010 we are able to ascertain that peer effects, and not network externalities, influence consumption.", "title": "" }, { "docid": "neg:1840024_9", "text": "Stochastic regular bi-languages has been recently proposed to model the joint probability distributions appearing in some statistical approaches of Spoken Dialog Systems. To this end a deterministic and probabilistic finite state biautomaton was defined to model the distribution probabilities for the dialog model. In this work we propose and evaluate decision strategies over the defined probabilistic finite state bi-automaton to select the best system action at each step of the interaction. To this end the paper proposes some heuristic decision functions that consider both action probabilities learn from a corpus and number of known attributes at running time. We compare either heuristics based on a single next turn or based on entire paths over the automaton. Experimental evaluation was carried out to test the model and the strategies over the Let’s Go Bus Information system. The results obtained show good system performances. They also show that local decisions can lead to better system performances than best path-based decisions due to the unpredictability of the user behaviors.", "title": "" }, { "docid": "neg:1840024_10", "text": "A simple framework Probabilistic Multi-view Graph Embedding (PMvGE) is proposed for multi-view feature learning with many-to-many associations so that it generalizes various existing multi-view methods. PMvGE is a probabilistic model for predicting new associations via graph embedding of the nodes of data vectors with links of their associations. Multi-view data vectors with many-to-many associations are transformed by neural networks to feature vectors in a shared space, and the probability of new association between two data vectors is modeled by the inner product of their feature vectors. While existing multi-view feature learning techniques can treat only either of many-to-many association or non-linear transformation, PMvGE can treat both simultaneously. By combining Mercer’s theorem and the universal approximation theorem, we prove that PMvGE learns a wide class of similarity measures across views. Our likelihoodbased estimator enables efficient computation of non-linear transformations of data vectors in largescale datasets by minibatch SGD, and numerical experiments illustrate that PMvGE outperforms existing multi-view methods.", "title": "" }, { "docid": "neg:1840024_11", "text": "Deep Neural Networks (DNNs) have been demonstrated to perform exceptionally well on most recognition tasks such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. This phenomenon has recently attracted a lot of attention but it has not been extensively studied on multiple, large-scale datasets and complex tasks such as semantic segmentation which often require more specialised networks with additional components such as CRFs, dilated convolutions, skip-connections and multiscale processing. In this paper, we present what to our knowledge is the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets. 
We analyse the effect of different network architectures, model capacity and multiscale processing, and show that many observations made on the task of classification do not always transfer to this more complex task. Furthermore, we show how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses. Our observations will aid future efforts in understanding and defending against adversarial examples. Moreover, in the shorter term, we show which segmentation models should currently be preferred in safety-critical applications due to their inherent robustness.", "title": "" }, { "docid": "neg:1840024_12", "text": "We use an online travel context to test three aspects of communication", "title": "" }, { "docid": "neg:1840024_13", "text": "Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.", "title": "" }, { "docid": "neg:1840024_14", "text": "With current projections regarding the growth of Internet sales, online retailing raises many questions about how to market on the Net. While convenience impels consumers to purchase items on the web, quality remains a significant factor in deciding where to shop online. The competition is increasing and personalization is considered to be the competitive advantage that will determine the winners in the market of online shopping in the following years. Recommender systems are a means of personalizing a site and a solution to the customer’s information overload problem. As such, many e-commerce sites already use them to facilitate the buying process. In this paper we present a recommender system for online shopping focusing on the specific characteristics and requirements of electronic retailing. We use a hybrid model supporting dynamic recommendations, which eliminates the problems the underlying techniques have when applied solely. At the end, we conclude with some ideas for further development and research in this area.", "title": "" }, { "docid": "neg:1840024_15", "text": "Convolutional Neural Networks (CNNs) trained on large scale RGB databases have become the secret sauce in the majority of recent approaches for object categorization from RGB-D data. Thanks to colorization techniques, these methods exploit the filters learned from 2D images to extract meaningful representations in 2.5D. 
Still, the perceptual signature of these two kinds of images is very different, with the first usually strongly characterized by textures, and the second mostly by silhouettes of objects. Ideally, one would like to have two CNNs, one for RGB and one for depth, each trained on a suitable data collection, able to capture the perceptual properties of each channel for the task at hand. This has not been possible so far, due to the lack of a suitable depth database. This paper addresses this issue, proposing to opt for synthetically generated images rather than collecting by hand a 2.5D large scale database. While being clearly a proxy for real data, synthetic images allow to trade quality for quantity, making it possible to generate a virtually infinite amount of data. We show that the filters learned from such a data collection, using the very same architecture typically used on visual data, are very different, resulting in depth features (a) able to better characterize the different facets of depth images, and (b) complementary with respect to those derived from CNNs pre-trained on 2D datasets. Experiments on two publicly available databases show the power of our approach.", "title": "" }, { "docid": "neg:1840024_16", "text": "Finding the sparsest solutions to a tensor complementarity problem is generally NP-hard due to the nonconvexity and noncontinuity of the involved $\ell_0$ norm. In this paper, a special type of tensor complementarity problems with Z-tensors has been considered. Under some mild conditions, we show that pursuing the sparsest solutions is equivalent to solving polynomial programming with a linear objective function. The involved conditions guarantee the desired exact relaxation and also allow one to achieve a global optimal solution to the relaxed nonconvex polynomial programming problem. Particularly, in comparison to existing exact relaxation conditions, such as RIP-type ones, our proposed conditions are easy to verify.", "title": "" }, { "docid": "neg:1840024_17", "text": "We present a motion planning framework for autonomous on-road driving considering both the uncertainty caused by an autonomous vehicle and other traffic participants. The future motion of traffic participants is predicted using a local planner, and the uncertainty along the predicted trajectory is computed based on Gaussian propagation. For the autonomous vehicle, the uncertainty from localization and control is estimated based on a Linear-Quadratic Gaussian (LQG) framework. Compared with other safety assessment methods, our framework allows the planner to avoid unsafe situations more efficiently, thanks to the direct uncertainty information feedback to the planner. 
We also demonstrate our planner's ability to generate safer trajectories compared to planning only with a LQG framework.", "title": "" }, { "docid": "neg:1840024_18", "text": "Psoriatic arthritis is one of the spondyloarthritis. It is a disease of clinical heterogenicity, which may affect peripheral joints, as well as axial spine, with presence of inflammatory lesions in soft tissue, in a form of dactylitis and enthesopathy. Plain radiography remains the basic imaging modality for PsA diagnosis, although early inflammatory changes affecting soft tissue and bone marrow cannot be detected with its use, or the image is indistinctive. Typical radiographic features of PsA occur in an advanced disease, mainly within the synovial joints, but also in fibrocartilaginous joints, such as sacroiliac joints, and additionally in entheses of tendons and ligaments. Moll and Wright classified PsA into 5 subtypes: asymmetric oligoarthritis, symmetric polyarthritis, arthritis mutilans, distal interphalangeal arthritis of the hands and feet and spinal column involvement. In this part of the paper we discuss radiographic features of the disease. The next one will address magnetic resonance imaging and ultrasonography.", "title": "" } ]
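Editorial aside (not part of the dataset record above): one of the negative passages in this record summarizes a Lesk-style, gloss-overlap approach to word sense disambiguation backed by WordNet. The sketch below shows only the basic overlap scoring over a hypothetical two-sense inventory; it omits the WordNet relations the passage relies on, so it is an illustration of the general idea rather than the cited method.

```python
# Illustrative sketch only: simplified Lesk gloss-overlap disambiguation.
# The tiny sense inventory below is a made-up assumption for the demo.
def lesk_overlap(context_sentence, senses):
    """Pick the sense whose gloss shares the most words with the context."""
    context = set(context_sentence.lower().split())
    best_sense, best_score = None, -1
    for sense, gloss in senses.items():
        score = len(context & set(gloss.lower().split()))
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

bank_senses = {
    "bank#1": "a financial institution that accepts deposits and lends money",
    "bank#2": "sloping land beside a body of water such as a river",
}
print(lesk_overlap("he sat on the bank of the river watching the water", bank_senses))
# -> "bank#2", since its gloss shares more words with the context
```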
1840025
Liveness Detection Using Gaze Collinearity
[ { "docid": "pos:1840025_0", "text": "We make some simple extensions to the Active Shape Model of Cootes et al. [4], and use it to locate features in frontal views of upright faces. We show on independent test data that with the extensions the Active Shape Model compares favorably with more sophisticated methods. The extensions are (i) fitting more landmarks than are actually needed (ii) selectively using twoinstead of one-dimensional landmark templates (iii) adding noise to the training set (iv) relaxing the shape model where advantageous (v) trimming covariance matrices by setting most entries to zero, and (vi) stacking two Active Shape Models in series.", "title": "" }, { "docid": "pos:1840025_1", "text": "A robust face detection technique along with mouth localization, processing every frame in real time (video rate), is presented. Moreover, it is exploited for motion analysis onsite to verify \"liveness\" as well as to achieve lip reading of digits. A methodological novelty is the suggested quantized angle features (\"quangles\") being designed for illumination invariance without the need for preprocessing (e.g., histogram equalization). This is achieved by using both the gradient direction and the double angle direction (the structure tensor angle), and by ignoring the magnitude of the gradient. Boosting techniques are applied in a quantized feature space. A major benefit is reduced processing time (i.e., that the training of effective cascaded classifiers is feasible in very short time, less than 1 h for data sets of order 104). Scale invariance is implemented through the use of an image scale pyramid. We propose \"liveness\" verification barriers as applications for which a significant amount of computation is avoided when estimating motion. Novel strategies to avert advanced spoofing attempts (e.g., replayed videos which include person utterances) are demonstrated. We present favorable results on face detection for the YALE face test set and competitive results for the CMU-MIT frontal face test set as well as on \"liveness\" verification barriers.", "title": "" }, { "docid": "pos:1840025_2", "text": "To increase reliability of face recognition system, the system must be able to distinguish real face from a copy of face such as a photograph. In this paper, we propose a fast and memory efficient method of live face detection for embedded face recognition system, based on the analysis of the movement of the eyes. We detect eyes in sequential input images and calculate variation of each eye region to determine whether the input face is a real face or not. Experimental results show that the proposed approach is competitive and promising for live face detection. Keywords—Liveness Detection, Eye detection, SQI.", "title": "" } ]
[ { "docid": "neg:1840025_0", "text": "Jump height is a critical aspect of volleyball players' blocking and attacking performance. Although previous studies demonstrated that creatine monohydrate supplementation (CrMS) improves jumping performance, none have yet evaluated its effect among volleyball players with proficient jumping skills. We examined the effect of 4 wk of CrMS on 1 RM spike jump (SJ) and repeated block jump (BJ) performance among 12 elite males of the Sherbrooke University volleyball team. Using a parallel, randomized, double-blind protocol, participants were supplemented with a placebo or creatine solution for 28 d, at a dose of 20 g/d in days 1-4, 10 g/d on days 5-6, and 5 g/d on days 7-28. Pre- and postsupplementation, subjects performed the 1 RM SJ test, followed by the repeated BJ test (10 series of 10 BJs; 3 s interval between jumps; 2 min recovery between series). Due to injuries (N = 2) and outlier data (N = 2), results are reported for eight subjects. Following supplementation, both groups improved SJ and repeated BJ performance. The change in performance during the 1 RM SJ test and over the first two repeated BJ series was unclear between groups. For series 3-6 and 7-10, respectively, CrMS further improved repeated BJ performance by 2.8% (likely beneficial change) and 1.9% (possibly beneficial change), compared with the placebo. Percent repeated BJ decline in performance across the 10 series did not differ between groups pre- and postsupplementation. In conclusion, CrMS likely improved repeated BJ height capability without influencing the magnitude of muscular fatigue in these elite, university-level volleyball players.", "title": "" }, { "docid": "neg:1840025_1", "text": "Image Understanding is fundamental to systems that need to extract contents and infer concepts from images. In this paper, we develop an architecture for understanding images, through which a system can recognize the content and the underlying concepts of an image and, reason and answer questions about both using a visual module, a reasoning module, and a commonsense knowledge base. In this architecture, visual data combines with background knowledge and; iterates through visual and reasoning modules to answer questions about an image or to generate a textual description of an image. We first provide motivations of such a Deep Image Understanding architecture and then, we describe the necessary components it should include. We also introduce our own preliminary implementation of this architecture and empirically show how this more generic implementation compares with a recent end-to-end Neural approach on specific applications. We address the knowledge-representation challenge in such an architecture by representing an image using a directed labeled graph (called Scene Description Graph). Our implementation uses generic visual recognition techniques and commonsense reasoning1 to extract such graphs from images. Our experiments show that the extracted graphs capture the syntactic and semantic content of an image with reasonable accuracy.", "title": "" }, { "docid": "neg:1840025_2", "text": "A frequently asked questions (FAQ) retrieval system improves the access to information by allowing users to pose natural language queries over an FAQ collection. From an information retrieval perspective, FAQ retrieval is a challenging task, mainly because of the lexical gap that exists between a query and an FAQ pair, both of which are typically very short. 
In this work, we explore the use of supervised learning to rank to improve the performance of domain-specific FAQ retrieval. While supervised learning-to-rank models have been shown to yield effective retrieval performance, they require costly human-labeled training data in the form of document relevance judgments or question paraphrases. We investigate how this labeling effort can be reduced using a labeling strategy geared toward the manual creation of query paraphrases rather than the more time-consuming relevance judgments. In particular, we investigate two such strategies, and test them by applying supervised ranking models to two domain-specific FAQ retrieval data sets, showcasing typical FAQ retrieval scenarios. Our experiments show that supervised ranking models can yield significant improvements in the precision-at-rank-5 measure compared to unsupervised baselines. Furthermore, we show that a supervised model trained using data labeled via a low-effort paraphrase-focused strategy has the same performance as that of the same model trained using fully labeled data, indicating that the strategy is effective at reducing the labeling effort while retaining the performance gains of the supervised approach. To encourage further research on FAQ retrieval we make our FAQ retrieval data set publicly available.", "title": "" }, { "docid": "neg:1840025_3", "text": "The nuclear envelope is a physical barrier that isolates the cellular DNA from the rest of the cell, thereby limiting pathogen invasion. The Human Immunodeficiency Virus (HIV) has a remarkable ability to enter the nucleus of non-dividing target cells such as lymphocytes, macrophages and dendritic cells. While this step is critical for replication of the virus, it remains one of the less understood aspects of HIV infection. Here, we review the viral and host factors that favor or inhibit HIV entry into the nucleus, including the viral capsid, integrase, the central viral DNA flap, and the host proteins CPSF6, TNPO3, Nucleoporins, SUN1, SUN2, Cyclophilin A and MX2. We review recent perspectives on the mechanism of action of these factors, and formulate fundamental questions that remain. Overall, these findings deepen our understanding of HIV nuclear import and strengthen the favorable position of nuclear HIV entry for antiviral targeting.", "title": "" }, { "docid": "neg:1840025_4", "text": "Since 2006, Alberts and Dorofee have led MSCE with a focus on returning risk management to its original intent—supporting effective management decisions that lead to program success. They began rethinking the traditional approaches to risk management, which led to the development of SEI Mosaic, a suite of methodologies that approach managing risk from a systemic view across the life cycle and supply chain. Using a systemic risk management approach enables program managers to develop and implement strategic, high-leverage mitigation solutions that align with mission and objectives.", "title": "" }, { "docid": "neg:1840025_5", "text": "Tumor heterogeneity presents a challenge for inferring clonal evolution and driver gene identification. Here, we describe a method for analyzing the cancer genome at a single-cell nucleotide level. To perform our analyses, we first devised and validated a high-throughput whole-genome single-cell sequencing method using two lymphoblastoid cell line single cells. We then carried out whole-exome single-cell sequencing of 90 cells from a JAK2-negative myeloproliferative neoplasm patient. 
The sequencing data from 58 cells passed our quality control criteria, and these data indicated that this neoplasm represented a monoclonal evolution. We further identified essential thrombocythemia (ET)-related candidate mutations such as SESN2 and NTRK1, which may be involved in neoplasm progression. This pilot study allowed the initial characterization of the disease-related genetic architecture at the single-cell nucleotide level. Further, we established a single-cell sequencing method that opens the way for detailed analyses of a variety of tumor types, including those with high genetic complexity between patients.", "title": "" }, { "docid": "neg:1840025_6", "text": "Facebook, as one of the most popular social networking sites among college students, provides a platform for people to manage others' impressions of them. People tend to present themselves in a favorable way on their Facebook profile. This research examines the impact of using Facebook on people's perceptions of others' lives. It is argued that those with deeper involvement with Facebook will have different perceptions of others than those less involved due to two reasons. First, Facebook users tend to base judgment on examples easily recalled (the availability heuristic). Second, Facebook users tend to attribute the positive content presented on Facebook to others' personality, rather than situational factors (correspondence bias), especially for those they do not know personally. Questionnaires, including items measuring years of using Facebook, time spent on Facebook each week, number of people listed as their Facebook \"friends,\" and perceptions about others' lives, were completed by 425 undergraduate students taking classes across various academic disciplines at a state university in Utah. Surveys were collected during regular class period, except for two online classes where surveys were submitted online. The multivariate analysis indicated that those who have used Facebook longer agreed more that others were happier, and agreed less that life is fair, and those spending more time on Facebook each week agreed more that others were happier and had better lives. Furthermore, those that included more people whom they did not personally know as their Facebook \"friends\" agreed more that others had better lives.", "title": "" }, { "docid": "neg:1840025_7", "text": "This paper presents a deadlock prevention approach used to solve the deadlock problem of flexible manufacturing systems (FMS). Petri nets have been used successfully as one of the most powerful tools for modeling FMS. Their modeling power and a mathematical arsenal supporting the analysis of the modeled systems stimulate the increasing interest in Petri nets. Among the structural objects of Petri nets (PNs), siphons are important in the analysis and control of deadlocks because of their excellent properties. A deadlock prevention method that addresses the deadlocks caused by unmarked siphons is presented in this work, with Petri nets used as an effective way to model, analyze, simulate and control deadlocks in FMS. The characterization of special structural elements of Petri nets, the so-called siphons, has been a major approach for the investigation of deadlock-freeness in FMS. Siphons are structures with implications for the net's behaviour; they can be well controlled by adding a control place (called a monitor) for each uncontrolled siphon in the net in order to reach a deadlock-free situation in the system. 
Finally, we propose a method for the modeling, simulation and control of FMS using Petri nets, in which deadlock analysis of a production line with parallel processing is demonstrated by a practical example using the Petri net toolbox in MATLAB; the approach is effective and explicit, although it relies on off-line computation.", "title": "" }, { "docid": "neg:1840025_8", "text": "The Semantic Web graph is growing at an incredible pace, enabling opportunities to discover new knowledge by interlinking and analyzing previously unconnected data sets. This confronts researchers with a conundrum: Whilst the data is available, the programming models that facilitate scalability and the infrastructure to run various algorithms on the graph are missing. Some use MapReduce – a good solution for many problems. However, even some simple iterative graph algorithms do not map nicely to that programming model, requiring programmers to shoehorn their problem to the MapReduce model. This paper presents the Signal/Collect programming model for synchronous and asynchronous graph algorithms. We demonstrate that this abstraction can capture the essence of many algorithms on graphs in a concise and elegant way by giving Signal/Collect adaptations of various relevant algorithms. Furthermore, we built and evaluated a prototype Signal/Collect framework that executes algorithms in our programming model. We empirically show that this prototype transparently scales and that guiding computations by scoring as well as asynchronicity can greatly improve the convergence of some example algorithms. We released the framework under the Apache License 2.0 (at http://www.ifi.uzh.ch/ddis/research/sc).", "title": "" }, { "docid": "neg:1840025_9", "text": "A fog radio access network (F-RAN) is studied, in which $K_T$ edge nodes (ENs), connected to a cloud server via orthogonal fronthaul links, serve $K_R$ users through a wireless Gaussian interference channel. Both the ENs and the users have finite-capacity cache memories, which are filled before the user demands are revealed. While a centralized placement phase is used for the ENs, which model static base stations, a decentralized placement is leveraged for the mobile users. An achievable transmission scheme is presented, which employs a combination of interference alignment, zero-forcing and interference cancellation techniques in the delivery phase, and the \textit{normalized delivery time} (NDT), which captures the worst-case latency, is analyzed.", "title": "" }, { "docid": "neg:1840025_10", "text": "It is said that there's nothing so practical as good theory. It may also be said that there's nothing so theoretically interesting as good practice. This is particularly true of efforts to relate constructivism as a theory of learning to the practice of instruction. Our goal in this paper is to provide a clear link between the theoretical principles of constructivism, the practice of instructional design, and the practice of teaching. We will begin with a basic characterization of constructivism identifying what we believe to be the central principles in learning and understanding. We will then identify and elaborate on eight instructional principles for the design of a constructivist learning environment. 
Finally, we will examine what we consider to be one of the best exemplars of a constructivist learning environment -Problem Based Learning as described by Barrows (1985, 1986, 1992).", "title": "" }, { "docid": "neg:1840025_11", "text": "Research in face recognition has largely been divided between those projects concerned with front-end image processing and those projects concerned with memory for familiar people. These perceptual and cognitive programmes of research have proceeded in parallel, with only limited mutual influence. In this paper we present a model of human face recognition which combines both a perceptual and a cognitive component. The perceptual front-end is based on principal components analysis of images, and the cognitive back-end is based on a simple interactive activation and competition architecture. We demonstrate that this model has a much wider predictive range than either perceptual or cognitive models alone, and we show that this type of combination is necessary in order to analyse some important effects in human face recognition. In sum, the model takes varying images of \"known\" faces and delivers information about these people.", "title": "" }, { "docid": "neg:1840025_12", "text": "Phosphatidylinositol-3,4,5-trisphosphate (PtdIns(3,4,5)P3 or PIP3) mediates signalling pathways as a second messenger in response to extracellular signals. Although primordial functions of phospholipids and RNAs have been hypothesized in the ‘RNA world’, physiological RNA–phospholipid interactions and their involvement in essential cellular processes have remained a mystery. We explicate the contribution of lipid-binding long non-coding RNAs (lncRNAs) in cancer cells. Among them, long intergenic non-coding RNA for kinase activation (LINK-A) directly interacts with the AKT pleckstrin homology domain and PIP3 at the single-nucleotide level, facilitating AKT–PIP3 interaction and consequent enzymatic activation. LINK-A-dependent AKT hyperactivation leads to tumorigenesis and resistance to AKT inhibitors. Genomic deletions of the LINK-A PIP3-binding motif dramatically sensitized breast cancer cells to AKT inhibitors. Furthermore, meta-analysis showed the correlation between LINK-A expression and incidence of a single nucleotide polymorphism (rs12095274: A > G), AKT phosphorylation status, and poor outcomes for breast and lung cancer patients. PIP3-binding lncRNA modulates AKT activation with broad clinical implications.", "title": "" }, { "docid": "neg:1840025_13", "text": "The use of wireless technologies in automation systems offers attractive benefits, but introduces a number of new technological challenges. The paper discusses these aspects for home and building automation applications. Relevant standards are surveyed. A wireless extension to KNX/EIB based on tunnelling over IEEE 802.15.4 is presented. The design emulates the properties of the KNX/EIB wired medium via wireless communication, allowing a seamless extension. Furthermore, it is geared towards zero-configuration and supports the easy integration of protocol security.", "title": "" }, { "docid": "neg:1840025_14", "text": "Thanks to advances in medical imaging technologies and numerical methods, patient-specific modelling is more and more used to improve diagnosis and to estimate the outcome of surgical interventions. It requires the extraction of the domain of interest from the medical scans of the patient, as well as the discretisation of this geometry. 
However, extracting smooth multi-material meshes that conform to the tissue boundaries described in the segmented image is still an active field of research. We propose to solve this issue by combining an implicit surface reconstruction method with a multi-region mesh extraction scheme. The surface reconstruction algorithm is based on multi-level partition of unity implicit surfaces, which we extended to the multi-material case. The mesh generation algorithm consists in a novel multi-domain version of the marching tetrahedra. It generates multi-region meshes as a set of triangular surface patches consistently joining each other at material junctions. This paper presents this original meshing strategy, starting from boundary points extraction from the segmented data to heterogeneous implicit surface definition, multi-region surface triangulation and mesh adaptation. Results indicate that the proposed approach produces smooth and high-quality triangular meshes with a reasonable geometric accuracy. Hence, the proposed method is well suited for subsequent volume mesh generation and finite element simulations.", "title": "" }, { "docid": "neg:1840025_15", "text": "This paper describes first results using the Unified Medical Language System (UMLS) for distantly supervised relation extraction. UMLS is a large knowledge base which contains information about millions of medical concepts and relations between them. Our approach is evaluated using existing relation extraction data sets that contain relations that are similar to some of those in UMLS.", "title": "" }, { "docid": "neg:1840025_16", "text": "Hope is the sum of goal thoughts as tapped by pathways and agency. Pathways reflect the perceived capability to produce goal routes; agency reflects the perception that one can initiate action along these pathways. Using trait and state hope scales, studies explored hope in college student athletes. In Study 1, male and female athletes were higher in trait hope than nonathletes; moreover, hope significantly predicted semester grade averages beyond cumulative grade point average and overall self-worth. In Study 2, with female cross-country athletes, trait hope predicted athletic outcomes; further, weekly state hope tended to predict athletic outcomes beyond dispositional hope, training, and self-esteem, confidence, and mood. In Study 3, with female track athletes, dispositional hope significantly predicted athletic outcomes beyond variance related to athletic abilities and affectivity; moreover, athletes had higher hope than nonathletes.", "title": "" }, { "docid": "neg:1840025_17", "text": "A single, stationary topic model such as latent Dirichlet allocation is inappropriate for modeling corpora that span long time periods, as the popularity of topics is likely to change over time. A number of models that incorporate time have been proposed, but in general they either exhibit limited forms of temporal variation, or require computationally expensive inference methods. In this paper we propose non-parametric Topics over Time (npTOT), a model for time-varying topics that allows an unbounded number of topics and flexible distribution over the temporal variations in those topics’ popularity. 
We develop a collapsed Gibbs sampler for the proposed model and compare against existing models on synthetic and real document sets.", "title": "" }, { "docid": "neg:1840025_18", "text": "This paper gives a broad overview of a complete framework for assessing the predictive uncertainty of scientific computing applications. The framework is complete in the sense that it treats both types of uncertainty (aleatory and epistemic) and incorporates uncertainty due to the form of the model and any numerical approximations used. Aleatory (or random) uncertainties in model inputs are treated using cumulative distribution functions, while epistemic (lack of knowledge) uncertainties are treated as intervals. Approaches for propagating both types of uncertainties through the model to the system response quantities of interest are discussed. Numerical approximation errors (due to discretization, iteration, and round off) are estimated using verification techniques, and the conversion of these errors into epistemic uncertainties is discussed. Model form uncertainties are quantified using model validation procedures, which include a comparison of model predictions to experimental data and then extrapolation of this uncertainty structure to points in the application domain where experimental data do not exist. Finally, methods for conveying the total predictive uncertainty to decision makers are presented.", "title": "" } ]
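The final passage in the list above describes propagating aleatory uncertainty (carried as probability distributions) and epistemic uncertainty (carried as intervals) through a model and conveying the result to decision makers. The sketch below is a minimal double-loop illustration of that idea; the callables, the plain grid over the epistemic interval, and the sample count are assumptions made for illustration, not details taken from the passage.

import numpy as np

# Double-loop sketch: the outer loop sweeps epistemic (interval-valued) inputs,
# the inner loop samples aleatory (random) inputs, and every epistemic value
# yields one empirical CDF of the response, so the family of CDFs conveys the
# combined predictive uncertainty.
def propagate_uncertainty(model, sample_aleatory, epistemic_values,
                          n_samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    cdf_family = []
    for theta in epistemic_values:              # outer, epistemic loop
        x = sample_aleatory(rng, n_samples)     # inner, aleatory sampling
        y = np.sort(model(x, theta))            # sorted responses = empirical CDF
        cdf_family.append(y)
    return np.array(cdf_family)                 # one row per epistemic realization

# Hypothetical usage: a normally distributed load and a stiffness known only
# to lie in an interval.
# cdfs = propagate_uncertainty(lambda x, e: x / e,
#                              lambda rng, n: rng.normal(10.0, 1.0, n),
#                              np.linspace(1.8, 2.2, 5))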
1840026
Offline signature verification using classifier combination of HOG and LBP features
[ { "docid": "pos:1840026_0", "text": "A method for conducting off-line handwritten signature verification is described. It works at the global image level and measures the grey level variations in the image using statistical texture features. The co-occurrence matrix and local binary pattern are analysed and used as features. This method begins with a proposed background removal. A histogram is also processed to reduce the influence of different writing ink pens used by signers. Genuine samples and random forgeries have been used to train an SVM model and random and skilled forgeries have been used for testing it. Results are reasonable according to the state-of-the-art and approaches that use the same two databases: MCYT-75 and GPDS100 Corpuses. The combination of the proposed features and those proposed by other authors, based on geometric information, also promises improvements in performance. & 2010 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "neg:1840026_0", "text": "Understanding human actions is a key problem in computer vision. However, recognizing actions is only the first step of understanding what a person is doing. In this paper, we introduce the problem of predicting why a person has performed an action in images. This problem has many applications in human activity understanding, such as anticipating or explaining an action. To study this problem, we introduce a new dataset of people performing actions annotated with likely motivations. However, the information in an image alone may not be sufficient to automatically solve this task. Since humans can rely on their lifetime of experiences to infer motivation, we propose to give computer vision systems access to some of these experiences by using recently developed natural language models to mine knowledge stored in massive amounts of text. While we are still far away from fully understanding motivation, our results suggest that transferring knowledge from language into vision can help machines understand why people in images might be performing an action.", "title": "" }, { "docid": "neg:1840026_1", "text": "The purpose of this study was to describe how reaching onset affects the way infants explore objects and their own bodies. We followed typically developing infants longitudinally from 2 through 5 months of age. At each visit we coded the behaviors infants performed with their hand when an object was attached to it versus when the hand was bare. We found increases in the performance of most exploratory behaviors after the emergence of reaching. These increases occurred both with objects and with bare hands. However, when interacting with objects, infants performed the same behaviors they performed on their bare hands but they performed them more often and in unique combinations. The results support the tenets that: (1) the development of object exploration begins in the first months of life as infants learn to selectively perform exploratory behaviors on their bodies and objects, (2) the onset of reaching is accompanied by significant increases in exploration of both objects and one's own body, (3) infants adapt their self-exploratory behaviors by amplifying their performance and combining them in unique ways to interact with objects.", "title": "" }, { "docid": "neg:1840026_2", "text": "This paper describes recent work on the “Crosswatch” project, which is a computer vision-based smartphone system developed for providing guidance to blind and visually impaired travelers at traffic intersections. A key function of Crosswatch is self-localization - the estimation of the user's location relative to the crosswalks in the current traffic intersection. Such information may be vital to users with low or no vision to ensure that they know which crosswalk they are about to enter, and are properly aligned and positioned relative to the crosswalk. However, while computer vision-based methods have been used for finding crosswalks and helping blind travelers align themselves to them, these methods assume that the entire crosswalk pattern can be imaged in a single frame of video, which poses a significant challenge for a user who lacks enough vision to know where to point the camera so as to properly frame the crosswalk. 
In this paper we describe work in progress that tackles the problem of crosswalk detection and self-localization, building on recent work describing techniques enabling blind and visually impaired users to acquire 360° image panoramas while turning in place on a sidewalk. The image panorama is converted to an aerial (overhead) view of the nearby intersection, centered on the location that the user is standing at, so as to facilitate matching with a template of the intersection obtained from Google Maps satellite imagery. The matching process allows crosswalk features to be detected and permits the estimation of the user's precise location relative to the crosswalk of interest. We demonstrate our approach on intersection imagery acquired by blind users, thereby establishing the feasibility of the approach.", "title": "" }, { "docid": "neg:1840026_3", "text": "Recent research endeavors have shown the potential of using feed-forward convolutional neural networks to accomplish fast style transfer for images. In this work, we take one step further to explore the possibility of exploiting a feed-forward network to perform style transfer for videos and simultaneously maintain temporal consistency among stylized video frames. Our feed-forward network is trained by enforcing the outputs of consecutive frames to be both well stylized and temporally consistent. More specifically, a hybrid loss is proposed to capitalize on the content information of input frames, the style information of a given style image, and the temporal information of consecutive frames. To calculate the temporal loss during the training stage, a novel two-frame synergic training mechanism is proposed. Compared with directly applying an existing image style transfer method to videos, our proposed method employs the trained network to yield temporally consistent stylized videos which are much more visually pleasant. In contrast to the prior video style transfer method which relies on time-consuming optimization on the fly, our method runs in real time while generating competitive visual results.", "title": "" }, { "docid": "neg:1840026_4", "text": "[Purpose] The present study was performed to evaluate the changes in the scapular alignment, pressure pain threshold and pain in subjects with scapular downward rotation after 4 weeks of wall slide exercise or sling slide exercise. [Subjects and Methods] Twenty-two subjects with scapular downward rotation participated in this study. The alignment of the scapula was measured using radiographic analysis (X-ray). Pain and pressure pain threshold were assessed using visual analogue scale and digital algometer. Patients were assessed before and after a 4 weeks of exercise. [Results] In the within-group comparison, the wall slide exercise group showed significant differences in the resting scapular alignment, pressure pain threshold, and pain after four weeks. The between-group comparison showed that there were significant differences between the wall slide group and the sling slide group after four weeks. [Conclusion] The results of this study found that the wall slide exercise may be effective at reducing pain and improving scapular alignment in subjects with scapular downward rotation.", "title": "" }, { "docid": "neg:1840026_5", "text": "In this paper, a new dataset, HazeRD, is proposed for benchmarking dehazing algorithms under more realistic haze conditions. HazeRD contains fifteen real outdoor scenes, for each of which five different weather conditions are simulated. 
As opposed to prior datasets that made use of synthetically generated images or indoor images with unrealistic parameters for haze simulation, our outdoor dataset allows for more realistic simulation of haze with parameters that are physically realistic and justified by scattering theory. All images are of high resolution, typically six to eight megapixels. We test the performance of several state-of-the-art dehazing techniques on HazeRD. The results exhibit a significant difference among algorithms across the different datasets, reiterating the need for more realistic datasets such as ours and for more careful benchmarking of the methods.", "title": "" }, { "docid": "neg:1840026_6", "text": "Routing protocols for Wireless Sensor Networks (WSN) are designed to select parent nodes so that data packets can reach their destination in a timely and efficient manner. Typically neighboring nodes with strongest connectivity are more selected as parents. This Greedy Routing approach can lead to unbalanced routing loads in the network. Consequently, the network experiences the early death of overloaded nodes causing permanent network partition. Herein, we propose a framework for load balancing of routing in WSN. In-network path tagging is used to monitor network traffic load of nodes. Based on this, nodes are identified as being relatively overloaded, balanced or underloaded. A mitigation algorithm finds suitable new parents for switching from overloaded nodes. The routing engine of the child of the overloaded node is then instructed to switch parent. A key future of the proposed framework is that it is primarily implemented at the Sink and so requires few changes to existing routing protocols. The framework was implemented in TinyOS on TelosB motes and its performance was assessed in a testbed network and in TOSSIM simulation. The algorithm increased the lifetime of the network by 41 % as recorded in the testbed experiment. The Packet Delivery Ratio was also improved from 85.97 to 99.47 %. Finally a comparative study was performed using the proposed framework with various existing routing protocols.", "title": "" }, { "docid": "neg:1840026_7", "text": "One of the first steps in the utterance interpretation pipeline of many task-oriented conversational AI systems is to identify user intents and the corresponding slots. Neural sequence labeling models have achieved very high accuracy on these tasks when trained on large amounts of training data. However, collecting this data is very time-consuming and therefore it is unfeasible to collect large amounts of data for many languages. For this reason, it is desirable to make use of existing data in a high-resource language to train models in low-resource languages. In this paper, we investigate the performance of three different methods for cross-lingual transfer learning, namely (1) translating the training data, (2) using cross-lingual pre-trained embeddings, and (3) a novel method of using a multilingual machine translation encoder as contextual word representations. We find that given several hundred training examples in the the target language, the latter two methods outperform translating the training data. Further, in very low-resource settings, we find that multilingual contextual word representations give better results than using crosslingual static embeddings. 
We release a dataset of around 57k annotated utterances in English (43k), Spanish (8.6k) and Thai (5k) for three task oriented domains at https://fb.me/multilingual_task_oriented_data.", "title": "" }, { "docid": "neg:1840026_8", "text": "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground/background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-art models.", "title": "" }, { "docid": "neg:1840026_9", "text": "Euler diagrams visually represent containment, intersection and exclusion using closed curves. They first appeared several hundred years ago, however, there has been a resurgence in Euler diagram research in the twenty-first century. This was initially driven by their use in visual languages, where they can be used to represent logical expressions diagrammatically. This work lead to the requirement to automatically generate Euler diagrams from an abstract description. The ability to generate diagrams has accelerated their use in information visualization, both in the standard case where multiple grouping of data items inside curves is required and in the area-proportional case where the area of curve intersections is important. As a result, examining the usability of Euler diagrams has become an important aspect of this research. Usability has been investigated by empirical studies, but much research has concentrated on wellformedness, which concerns how curves and other features of the diagram interrelate. This work has revealed the drawability of Euler diagrams under various wellformedness properties and has developed embedding methods that meet these properties. Euler diagram research surveyed in this paper includes theoretical results, generation techniques, transformation methods and the development of automated reasoning systems for Euler diagrams. It also overviews application areas and the ways in which Euler diagrams have been extended.", "title": "" }, { "docid": "neg:1840026_10", "text": "Sememes are defined as the minimum semantic units of human languages. People have manually annotated lexical sememes for words and form linguistic knowledge bases. However, manual construction is time-consuming and labor-intensive, with significant annotation inconsistency and noise. In this paper, we for the first time explore to automatically predict lexical sememes based on semantic meanings of words encoded by word embeddings. Moreover, we apply matrix factorization to learn semantic relations between sememes and words. 
In experiments, we take a real-world sememe knowledge base HowNet for training and evaluation, and the results reveal the effectiveness of our method for lexical sememe prediction. Our method will be of great use for annotation verification of existing noisy sememe knowledge bases and annotation suggestion of new words and phrases.", "title": "" }, { "docid": "neg:1840026_11", "text": "Overview Aggressive driving is a major concern of the American public, ranking at or near the top of traffic safety issues in national surveys of motorists. However, the concept of aggressive driving is not well defined, and its overall impact on traffic safety has not been well quantified due to inadequacies and limitation of available data. This paper reviews published scientific literature on aggressive driving; discusses various definitions of aggressive driving; cites several specific behaviors that are typically associated with aggressive driving; and summarizes past research on the individuals or groups most likely to behave aggressively. Since adequate data to precisely quantify the percentage of fatal crashes that involve aggressive driving do not exist, in this review, we have quantified the number of fatal crashes in which one or more driver actions typically associated with aggressive driving were reported. We found these actions were reported in 56 percent of fatal crashes from 2003 through 2007, with excessive speed being the number one factor. Ideally, an estimate of the prevalence of aggressive driving would include only instances in which such actions were performed intentionally; however, available data on motor vehicle crashes do not contain such information, thus it is important to recognize that this 56 percent may to some degree overestimate the contribution of aggressive driving to fatal crashes. On the other hand, it is likely that aggressive driving contributes to at least some crashes in which it is not reported due to lack of evidence. Despite the clear limitations associated with our attempt to estimate the contribution of potentially-aggressive driver actions to fatal crashes, it is clear that aggressive driving poses a serious traffic safety threat. In addition, our review further indicated that the \" Do as I say, not as I do \" culture, previously reported in the Foundation's Traffic Safety Culture Index, very much applies to aggressive driving.", "title": "" }, { "docid": "neg:1840026_12", "text": "Ontology is playing an increasingly important role in knowledge management and the Semantic Web. This study presents a novel episode-based ontology construction mechanism to extract domain ontology from unstructured text documents. Additionally, fuzzy numbers for conceptual similarity computing are presented for concept clustering and taxonomic relation definitions. Moreover, concept attributes and operations can be extracted from episodes to construct a domain ontology, while non-taxonomic relations can be generated from episodes. The fuzzy inference mechanism is also applied to obtain new instances for ontology learning. Experimental results show that the proposed approach can effectively construct a Chinese domain ontology from unstructured text documents. 2006 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840026_13", "text": "Due to the heterogeneous and resource-constrained characters of Internet of Things (IoT), how to guarantee ubiquitous network connectivity is challenging. 
Although LTE cellular technology is the most promising solution to provide network connectivity in IoTs, information diffusion by cellular network not only occupies its saturating bandwidth, but also costs additional fees. Recently, NarrowBand-IoT (NB-IoT), introduced by 3GPP, is designed for low-power massive devices, which intends to refarm wireless spectrum and increase network coverage. For the sake of providing high link connectivity and capacity, we stimulate effective cooperations among user equipments (UEs), and propose a social-aware group formation framework to allocate resource blocks (RBs) effectively following an in-band NB-IoT solution. Specifically, we first introduce a social-aware multihop device-to-device (D2D) communication scheme to upload information toward the eNodeB within an LTE, so that a logical cooperative D2D topology can be established. Then, we formulate the D2D group formation as a scheduling optimization problem for RB allocation, which selects the feasible partition for the UEs by jointly considering relay method selection and spectrum reuse for NB-IoTs. Since the formulated optimization problem has a high computational complexity, we design a novel heuristic with a comprehensive consideration of power control and relay selection. Performance evaluations based on synthetic and real trace simulations manifest that the presented method can significantly increase link connectivity, link capacity, network throughput, and energy efficiency comparing with the existing solutions.", "title": "" }, { "docid": "neg:1840026_14", "text": "This document is not intended to create, does not create, and may not be relied upon to create any rights, substantive or procedural, enforceable by law by any party in any matter civil or criminal. Findings and conclusions of the research reported here are those of the authors and do not necessarily reflect the official position or policies of the U.S. Department of Justice. The products, manufacturers, and organizations discussed in this document are presented for informational purposes only and do not constitute product approval or endorsement by the Much of crime mapping is devoted to detecting high-crime-density areas known as hot spots. Hot spot analysis helps police identify high-crime areas, types of crime being committed, and the best way to respond. This report discusses hot spot analysis techniques and software and identifies when to use each one. The visual display of a crime pattern on a map should be consistent with the type of hot spot and possible police action. For example, when hot spots are at specific addresses, a dot map is more appropriate than an area map, which would be too imprecise. In this report, chapters progress in sophis­ tication. Chapter 1 is for novices to crime mapping. Chapter 2 is more advanced, and chapter 3 is for highly experienced analysts. The report can be used as a com­ panion to another crime mapping report ■ Identifying hot spots requires multiple techniques; no single method is suffi­ cient to analyze all types of crime. ■ Current mapping technologies have sig­ nificantly improved the ability of crime analysts and researchers to understand crime patterns and victimization. 
■ Crime hot spot maps can most effective­ ly guide police action when production of the maps is guided by crime theories (place, victim, street, or neighborhood).", "title": "" }, { "docid": "neg:1840026_15", "text": "With a goal of understanding what drives generalization in deep networks, we consider several recently suggested explanations, including norm-based control, sharpness and robustness. We study how these measures can ensure generalization, highlighting the importance of scale normalization, and making a connection between sharpness and PAC-Bayes theory. We then investigate how well the measures explain different observed phenomena.", "title": "" }, { "docid": "neg:1840026_16", "text": "The high power density converter is required due to the strict demands of volume and weight in more electric aircraft, which makes SiC extremely attractive for this application. In this work, a prototype of 50 kW SiC high power density converter with the topology of two-level three-phase voltage source inverter is demonstrated. This converter is driven at high switching speed based on the optimization in switching characterization. It operates at a switching frequency up to 100 kHz and a low dead time of 250 ns. And the converter efficiency is measured to be 99% at 40 kHz and 97.8% at 100 kHz.", "title": "" }, { "docid": "neg:1840026_17", "text": "BACKGROUND\nNutritional supplementation may be used to treat muscle loss with aging (sarcopenia). However, if physical activity does not increase, the elderly tend to compensate for the increased energy delivered by the supplements with reduced food intake, which results in a calorie substitution rather than supplementation. Thus, an effective supplement should stimulate muscle anabolism more efficiently than food or common protein supplements. We have shown that balanced amino acids stimulate muscle protein anabolism in the elderly, but it is unknown whether all amino acids are necessary to achieve this effect.\n\n\nOBJECTIVE\nWe assessed whether nonessential amino acids are required in a nutritional supplement to stimulate muscle protein anabolism in the elderly.\n\n\nDESIGN\nWe compared the response of muscle protein metabolism to either 18 g essential amino acids (EAA group: n = 6, age 69 +/- 2 y; +/- SD) or 40 g balanced amino acids (18 g essential amino acids + 22 g nonessential amino acids, BAA group; n = 8, age 71 +/- 2 y) given orally in small boluses every 10 min for 3 h to healthy elderly volunteers. Muscle protein metabolism was measured in the basal state and during amino acid administration via L-[ring-(2)H(5)]phenylalanine infusion, femoral arterial and venous catheterization, and muscle biopsies.\n\n\nRESULTS\nPhenylalanine net balance (in nmol x min(-1). 100 mL leg volume(-1)) increased from the basal state (P < 0.01), with no differences between groups (BAA: from -16 +/- 5 to 16 +/- 4; EAA: from -18 +/- 5 to 14 +/- 13) because of an increase (P < 0.01) in muscle protein synthesis and no change in breakdown.\n\n\nCONCLUSION\nEssential amino acids are primarily responsible for the amino acid-induced stimulation of muscle protein anabolism in the elderly.", "title": "" }, { "docid": "neg:1840026_18", "text": "AUTOSAR is a standard for the development of software for embedded devices, primarily created for the automotive domain. It specifies a software architecture with more than 80 software modules that provide services to one or more software components. 
With the trend towards integrating safety-relevant systems into embedded devices, conformance with standards such as ISO 26262 [ISO11] or ISO/IEC 61508 [IEC10] becomes increasingly important. This article presents an approach to providing freedom from interference between software components by using the MPU available on many modern microcontrollers. Each software component gets its own dedicated memory area, a so-called memory partition. This concept is well known in other industries like the aerospace industry, where the IMA architecture is now well established. The memory partitioning mechanism is implemented by a microkernel, which integrates seamlessly into the architecture specified by AUTOSAR. The development has been performed as SEooC as described in ISO 26262, which is a new development approach. We describe the procedure for developing an SEooC. AUTOSAR: AUTomotive Open System ARchitecture, see [ASR12]. MPU: Memory Protection Unit. 3 IMA: Integrated Modular Avionics, see [RTCA11]. 4 SEooC: Safety Element out of Context, see [ISO11].", "title": "" } ]
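One of the passages in the list above introduces the HazeRD benchmark, in which haze is simulated with physically realistic parameters justified by scattering theory. As a purely illustrative sketch, the snippet below applies the standard atmospheric scattering model commonly used for this kind of simulation, I = J*t + A*(1 - t) with transmission t = exp(-beta*d); the particular attenuation coefficient and airlight value are assumptions, not the settings used by HazeRD.

import numpy as np

def add_haze(clean, depth_m, beta=0.01, airlight=0.76):
    # clean:   H x W x 3 image with values in [0, 1]
    # depth_m: H x W scene depth in meters (e.g., from a depth sensor or stereo)
    t = np.exp(-beta * depth_m)                # per-pixel transmission
    t = t[..., None]                           # broadcast over color channels
    hazy = clean * t + airlight * (1.0 - t)    # attenuated scene plus scattered airlight
    return np.clip(hazy, 0.0, 1.0)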
1840027
A Parallel Method for Earth Mover's Distance
[ { "docid": "pos:1840027_0", "text": "We propose simple and extremely efficient methods for solving the Basis Pursuit problem min{‖u‖1 : Au = f, u ∈ R}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min u∈Rn μ‖u‖1 + 1 2 ‖Au− f‖2, for given matrix A and vector fk. We show analytically that this iterative approach yields exact solutions in a finite number of steps, and present numerical results that demonstrate that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving A and A> can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is solely based on such operations for solving the above unconstrained sub-problem, we were able to solve huge instances of compressed sensing problems quickly on a standard PC.", "title": "" } ]
[ { "docid": "neg:1840027_0", "text": "Traumatic coma was produced in 45 monkeys by accelerating the head without impact in one of three directions. The duration of coma, degree of neurological impairment, and amount of diffuse axonal injury (DAI) in the brain were directly related to the amount of coronal head motion used. Coma of less than 15 minutes (concussion) occurred in 11 of 13 animals subjected to sagittal head motion, in 2 of 6 animals with oblique head motion, and in 2 of 26 animals with full lateral head motion. All 15 concussioned animals had good recovery, and none had DAI. Conversely, coma lasting more than 6 hours occurred in one of the sagittal or oblique injury groups but was present in 20 of the laterally injured animals, all of which were severely disabled afterward. All laterally injured animals had a degree of DAI similar to that found in severe human head injury. Coma lasting 16 minutes to 6 hours occurred in 2 of 13 of the sagittal group, 4 of 6 in the oblique group, and 4 of 26 in the lateral group, these animals had less neurological disability and less DAI than when coma lasted longer than 6 hours. These experimental findings duplicate the spectrum of traumatic coma seen in human beings and include axonal damage identical to that seen in sever head injury in humans. Since the amount of DAI was directly proportional to the severity of injury (duration of coma and quality of outcome), we conclude that axonal damage produced by coronal head acceleration is a major cause of prolonged traumatic coma and its sequelae.", "title": "" }, { "docid": "neg:1840027_1", "text": "A technique intended to increase the diversity order of bit-interleaved coded modulations (BICM) over non Gaussian channels is presented. It introduces simple modifications to the mapper and to the corresponding demapper. They consist of a constellation rotation coupled with signal space component interleaving. Iterative processing at the receiver side can provide additional improvement to the BICM performance. This method has been shown to perform well over fading channels with or without erasures. It has been adopted for the 4-, 16-, 64- and 256-QAM constellations considered in the DVB-T2 standard. Resulting gains can vary from 0.2 dB to several dBs depending on the order of the constellation, the coding rate and the channel model.", "title": "" }, { "docid": "neg:1840027_2", "text": "Research on Offline Handwritten Signature Verification explored a large variety of handcrafted feature extractors, ranging from graphology, texture descriptors to interest points. In spite of advancements in the last decades, performance of such systems is still far from optimal when we test the systems against skilled forgeries - signature forgeries that target a particular individual. In previous research, we proposed a formulation of the problem to learn features from data (signature images) in a Writer-Independent format, using Deep Convolutional Neural Networks (CNNs), seeking to improve performance on the task. In this research, we push further the performance of such method, exploring a range of architectures, and obtaining a large improvement in state-of-the-art performance on the GPDS dataset, the largest publicly available dataset on the task. In the GPDS-160 dataset, we obtained an Equal Error Rate of 2.74%, compared to 6.97% in the best result published in literature (that used a combination of multiple classifiers). 
We also present a visual analysis of the feature space learned by the model, and an analysis of the errors made by the classifier. Our analysis shows that the model is very effective in separating signatures that have a different global appearance, while being particularly vulnerable to forgeries that very closely resemble genuine signatures, even if their line quality is bad, which is the case of slowly-traced forgeries.", "title": "" }, { "docid": "neg:1840027_3", "text": "This paper presents a novel online video recommendation system called VideoReach, which alleviates users' efforts on finding the most relevant videos according to current viewings without a sufficient collection of user profiles as required in traditional recommenders. In this system, video recommendation is formulated as finding a list of relevant videos in terms of multimodal relevance (i.e. textual, visual, and aural relevance) and user click-through. Since different videos have different intra-weights of relevance within an individual modality and inter-weights among different modalities, we adopt relevance feedback to automatically find optimal weights by user click-though, as well as an attention fusion function to fuse multimodal relevance. We use 20 clips as the representative test videos, which are searched by top 10 queries from more than 13k online videos, and report superior performance compared with an existing video site.", "title": "" }, { "docid": "neg:1840027_4", "text": "25 years ago, Lenstra, Lenstra and Lovász presented their c el brated LLL lattice reduction algorithm. Among the various applicatio ns of the LLL algorithm is a method due to Coppersmith for finding small roots of polyn mial equations. We give a survey of the applications of this root finding metho d t the problem of inverting the RSA function and the factorization problem. A s we will see, most of the results are of a dual nature, they can either be interpret ed as cryptanalytic results or as hardness/security results.", "title": "" }, { "docid": "neg:1840027_5", "text": "Social Networking has become today’s lifestyle and anyone can easily receive information about everyone in the world. It is very useful if a personal identity can be obtained from the mobile device and also connected to social networking. Therefore, we proposed a face recognition system on mobile devices by combining cloud computing services. Our system is designed in the form of an application developed on Android mobile devices which utilized the Face.com API as an image data processor for cloud computing services. We also applied the Augmented Reality as an information viewer to the users. The result of testing shows that the system is able to recognize face samples with the average percentage of 85% with the total computation time for the face recognition system reached 7.45 seconds, and the average augmented reality translation time is 1.03 seconds to get someone’s information.", "title": "" }, { "docid": "neg:1840027_6", "text": "The combination of GPS/INS provides an ideal navigation system of full capability of continuously outputting position, velocity, and attitude of the host platform. However, the accuracy of INS degrades with time when GPS signals are blocked in environments such as tunnels, dense urban canyons and indoors. To dampen down the error growth, the INS sensor errors should be properly estimated and compensated before the inertial data are involved in the navigation computation. 
Therefore appropriate modelling of the INS sensor errors is a necessity. Allan Variance (AV) is a simple and efficient method for verifying and modelling these errors by representing the root mean square (RMS) random drift error as a function of averaging time. The AV can be used to determine the characteristics of different random processes. This paper applies the AV to analyse and model different types of random errors residing in the measurements of MEMS inertial sensors. The derived error model will be further applied to a low-cost GPS/MEMS-INS system once the correctness of the model is verified. The paper gives the detail of the AV analysis as well as presents the test results.", "title": "" }, { "docid": "neg:1840027_7", "text": "This article addresses the performance of distributed database systems. Specifically, we present an algorithm for dynamic replication of an object in distributed systems. The algorithm is adaptive in the sence that it changes the replication scheme of the object i.e., the set of processors at which the object inreplicated) as changes occur in the read-write patern of the object (i.e., the number of reads and writes issued by each processor). The algorithm continuously moves the replication scheme towards an optimal one. We show that the algorithm can be combined with the concurrency control and recovery mechanisms of ta distributed database management system. The performance of the algorithm is analyzed theoretically and experimentally. On the way we provide a lower bound on the performance of any dynamic replication algorith.", "title": "" }, { "docid": "neg:1840027_8", "text": "Finding community structures in online social networks is an important methodology for understanding the internal organization of users and actions. Most previous studies have focused on structural properties to detect communities. They do not analyze the information gathered from the posting activities of members of social networks, nor do they consider overlapping communities. To tackle these two drawbacks, a new overlapping community detection method involving social activities and semantic analysis is proposed. This work applies a fuzzy membership to detect overlapping communities with different extent and run semantic analysis to include information contained in posts. The available resource description format contributes to research in social networks. Based on this new understanding of social networks, this approach can be adopted for large online social networks and for social portals, such as forums, that are not based on network topology. The efficiency and feasibility of this method is verified by the available experimental analysis. The results obtained by the tests on real networks indicate that the proposed approach can be effective in discovering labelled and overlapping communities with a high amount of modularity. This approach is fast enough to process very large and dense social networks. 6", "title": "" }, { "docid": "neg:1840027_9", "text": "The NDN project investigates Jacobson's proposed evolution from today's host-centric network architecture (IP) to a data-centric network architecture (NDN). This conceptually simple shift has far-reaching implications in how we design, develop, deploy and use networks and applications. The NDN design and development has attracted significant attention from the networking community. 
To facilitate broader participation in addressing NDN research and development challenges, this tutorial will describe the vision of this new architecture and its basic components and operations.", "title": "" }, { "docid": "neg:1840027_10", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "neg:1840027_11", "text": "In this paper, tension propagation analysis of a newly designed multi-DOF robotic platform for single-port access surgery (SPS) is presented. The analysis is based on instantaneous kinematics of the proposed 6-DOF surgical instrument, and provides the decision criteria for estimating the payload of a surgical instrument according to its pose changes and the specifications of the driving-wire. The wire tension and the reduction ratio needed to manage such a payload can also be estimated quantitatively. The analysis begins with derivation of the power transmission efficiency through wire-interfaces from each instrument joint to an actuator. Based on the energy conservation law and the capstan equation, we modeled the degradation of power transmission efficiency due to 1) the reducer called the wire-reduction mechanism, 2) bending of proximal instrument joints, and 3) bending of the hyper-redundant guide tube. Based on the analysis, the tension of the driving-wires was computed for various manipulation poses and loading conditions. In our experiment, the newly designed surgical instrument successfully managed an external load of 1 kgf applied to the end effector of the surgical manipulator.", "title": "" }, { "docid": "neg:1840027_12", "text": "A tumor is a mass of tissue that's formed by an accumulation of abnormal cells. Normally, the cells in your body age, die, and are replaced by new cells. With cancer and other tumors, something disrupts this cycle. Tumor cells grow, even though the body does not need them, and unlike normal old cells, they don't die. As this process goes on, the tumor continues to grow as more and more cells are added to the mass. Image processing is an active research area in which medical image processing is a highly challenging field. Brain tumor analysis is done by doctors, but its grading can lead to different conclusions that vary from one doctor to another. This project provides a foundation of segmentation and edge detection as the first step towards brain tumor grading. Current segmentation approaches are reviewed with an emphasis placed on revealing the advantages and disadvantages of these methods for medical imaging applications. Different types of algorithms have been developed for brain tumor detection; compared with the other algorithms, fuzzy c-means performs particularly well. This process determines the patient's stage, that is, whether the condition can be treated with medicine or not. We also study the difficulty of detecting mild traumatic brain injury (mTBI): the current tools are qualitative, which can lead to poor diagnosis and treatment. To overcome these difficulties, an algorithm is proposed that takes advantage of subject information and texture information from MR images.
A contextual model is developed to simulate the progression of the disease using multiple inputs, such as the time post injury and the location of injury. Textural features are used along with feature selection for a single MR modality.", "title": "" }, { "docid": "neg:1840027_13", "text": "The view that humans comprise only two types of beings, women and men, a framework that is sometimes referred to as the \"gender binary,\" played a profound role in shaping the history of psychological science. In recent years, serious challenges to the gender binary have arisen from both academic research and social activism. This review describes 5 sets of empirical findings, spanning multiple disciplines, that fundamentally undermine the gender binary. These sources of evidence include neuroscience findings that refute sexual dimorphism of the human brain; behavioral neuroendocrinology findings that challenge the notion of genetically fixed, nonoverlapping, sexually dimorphic hormonal systems; psychological findings that highlight the similarities between men and women; psychological research on transgender and nonbinary individuals' identities and experiences; and developmental research suggesting that the tendency to view gender/sex as a meaningful, binary category is culturally determined and malleable. Costs associated with reliance on the gender binary and recommendations for future research, as well as clinical practice, are outlined. (PsycINFO Database Record", "title": "" }, { "docid": "neg:1840027_14", "text": "This paper presents a radar sensor package specifically developed for wide-coverage sounding and imaging of polar ice sheets from a variety of aircraft. Our instruments address the need for a reliable remote sensing solution well-suited for extensive surveys at low and high altitudes and capable of making measurements with fine spatial and temporal resolution. The sensor package that we are presenting consists of four primary instruments and ancillary systems with all the associated antennas integrated into the aircraft to maintain aerodynamic performance. The instruments operate simultaneously over different frequency bands within the 160 MHz-18 GHz range. The sensor package has allowed us to sound the most challenging areas of the polar ice sheets, ice sheet margins, and outlet glaciers; to map near-surface internal layers with fine resolution; and to detect the snow-air and snow-ice interfaces of snow cover over sea ice to generate estimates of snow thickness. In this paper, we provide a succinct description of each radar and associated antenna structures and present sample results to document their performance. We also give a brief overview of our field measurement programs and demonstrate the unique capability of the sensor package to perform multifrequency coincidental measurements from a single airborne platform. Finally, we illustrate the relevance of using multispectral radar data as a tool to characterize the entire ice column and to reveal important subglacial features.", "title": "" }, { "docid": "neg:1840027_15", "text": "Single cell segmentation is critical and challenging in live cell imaging data analysis. Traditional image processing methods and tools require time-consuming and labor-intensive efforts of manually fine-tuning parameters. Slight variations of image setting may lead to poor segmentation results. Recent development of deep convolutional neural networks(CNN) provides a potentially efficient, general and robust method for segmentation. 
Most existing CNN-based methods treat segmentation as a pixel-wise classification problem. However, three unique problems of cell images adversely affect segmentation accuracy: lack of established training dataset, few pixels on cell boundaries, and ubiquitous blurry features. The problem becomes especially severe with densely packed cells, where a pixel-wise classification method tends to identify two neighboring cells with blurry shared boundary as one cell, leading to poor cell count accuracy and affecting subsequent analysis. Here we developed a different learning strategy that combines strengths of CNN and watershed algorithm. The method first trains a CNN to learn Euclidean distance transform of binary masks corresponding to the input images. Then another CNN is trained to detect individual cells in the Euclidean distance transform. In the third step, the watershed algorithm takes the outputs from the previous steps as inputs and performs the segmentation. We tested the combined method and various forms of the pixel-wise classification algorithm on segmenting fluorescence and transmitted light images. The new method achieves similar pixel accuracy but significant higher cell count accuracy than pixel-wise classification methods do, and the advantage is most obvious when applying on noisy images of densely packed cells.", "title": "" }, { "docid": "neg:1840027_16", "text": "This paper addresses the problem of establishing correspondences between two sets of visual features using higher order constraints instead of the unary or pairwise ones used in classical methods. Concretely, the corresponding hypergraph matching problem is formulated as the maximization of a multilinear objective function over all permutations of the features. This function is defined by a tensor representing the affinity between feature tuples. It is maximized using a generalization of spectral techniques where a relaxed problem is first solved by a multidimensional power method and the solution is then projected onto the closest assignment matrix. The proposed approach has been implemented, and it is compared to state-of-the-art algorithms on both synthetic and real data.", "title": "" }, { "docid": "neg:1840027_17", "text": "With the increasingly complex electromagnetic environment of communication, as well as the gradually increased radar signal types, how to effectively identify the types of radar signals at low SNR becomes a hot topic. A radar signal recognition algorithm based on entropy features, which describes the distribution characteristics for different types of radar signals by extracting Shannon entropy, Singular spectrum Shannon entropy and Singular spectrum index entropy features, was proposed to achieve the purpose of signal identification. Simulation results show that, the algorithm based on entropies has good anti-noise performance, and it can still describe the characteristics of signals well even at low SNR, which can achieve the purpose of identification and classification for different radar signals.", "title": "" }, { "docid": "neg:1840027_18", "text": "Hyunsook Yoon Dongguk University, Korea This paper reports on a qualitative study that investigated the changes in students’ writing process associated with corpus use over an extended period of time. The primary purpose of this study was to examine how corpus technology affects students’ development of competence as second language (L2) writers. 
The research was mainly based on case studies with six L2 writers in an English for Academic Purposes writing course. The findings revealed that corpus use not only had an immediate effect by helping the students solve immediate writing/language problems, but also promoted their perceptions of lexicogrammar and language awareness. Once the corpus approach was introduced to the writing process, the students assumed more responsibility for their writing and became more independent writers, and their confidence in writing increased. This study identified a wide variety of individual experiences and learning contexts that were involved in deciding the levels of the students’ willingness and success in using corpora. This paper also discusses the distinctive contributions of general corpora to English for Academic Purposes and the importance of lexical and grammatical aspects in L2 writing pedagogy.", "title": "" }, { "docid": "neg:1840027_19", "text": "How can web services that depend on user generated content discern fake social engagement activities by spammers from legitimate ones? In this paper, we focus on the social site of YouTube and the problem of identifying bad actors posting inorganic contents and inflating the count of social engagement metrics. We propose an effective method, Leas (Local Expansion at Scale), and show how the fake engagement activities on YouTube can be tracked over time by analyzing the temporal graph based on the engagement behavior pattern between users and YouTube videos. With the domain knowledge of spammer seeds, we formulate and tackle the problem in a semi-supervised manner — with the objective of searching for individuals that have similar pattern of behavior as the known seeds — based on a graph diffusion process via local spectral subspace. We offer a fast, scalable MapReduce deployment adapted from the localized spectral clustering algorithm. We demonstrate the effectiveness of our deployment at Google by achieving a manual review accuracy of 98% on YouTube Comments graph in practice. Comparing with the state-of-the-art algorithm CopyCatch, Leas achieves 10 times faster running time on average. Leas is now actively in use at Google, searching for daily deceptive practices on YouTube’s engagement graph spanning over a", "title": "" } ]
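Among the passages above, the one on low-cost GPS/MEMS-INS integration characterizes inertial sensor errors with the Allan variance, the RMS random drift expressed as a function of averaging time. Below is a minimal non-overlapping Allan deviation sketch; it assumes evenly sampled sensor output, and the choice of cluster sizes is left to the caller.

import numpy as np

def allan_deviation(samples, fs, cluster_sizes):
    # samples:       1-D array of gyro or accelerometer readings
    # fs:            sampling rate in Hz
    # cluster_sizes: iterable of averaging-cluster lengths m (tau = m / fs)
    taus, adevs = [], []
    for m in cluster_sizes:
        n_clusters = len(samples) // m
        if n_clusters < 2:
            continue                            # not enough data at this tau
        clusters = samples[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(clusters) ** 2)   # Allan variance at this tau
        taus.append(m / fs)
        adevs.append(np.sqrt(avar))
    return np.array(taus), np.array(adevs)

Plotting the returned deviations against tau on log-log axes gives the familiar curve whose slopes identify noise terms such as angle random walk and bias instability.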
1840028
Tell me a story--a conceptual exploration of storytelling in healthcare education.
[ { "docid": "pos:1840028_0", "text": "IT may appear blasphemous to paraphrase the title of the classic article of Vannevar Bush but it may be a mitigating factor that it is done to pay tribute to another legendary scientist, Eugene Garfield. His ideas of citationbased searching, resource discovery and quantitative evaluation of publications serve as the basis for many of the most innovative and powerful online information services these days. Bush 60 years ago contemplated – among many other things – an information workstation, the Memex. A researcher would use it to annotate, organize, link, store, and retrieve microfilmed documents. He is acknowledged today as the forefather of the hypertext system, which in turn, is the backbone of the Internet. He outlined his thoughts in an essay published in the Atlantic Monthly. Maybe because of using a nonscientific outlet the paper was hardly quoted and cited in scholarly and professional journals for 30 years. Understandably, the Atlantic Monthly was not covered by the few, specialized abstracting and indexing databases of scientific literature. Such general interest magazines are not source journals in either the Web of Science (WoS), or Scopus databases. However, records for items which cite the ‘As We May Think’ article of Bush (also known as the ‘Memex’ paper) are listed with appropriate bibliographic information. Google Scholar (G-S) lists the records for the Memex paper and many of its citing papers. It is a rather confusing list with many dead links or otherwise dysfunctional links, and a hodge-podge of information related to Bush. It is quite telling that (based on data from the 1945– 2005 edition of WoS) the article of Bush gathered almost 90% of all its 712 citations in WoS between 1975 and 2005, peaking in 1999 with 45 citations in that year alone. Undoubtedly, this proportion is likely to be distorted because far fewer source articles from far fewer journals were processed by the Institute for Scientific Information for 1945–1974 than for 1975–2005. Scopus identifies 267 papers citing the Bush article. The main reason for the discrepancy is that Scopus includes cited references only from 1995 onward, while WoS does so from 1945. Bush’s impatience with the limitations imposed by the traditional classification and indexing tools and practices of the time is palpable. It is worth to quote it as a reminder. Interestingly, he brings up the terms ‘web of trails’ and ‘association of thoughts’ which establishes the link between him and Garfield.", "title": "" } ]
[ { "docid": "neg:1840028_0", "text": "Depth cameras are low-cost, plug & play solution to generate point cloud. 3D depth camera yields depth images which do not convey the actual distance. A 3D camera driver does not support raw depth data output, these are usually filtered and calibrated as per the sensor specifications and hence a method is required to map every pixel back to its original point in 3D space. This paper demonstrates the method to triangulate a pixel from the 2D depth image back to its actual position in 3D space. Further this method illustrates the independence of this mapping operation, which facilitates parallel computing. Triangulation method and ratios between the pixel positions and camera parameters are used to estimate the true position in 3D space. The algorithm performance can be increased by 70% by the usage of TPL libraries. This performance differs from processor to processor", "title": "" }, { "docid": "neg:1840028_1", "text": "Creative individuals increasingly rely on online crowdfunding platforms to crowdsource funding for new ventures. For novice crowdfunding project creators, however, there are few resources to turn to for assistance in the planning of crowdfunding projects. We are building a tool for novice project creators to get feedback on their project designs. One component of this tool is a comparison to existing projects. As such, we have applied a variety of machine learning classifiers to learn the concept of a successful online crowdfunding project at the time of project launch. Currently our classifier can predict with roughly 68% accuracy, whether a project will be successful or not. The classification results will eventually power a prediction segment of the proposed feedback tool. Future work involves turning the results of the machine learning algorithms into human-readable content and integrating this content into the feedback tool.", "title": "" }, { "docid": "neg:1840028_2", "text": "This paper focuses on localization that serves as a smart service. Among the primary services provided by Internet of Things (IoT), localization offers automatically discoverable services. Knowledge relating to an object's position, especially when combined with other information collected from sensors and shared with other smart objects, allows us to develop intelligent systems to fast respond to changes in an environment. Today, wireless sensor networks (WSNs) have become a critical technology for various kinds of smart environments through which different kinds of devices can connect with each other coinciding with the principles of IoT. Among various WSN techniques designed for positioning an unknown node, the trilateration approach based on the received signal strength is the most suitable for localization due to its implementation simplicity and low hardware requirement. However, its performance is susceptible to external factors, such as the number of people present in a room, the shape and dimension of an environment, and the positions of objects and devices. To improve the localization accuracy of trilateration, we develop a novel distributed localization algorithm with a dynamic-circle-expanding mechanism capable of more accurately establishing the geometric relationship between an unknown node and reference nodes. The results of real world experiments and computer simulation show that the average error of position estimation is 0.67 and 0.225 m in the best cases, respectively. 
This suggests that the proposed localization algorithm outperforms other existing methods.", "title": "" }, { "docid": "neg:1840028_3", "text": "If two hospitals are providing identical services in all respects, except for the brand name, why are customers willing to pay more for one hospital than the other? That is, the brand name is not just a name, but a name that contains value (brand equity). Brand equity is the value that the brand name endows to the product, such that consumers are willing to pay a premium price for products with the particular brand name. Accordingly, a company needs to manage its brand carefully so that its brand equity does not depreciate. Although measuring brand equity is important, managers have no brand equity index that is psychometrically robust and parsimonious enough for practice. Indeed, index construction is quite different from conventional scale development. Moreover, researchers might still be unaware of the potential appropriateness of formative indicators for operationalizing particular constructs. Toward this end, drawing on the brand equity literature and following the index construction procedure, this study creates a brand equity index for a hospital. The results reveal a parsimonious five-indicator brand equity index that can adequately capture the full domain of brand equity. This study also illustrates the differences between index construction and scale development.", "title": "" }, { "docid": "neg:1840028_4", "text": "Mild cognitive impairment (MCI) is the prodromal stage of Alzheimer's disease (AD). Identifying MCI subjects who are at high risk of converting to AD is crucial for effective treatments. In this study, a deep learning approach based on convolutional neural networks (CNN), is designed to accurately predict MCI-to-AD conversion with magnetic resonance imaging (MRI) data. First, MRI images are prepared with age-correction and other processing. Second, local patches, which are assembled into 2.5 dimensions, are extracted from these images. Then, the patches from AD and normal controls (NC) are used to train a CNN to identify deep learning features of MCI subjects. After that, structural brain image features are mined with FreeSurfer to assist CNN. Finally, both types of features are fed into an extreme learning machine classifier to predict the AD conversion. The proposed approach is validated on the standardized MRI datasets from the Alzheimer's Disease Neuroimaging Initiative (ADNI) project. This approach achieves an accuracy of 79.9% and an area under the receiver operating characteristic curve (AUC) of 86.1% in leave-one-out cross validations. Compared with other state-of-the-art methods, the proposed one outperforms others with higher accuracy and AUC, while keeping a good balance between the sensitivity and specificity. Results demonstrate great potentials of the proposed CNN-based approach for the prediction of MCI-to-AD conversion with solely MRI data. Age correction and assisted structural brain image features can boost the prediction performance of CNN.", "title": "" }, { "docid": "neg:1840028_5", "text": "In this paper, we consider a deterministic global optimization algorithm for solving a general linear sum of ratios (LFP). First, an equivalent optimization problem (LFP1) of LFP is derived by exploiting the characteristics of the constraints of LFP. 
By a new linearizing method the linearization relaxation function of the objective function of LFP1 is derived, then the linear relaxation programming (RLP) of LFP1 is constructed and the proposed branch and bound algorithm is convergent to the global minimum through the successive refinement of the linear relaxation of the feasible region of the objection function and the solutions of a series of RLP. And finally the numerical experiments are given to illustrate the feasibility of the proposed algorithm. 2006 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "neg:1840028_6", "text": "Context-aware recommendation algorithms focus on refining recommendations by considering additional information, available to the system. This topic has gained a lot of attention recently. Among others, several factorization methods were proposed to solve the problem, although most of them assume explicit feedback which strongly limits their real-world applicability. While these algorithms apply various loss functions and optimization strategies, the preference modeling under context is less explored due to the lack of tools allowing for easy experimentation with various models. As context dimensions are introduced beyond users and items, the space of possible preference models and the importance of proper modeling largely increases. In this paper we propose a general factorization framework (GFF), a single flexible algorithm that takes the preference model as an input and computes latent feature matrices for the input dimensions. GFF allows us to easily experiment with various linear models on any context-aware recommendation task, be it explicit or implicit feedback based. The scaling properties makes it usable under real life circumstances as well. We demonstrate the framework’s potential by exploring various preference models on a 4-dimensional context-aware problem with contexts that are available for almost any real life datasets. We show in our experiments—performed on five real life, implicit feedback datasets—that proper preference modelling significantly increases recommendation accuracy, and previously unused models outperform the traditional ones. Novel models in GFF also outperform state-of-the-art factorization algorithms. We also extend the method to be fully compliant to the Multidimensional Dataspace Model, one of the most extensive data models of context-enriched data. Extended GFF allows the seamless incorporation of information into the factorization framework beyond context, like item metadata, social networks, session information, etc. Preliminary experiments show great potential of this capability.", "title": "" }, { "docid": "neg:1840028_7", "text": "Traditional information systems return answers after a user submits a complete query. Users often feel “left in the dark” when they have limited knowledge about the underlying data and have to use a try-and-see approach for finding information. A recent trend of supporting autocomplete in these systems is a first step toward solving this problem. In this paper, we study a new information-access paradigm, called “type-ahead search” in which the system searches the underlying data “on the fly” as the user types in query keywords. It extends autocomplete interfaces by allowing keywords to appear at different places in the underlying data. This framework allows users to explore data as they type, even in the presence of minor errors. We study research challenges in this framework for large amounts of data. 
Since each keystroke of the user could invoke a query on the backend, we need efficient algorithms to process each query within milliseconds. We develop various incremental-search algorithms for both single-keyword queries and multi-keyword queries, using previously computed and cached results in order to achieve a high interactive speed. We develop novel techniques to support fuzzy search by allowing mismatches between query keywords and answers. We have deployed several real prototypes using these techniques. One of them has been deployed to support type-ahead search on the UC Irvine people directory, which has been used regularly and well received by users due to its friendly interface and high efficiency.", "title": "" }, { "docid": "neg:1840028_8", "text": "Safety planning in the construction industry is generally done separately from the project execution planning. This separation creates difficulties for safety engineers to analyze what, when, why and where safety measures are needed for preventing accidents. Lack of information and integration of available data (safety plan, project schedule, 2D project drawings) during the planning stage often results in scheduling work activities with overlapping space needs that then can create hazardous conditions, for example, work above other crew. These space requirements are time dependent and often neglected due to the manual effort that is required to handle the data. Representation of project-specific activity space requirements in 4D models hardly happen along with schedule and work break-down structure. Even with full cooperation of all related stakeholders, current safety planning and execution still largely depends on manual observation and past experiences. The traditional manual observation is inefficient, error-prone, and the observed result can be easily effected by subjective judgments. This paper will demonstrate the development of an automated safety code checking tool for Building Information Modeling (BIM), work breakdown structure, and project schedules in conjunction with safety criteria to reduce the potential for accidents on construction projects. The automated safety compliance rule checker code builds on existing applications for building code compliance checking, structural analysis, and constructability analysis etc. and also the advances in 4D simulations for scheduling. Preliminary results demonstrate a computer-based automated tool can assist in safety planning and execution of projects on a day to day basis.", "title": "" }, { "docid": "neg:1840028_9", "text": "We define a new distance measure the resistor-average distance between two probability distributions that is closely related to the Kullback-Leibler distance. While the KullbackLeibler distance is asymmetric in the two distributions, the resistor-average distance is not. It arises from geometric considerations similar to those used to derive the Chernoff distance. Determining its relation to well-known distance measures reveals a new way to depict how commonly used distance measures relate to each other.", "title": "" }, { "docid": "neg:1840028_10", "text": "In this paper, we propose series of algorithms for detecting change points in time-series data based on subspace identification, meaning a geometric approach for estimating linear state-space models behind time-series data. 
Our algorithms are derived from the principle that the subspace spanned by the columns of an observability matrix and the one spanned by the subsequences of time-series data are approximately equivalent. In this paper, we derive a batch-type algorithm applicable to ordinary time-series data, i.e. consisting of only output series, and then introduce the online version of the algorithm and the extension to be available with input-output time-series data. We illustrate the effectiveness of our algorithms with comparative experiments using some artificial and real datasets.", "title": "" }, { "docid": "neg:1840028_11", "text": "Cloud computing is a term coined to a network that offers incredible processing power, a wide array of storage space and unbelievable speed of computation. Social media channels, corporate structures and individual consumers are all switching to the magnificent world of cloud computing. The flip side to this coin is that with cloud storage emerges the security issues of confidentiality, data integrity and data availability. Since the “cloud” is a mere collection of tangible super computers spread across the world, authentication and authorization for data access is more than a necessity. Our work attempts to overcome these security threats. The proposed methodology suggests the encryption of the files to be uploaded on the cloud. The integrity and confidentiality of the data uploaded by the user is ensured doubly by not only encrypting it but also providing access to the data only on successful authentication. KeywordsCloud computing, security, encryption, password based AES algorithm", "title": "" }, { "docid": "neg:1840028_12", "text": "Stretching has long been used in many physical activities to increase range of motion (ROM) around a joint. Stretching also has other acute effects on the neuromuscular system. For instance, significant reductions in maximal voluntary strength, muscle power or evoked contractile properties have been recorded immediately after a single bout of static stretching, raising interest in other stretching modalities. Thus, the effects of dynamic stretching on subsequent muscular performance have been questioned. This review aimed to investigate performance and physiological alterations following dynamic stretching. There is a substantial amount of evidence pointing out the positive effects on ROM and subsequent performance (force, power, sprint and jump). The larger ROM would be mainly attributable to reduced stiffness of the muscle-tendon unit, while the improved muscular performance to temperature and potentiation-related mechanisms caused by the voluntary contraction associated with dynamic stretching. Therefore, if the goal of a warm-up is to increase joint ROM and to enhance muscle force and/or power, dynamic stretching seems to be a suitable alternative to static stretching. Nevertheless, numerous studies reporting no alteration or even performance impairment have highlighted possible mitigating factors (such as stretch duration, amplitude or velocity). Accordingly, ballistic stretching, a form of dynamic stretching with greater velocities, would be less beneficial than controlled dynamic stretching. Notwithstanding, the literature shows that inconsistent description of stretch procedures has been an important deterrent to reaching a clear consensus. 
In this review, we highlight the need for future studies reporting homogeneous, clearly described stretching protocols, and propose a clarified stretching terminology and methodology.", "title": "" }, { "docid": "neg:1840028_13", "text": "With the exponential growth of information being transmitted as a result of various networks, the issues related to providing security to transmit information have considerably increased. Mathematical models were proposed to consolidate the data being transmitted and to protect the same from being tampered with. Work was carried out on the application of 1D and 2D cellular automata (CA) rules for data encryption and decryption in cryptography. A lot more work needs to be done to develop suitable algorithms and 3D CA rules for encryption and description of 3D chaotic information systems. Suitable coding for the algorithms are developed and the results are evaluated for the performance of the algorithms. Here 3D cellular automata encryption and decryption algorithms are used to provide security of data by arranging plain texts and images into layers of cellular automata by using the cellular automata neighbourhood system. This has resulted in highest order of security for transmitted data.", "title": "" }, { "docid": "neg:1840028_14", "text": "Research in Artificial Intelligence is breaking technology barriers every day. New algorithms and high performance computing are making things possible which we could only have imagined earlier. Though the enhancements in AI are making life easier for human beings day by day, there is constant fear that AI based systems will pose a threat to humanity. People in AI community have diverse set of opinions regarding the pros and cons of AI mimicking human behavior. Instead of worrying about AI advancements, we propose a novel idea of cognitive agents, including both human and machines, living together in a complex adaptive ecosystem, collaborating on human computation for producing essential social goods while promoting sustenance, survival and evolution of the agents’ life cycle. We highlight several research challenges and technology barriers in achieving this goal. We propose a governance mechanism around this ecosystem to ensure ethical behaviors of all cognitive agents. Along with a novel set of use-cases of Cogniculture , we discuss the road map ahead", "title": "" }, { "docid": "neg:1840028_15", "text": "Twitter is now used to distribute substantive content such as breaking news, increasing the importance of assessing the credibility of tweets. As users increasingly access tweets through search, they have less information on which to base credibility judgments as compared to consuming content from direct social network connections. We present survey results regarding users' perceptions of tweet credibility. We find a disparity between features users consider relevant to credibility assessment and those currently revealed by search engines. We then conducted two experiments in which we systematically manipulated several features of tweets to assess their impact on credibility ratings. We show that users are poor judges of truthfulness based on content alone, and instead are influenced by heuristics such as user name when making credibility assessments. Based on these findings, we discuss strategies tweet authors can use to enhance their credibility with readers (and strategies astute readers should be aware of!). 
We propose design improvements for displaying social search results so as to better convey credibility.", "title": "" }, { "docid": "neg:1840028_16", "text": "This paper aims to classify and analyze recent as well as classic image registration techniques. Image registration is the process of superimposing images of the same scene taken at different times, location and by different sensors. It is a key enabling technology in medical image analysis for integrating and analyzing information from various modalities. Basically image registration finds temporal correspondences between the set of images and uses transformation model to infer features from these correspondences. The approaches for image registration can be classified according to their nature viz. area-based and feature-based and dimensionality viz. spatial domain and frequency domain. The procedure of image registration by intensity based model, spatial domain transform, Rigid transform and Non rigid transform based on the above mentioned classification has been performed and the eminence of image is measured by the three quality parameters such as SNR, PSNR and MSE. The techniques have been implemented and inferred that the non-rigid transform exhibit higher perceptual quality and offer visually sharper image than other techniques. Problematic issues of image registration techniques and outlook for the future research are discussed. This work may be one of the comprehensive reference sources for the researchers involved in image registration.", "title": "" }, { "docid": "neg:1840028_17", "text": "Many important data analysis applications present with severely imbalanced datasets with respect to the target variable. A typical example is medical image analysis, where positive samples are scarce, while performance is commonly estimated against the correct detection of these positive examples. We approach this challenge by formulating the problem as anomaly detection with generative models. We train a generative model without supervision on the ‘negative’ (common) datapoints and use this model to estimate the likelihood of unseen data. A successful model allows us to detect the ‘positive’ case as low likelihood datapoints. In this position paper, we present the use of state-of-the-art deep generative models (GAN and VAE) for the estimation of a likelihood of the data. Our results show that on the one hand both GANs and VAEs are able to separate the ‘positive’ and ‘negative’ samples in the MNIST case. On the other hand, for the NLST case, neither GANs nor VAEs were able to capture the complexity of the data and discriminate anomalies at the level that this task requires. These results show that even though there are a number of successes presented in the literature for using generative models in similar applications, there remain further challenges for broad successful implementation.", "title": "" }, { "docid": "neg:1840028_18", "text": "News agencies and other news providers or consumers are confronted with the task of extracting events from news articles. This is done i) either to monitor and, hence, to be informed about events of specific kinds over time and/or ii) to react to events immediately. In the past, several promising approaches to extracting events from text have been proposed. Besides purely statistically-based approaches there are methods to represent events in a semantically-structured form, such as graphs containing actions (predicates), participants (entities), etc. 
However, it turns out to be very difficult to automatically determine whether an event is real or not. In this paper, we give an overview of approaches which proposed solutions for this research problem. We show that there is no gold standard dataset where real events are annotated in text documents in a fine-grained, semantically-enriched way. We present a methodology of creating such a dataset with the help of crowdsourcing and present preliminary results.", "title": "" }, { "docid": "neg:1840028_19", "text": "In this paper we tackle the inversion of large-scale dense matrices via conventional matrix factorizations (LU, Cholesky, LDL^T) and the Gauss-Jordan method on hybrid platforms consisting of a multi-core CPU and a many-core graphics processor (GPU). Specifically, we introduce the different matrix inversion algorithms using a unified framework based on the notation from the FLAME project; we develop hybrid implementations for those matrix operations underlying the algorithms, alternative to those in existing libraries for single-GPU systems; and we perform an extensive experimental study on a platform equipped with state-of-the-art general-purpose architectures from Intel and a “Fermi” GPU from NVIDIA that exposes the efficiency of the different inversion approaches. Our study and experimental results show the simplicity and performance advantage of the GJE-based inversion methods, and the difficulties associated with the symmetric indefinite case.", "title": "" } ]
1840029
Measurement Issues in Galvanic Intrabody Communication: Influence of Experimental Setup
[ { "docid": "pos:1840029_0", "text": "Modeling of intrabody communication (IBC) entails the understanding of the interaction between electromagnetic fields and living tissues. At the same time, an accurate model can provide practical hints toward the deployment of an efficient and secure communication channel for body sensor networks. In the literature, two main IBC coupling techniques have been proposed: galvanic and capacitive coupling. Nevertheless, models that are able to emulate both coupling approaches have not been reported so far. In this paper, a simple model based on a distributed parameter structure with the flexibility to adapt to both galvanic and capacitive coupling has been proposed. In addition, experimental results for both coupling methods were acquired by means of two harmonized measurement setups. The model simulations have been subsequently compared with the experimental data, not only to show their validity but also to revise the practical frequency operation range for both techniques. Finally, the model, along with the experimental results, has also allowed us to provide some practical rules to optimally tackle IBC design.", "title": "" }, { "docid": "pos:1840029_1", "text": "The signal transmission mechanism on the surface of the human body is studied for the application to body channel communication (BCC). From Maxwell's equations, the complete equation of electrical field on the human body is developed to obtain a general BCC model. The mechanism of BCC consists of three parts according to the operating frequencies and channel distances: the quasi-static near-field coupling part, the reactive induction-field radiation part, and the surface wave far-field propagation part. The general BCC model by means of the near-field and far-field approximation is developed to be valid in the frequency range from 100 kHz to 100 MHz and distance up to 1.3 m based on the measurements of the body channel characteristics. Finally, path loss characteristics of BCC are formulated for the design of BCC systems and many potential applications.", "title": "" }, { "docid": "pos:1840029_2", "text": "With the growing number of wearable devices and applications, there is an increasing need for a flexible body channel communication (BCC) system that supports both scalable data rate and low power operation. In this paper, a highly flexible frequency-selective digital transmission (FSDT) transmitter that supports both data scalability and low power operation with the aid of two novel implementation methods is presented. In an FSDT system, data rate is limited by the number of Walsh spreading codes available for use in the optimal body channel band of 40-80 MHz. The first method overcomes this limitation by applying multi-level baseband coding scheme to a carrierless FSDT system to enhance the bandwidth efficiency and to support a data rate of 60 Mb/s within a 40-MHz bandwidth. The proposed multi-level coded FSDT system achieves six times higher data rate as compared to other BCC systems. The second novel implementation method lies in the use of harmonic frequencies of a Walsh encoded FSDT system that allows the BCC system to operate in the optimal channel bandwidth between 40-80 MHz with half the clock frequency. Halving the clock frequency results in a power consumption reduction of 32%. The transmitter was fabricated in a 65-nm CMOS process. It occupies a core area of 0.24 × 0.3 mm 2. 
When operating under a 60-Mb/s data-rate mode, the transmitter consumes 1.85 mW and it consumes only 1.26 mW when operating under a 5-Mb/s data-rate mode.", "title": "" } ]
[ { "docid": "neg:1840029_0", "text": "Estimating the traversability of rough terrain is a critical task for an outdoor mobile robot. While classifying structured environment can be learned from large number of training data, it is an extremely difficult task to learn and estimate traversability of unstructured rough terrain. Moreover, in many cases information from a single sensor may not be sufficient for estimating traversability reliably in the absence of artificial landmarks such as lane markings or curbs. Our approach estimates traversability of the terrain and build a 2D probabilistic grid map online using 3D-LIDAR and camera. The combination of LIDAR and camera is favoured in many robotic application because they provide complementary information. Our approach assumes the data captured by these two sensors are independent and build separate traversability maps, each with information captured from one sensor. Traversability estimation with vision sensor autonomously collects training data and update classifier without human intervention as the vehicle traverse the terrain. Traversability estimation with 3D-LIDAR measures the slopes of the ground to predict the traversability. Two independently built probabilistic maps are fused using Bayes' rule to improve the detection performance. This is in contrast with other methods in which each sensor performs different tasks. We have implemented the algorithm on a UGV(Unmanned Ground Vehicle) and tested our approach on a rough terrain to evaluate the detection performance.", "title": "" }, { "docid": "neg:1840029_1", "text": "In vivo fluorescence imaging suffers from suboptimal signal-to-noise ratio and shallow detection depth, which is caused by the strong tissue autofluorescence under constant external excitation and the scattering and absorption of short-wavelength light in tissues. Here we address these limitations by using a novel type of optical nanoprobes, photostimulable LiGa5O8:Cr(3+) near-infrared (NIR) persistent luminescence nanoparticles, which, with very-long-lasting NIR persistent luminescence and unique photo-stimulated persistent luminescence (PSPL) capability, allow optical imaging to be performed in an excitation-free and hence, autofluorescence-free manner. LiGa5O8:Cr(3+) nanoparticles pre-charged by ultraviolet light can be repeatedly (>20 times) stimulated in vivo, even in deep tissues, by short-illumination (~15 seconds) with a white light-emitting-diode flashlight, giving rise to multiple NIR PSPL that expands the tracking window from several hours to more than 10 days. Our studies reveal promising potential of these nanoprobes in cell tracking and tumor targeting, exhibiting exceptional sensitivity and penetration that far exceed those afforded by conventional fluorescence imaging.", "title": "" }, { "docid": "neg:1840029_2", "text": "This paper describes the SimBow system submitted at SemEval2017-Task3, for the question-question similarity subtask B. The proposed approach is a supervised combination of different unsupervised textual similarities. These textual similarities rely on the introduction of a relation matrix in the classical cosine similarity between bag-of-words, so as to get a softcosine that takes into account relations between words. According to the type of relation matrix embedded in the soft-cosine, semantic or lexical relations can be considered. 
Our system ranked first among the official submissions of subtask B.", "title": "" }, { "docid": "neg:1840029_3", "text": "By analyzing the relationship of S-parameter between two-port differential and four-port single-ended networks, a method is found for measuring the S-parameter of a differential amplifier on wafer by using a normal two-port vector network analyzer. With this method, it should not especially purchase a four-port vector network analyzer. Furthermore, the method was also suitable for measuring S-parameter of any multi-port circuit by using two-ports measurement set.", "title": "" }, { "docid": "neg:1840029_4", "text": "The transition from user requirements to UML diagrams is a difficult task for the designer espec ially when he handles large texts expressing these needs. Modelin g class Diagram must be performed frequently, even during t he development of a simple application. This paper prop oses an approach to facilitate class diagram extraction from textual requirements using NLP techniques and domain ontolog y. Keywords-component; Class Diagram, Natural Language Processing, GATE, Domain ontology, requirements.", "title": "" }, { "docid": "neg:1840029_5", "text": "This paper gives an overview on different research activities on electronically steerable antennas at Ka-band within the framework of the SANTANA project. In addition, it gives an outlook on future objectives, namely the perspective of testing SANTANA technologies with the projected German research satellite “Heinrich Hertz”.", "title": "" }, { "docid": "neg:1840029_6", "text": "The fast-growing nature of instant messaging applications usage on Android mobile devices brought about a proportional increase on the number of cyber-attack vectors that could be perpetrated on them. Android mobile phones store significant amount of information in the various memory partitions when Instant Messaging (IM) applications (WhatsApp, Skype, and Facebook) are executed on them. As a result of the enormous crimes committed using instant messaging applications, and the amount of electronic based traces of evidence that can be retrieved from the suspect’s device where an investigation could convict or refute a person in the court of law and as such, mobile phones have become a vulnerable ground for digital evidence mining. This paper aims at using forensic tools to extract and analyse left artefacts digital evidence from IM applications on Android phones using android studio as the virtual machine. Digital forensic investigation methodology by Bill Nelson was applied during this research. Some of the key results obtained showed how digital forensic evidence such as call logs, contacts numbers, sent/retrieved messages, and images can be mined from simulated android phones when running these applications. These artefacts can be used in the court of law as evidence during cybercrime investigation.", "title": "" }, { "docid": "neg:1840029_7", "text": "\"Of what a strange nature is knowledge! It clings to the mind, when it has once seized on it, like a lichen on the rock,\" Abstract We describe a theoretical system intended to facilitate the use of knowledge In an understand­ ing system. The notion of script is introduced to account for knowledge about mundane situations. A program, SAM, is capable of using scripts to under­ stand. The notion of plans is introduced to ac­ count for general knowledge about novel situa­ tions. I. 
Preface In an attempt to provide theory where there have been mostly unrelated systems, Minsky (1974) recently described the as fitting into the notion of \"frames.\" Minsky at­ tempted to relate this work, in what is essentially language processing, to areas of vision research that conform to the same notion. Mlnsky's frames paper has created quite a stir in AI and some immediate spinoff research along the lines of developing frames manipulators (e.g. Bobrow, 1975; Winograd, 1975). We find that we agree with much of what Minsky said about frames and with his characterization of our own work. The frames idea is so general, however, that It does not lend itself to applications without further specialization. This paper is an attempt to devel­ op further the lines of thought set out in Schank (1975a) and Abelson (1973; 1975a). The ideas pre­ sented here can be viewed as a specialization of the frame idea. We shall refer to our central constructs as \"scripts.\" II. The Problem Researchers in natural language understanding have felt for some time that the eventual limit on the solution of our problem will be our ability to characterize world knowledge. Various researchers have approached world knowledge in various ways. Winograd (1972) dealt with the problem by severely restricting the world. This approach had the po­ sitive effect of producing a working system and the negative effect of producing one that was only minimally extendable. Charniak (1972) approached the problem from the other end entirely and has made some interesting first steps, but because his work is not grounded in any representational sys­ tem or any working computational system the res­ triction of world knowledge need not critically concern him. Our feeling is that an effective characteri­ zation of knowledge can result in a real under­ standing system in the not too distant future. We expect that programs based on the theory we out­ …", "title": "" }, { "docid": "neg:1840029_8", "text": "The uses of the World Wide Web on the Internet for commerce and information access continue to expand. The e-commerce business has proven to be a promising channel of choice for consumers as it is gradually transforming into a mainstream business activity. However, lack of trust has been identified as a major obstacle to the adoption of online shopping. Empirical study of online trust is constrained by the shortage of high-quality measures of general trust in the e-commence contexts. Based on theoretical or empirical studies in the literature of marketing or information system, nine factors have sound theoretical sense and support from the literature. A survey method was used for data collection in this study. A total of 172 usable questionnaires were collected from respondents. This study presents a new set of instruments for use in studying online trust of an individual. The items in the instrument were analyzed using a factors analysis. The results demonstrated reliable reliability and validity in the instrument.This study identified seven factors has a significant impact on online trust. The seven dominant factors are reputation, third-party assurance, customer service, propensity to trust, website quality, system assurance and brand. As consumers consider that doing business with online vendors involves risk and uncertainty, online business organizations need to overcome these barriers. 
Further, implication of the finding also provides e-commerce practitioners with guideline for effectively engender online customer trust.", "title": "" }, { "docid": "neg:1840029_9", "text": "Noninvasive body contouring has become one of the fastest-growing areas of esthetic medicine. Many patients appear to prefer nonsurgical less-invasive procedures owing to the benefits of fewer side effects and shorter recovery times. Increasingly, 635-nm low-level laser therapy (LLLT) has been used in the treatment of a variety of medical conditions and has been shown to improve wound healing, reduce edema, and relieve acute pain. Within the past decade, LLLT has also emerged as a new modality for noninvasive body contouring. Research has shown that LLLT is effective in reducing overall body circumference measurements of specifically treated regions, including the hips, waist, thighs, and upper arms, with recent studies demonstrating the long-term effectiveness of results. The treatment is painless, and there appears to be no adverse events associated with LLLT. The mechanism of action of LLLT in body contouring is believed to stem from photoactivation of cytochrome c oxidase within hypertrophic adipocytes, which, in turn, affects intracellular secondary cascades, resulting in the formation of transitory pores within the adipocytes' membrane. The secondary cascades involved may include, but are not limited to, activation of cytosolic lipase and nitric oxide. Newly formed pores release intracellular lipids, which are further metabolized. Future studies need to fully outline the cellular and systemic effects of LLLT as well as determine optimal treatment protocols.", "title": "" }, { "docid": "neg:1840029_10", "text": "The objective of the present study is to evaluate the acute effects of low-level laser therapy (LLLT) on functional capacity, perceived exertion, and blood lactate in hospitalized patients with heart failure (HF). Patients diagnosed with systolic HF (left ventricular ejection fraction <45 %) were randomized and allocated prospectively into two groups: placebo LLLT group (n = 10)—subjects who were submitted to placebo laser and active LLLT group (n = 10)—subjects who were submitted to active laser. The 6-min walk test (6MWT) was performed, and blood lactate was determined at rest (before LLLT application and 6MWT), immediately after the exercise test (time 0) and recovery (3, 6, and 30 min). A multi-diode LLLT cluster probe (DMC, São Carlos, Brazil) was used. Both groups increased 6MWT distance after active or placebo LLLT application compared to baseline values (p = 0.03 and p = 0.01, respectively); however, no difference was observed during intergroup comparison. The active LLLT group showed a significant reduction in the perceived exertion Borg (PEB) scale compared to the placebo LLLT group (p = 0.006). In addition, the group that received active LLLT showed no statistically significant difference for the blood lactate level through the times analyzed. The placebo LLLT group demonstrated a significant increase in blood lactate between the rest and recovery phase (p < 0.05). 
Acute effects of LLLT irradiation on skeletal musculature were not able to improve the functional capacity of hospitalized patients with HF, although it may favorably modulate blood lactate metabolism and reduce perceived muscle fatigue.", "title": "" }, { "docid": "neg:1840029_11", "text": "Recurrent Neural Network Language Models (RNN-LMs) have recently shown exceptional performance across a variety of applications. In this paper, we modify the architecture to perform Language Understanding, and advance the state-of-the-art for the widely used ATIS dataset. The core of our approach is to take words as input as in a standard RNN-LM, and then to predict slot labels rather than words on the output side. We present several variations that differ in the amount of word context that is used on the input side, and in the use of non-lexical features. Remarkably, our simplest model produces state-of-the-art results, and we advance state-of-the-art through the use of bagof-words, word embedding, named-entity, syntactic, and wordclass features. Analysis indicates that the superior performance is attributable to the task-specific word representations learned by the RNN.", "title": "" }, { "docid": "neg:1840029_12", "text": "The objective of this study is to examine how personal factors such as lifestyle, personality, and economic situations affect the consumer behavior of Malaysian university students. A quantitative approach was adopted and a self-administered questionnaire was distributed to collect data from university students. Findings illustrate that ‘personality’ influences the consumer behavior among Malaysian university student. This study also noted that the economic situation had a negative relationship with consumer behavior. Findings of this study improve our understanding of consumer behavior of Malaysian University Students. The findings of this study provide valuable insights in identifying and taking steps to improve on the services, ambience, and needs of the student segment of the Malaysian market.", "title": "" }, { "docid": "neg:1840029_13", "text": "The pomegranate, Punica granatum L., is an ancient, mystical, unique fruit borne on a small, long-living tree cultivated throughout the Mediterranean region, as far north as the Himalayas, in Southeast Asia, and in California and Arizona in the United States. In addition to its ancient historical uses, pomegranate is used in several systems of medicine for a variety of ailments. The synergistic action of the pomegranate constituents appears to be superior to that of single constituents. In the past decade, numerous studies on the antioxidant, anticarcinogenic, and anti-inflammatory properties of pomegranate constituents have been published, focusing on treatment and prevention of cancer, cardiovascular disease, diabetes, dental conditions, erectile dysfunction, bacterial infections and antibiotic resistance, and ultraviolet radiation-induced skin damage. Other potential applications include infant brain ischemia, male infertility, Alzheimer's disease, arthritis, and obesity.", "title": "" }, { "docid": "neg:1840029_14", "text": "{Portions reprinted, with permission from Keim et al. #2001 IEEE Abstract Simple presentation graphics are intuitive and easy-to-use, but show only highly aggregated data presenting only a very small number of data values (as in the case of bar charts) and may have a high degree of overlap occluding a significant portion of the data values (as in the case of the x-y plots). 
In this article, the authors therefore propose a generalization of traditional bar charts and x-y plots, which allows the visualization of large amounts of data. The basic idea is to use the pixels within the bars to present detailed information of the data records. The so-called pixel bar charts retain the intuitiveness of traditional bar charts while allowing very large data sets to be visualized in an effective way. It is shown that, for an effective pixel placement, a complex optimization problem has to be solved. The authors then present an algorithm which efficiently solves the problem. The application to a number of real-world ecommerce data sets shows the wide applicability and usefulness of this new idea, and a comparison to other well-known visualization techniques (parallel coordinates and spiral techniques) shows a number of clear advantages. Information Visualization (2002) 1, 20 – 34. DOI: 10.1057/palgrave/ivs/9500003", "title": "" }, { "docid": "neg:1840029_15", "text": "THE USE OF OBSERVATIONAL RESEARCH METHODS in the field of palliative care is vital to building the evidence base, identifying best practices, and understanding disparities in access to and delivery of palliative care services. As discussed in the introduction to this series, research in palliative care encompasses numerous areas in which the gold standard research design, the randomized controlled trial (RCT), is not appropriate, adequate, or even possible.1,2 The difficulties in conducting RCTs in palliative care include patient and family recruitment, gate-keeping by physicians, crossover contamination, high attrition rates, small sample sizes, and limited survival times. Furthermore, a number of important issues including variation in access to palliative care and disparities in the use and provision of palliative care simply cannot be answered without observational research methods. As research in palliative care broadens to encompass study designs other than the RCT, the collective understanding of the use, strengths, and limitations of observational research methods is critical. The goals of this first paper are to introduce the major types of observational study designs, discuss the issues of precision and validity, and provide practical insights into how to critically evaluate this literature in our field.", "title": "" }, { "docid": "neg:1840029_16", "text": "Aarskog-Scott syndrome (AAS), also known as faciogenital dysplasia (FGD, OMIM # 305400), is an X-linked disorder of recessive inheritance, characterized by short stature and facial, skeletal, and urogenital abnormalities. AAS is caused by mutations in the FGD1 gene (Xp11.22), with over 56 different mutations identified to date. We present the clinical and molecular analysis of four unrelated families of Mexican origin with an AAS phenotype, in whom FGD1 sequencing was performed. This analysis identified two stop mutations not previously reported in the literature: p.Gln664* and p.Glu380*. Phenotypically, every male patient met the clinical criteria of the syndrome, whereas discrepancies were found between phenotypes in female patients. 
Our results identify two novel mutations in FGD1, broadening the spectrum of reported mutations; and provide further delineation of the phenotypic variability previously described in AAS.", "title": "" }, { "docid": "neg:1840029_17", "text": "In this paper, we describe an SDN-based plastic architecture for 5G networks, designed to fulfill functional and performance requirements of new generation services and devices. The 5G logical architecture is presented in detail, and key procedures for dynamic control plane instantiation, device attachment, and service request and mobility management are specified. Key feature of the proposed architecture is flexibility, needed to support efficiently a heterogeneous set of services, including Machine Type Communication, Vehicle to X and Internet of Things traffic. These applications are imposing challenging targets, in terms of end-to-end latency, dependability, reliability and scalability. Additionally, backward compatibility with legacy systems is guaranteed by the proposed solution, and Control Plane and Data Plane are fully decoupled. The three levels of unified signaling unify Access, Non-access and Management strata, and a clean-slate forwarding layer, designed according to the software defined networking principle, replaces tunneling protocols for carrier grade mobility. Copyright © 2014 John Wiley & Sons, Ltd. *Correspondence R. Trivisonno, Huawei European Research Institute, Munich, Germany. E-mail: riccardo.trivisonno@huawei.com Received 13 October 2014; Revised 5 November 2014; Accepted 8 November 2014", "title": "" }, { "docid": "neg:1840029_18", "text": "The shared nature of the network in today's multi-tenant datacenters implies that network performance for tenants can vary significantly. This applies to both production datacenters and cloud environments. Network performance variability hurts application performance which makes tenant costs unpredictable and causes provider revenue loss. Motivated by these factors, this paper makes the case for extending the tenant-provider interface to explicitly account for the network. We argue this can be achieved by providing tenants with a virtual network connecting their compute instances. To this effect, the key contribution of this paper is the design of virtual network abstractions that capture the trade-off between the performance guarantees offered to tenants, their costs and the provider revenue.\n To illustrate the feasibility of virtual networks, we develop Oktopus, a system that implements the proposed abstractions. Using realistic, large-scale simulations and an Oktopus deployment on a 25-node two-tier testbed, we demonstrate that the use of virtual networks yields significantly better and more predictable tenant performance. Further, using a simple pricing model, we find that the our abstractions can reduce tenant costs by up to 74% while maintaining provider revenue neutrality.", "title": "" }, { "docid": "neg:1840029_19", "text": "This article demonstrates how documents prepared in hypertext or word processor format can be saved in portable document format (PDF). These files are self-contained documents that that have the same appearance on screen and in print, regardless of what kind of computer or printer are used, and regardless of what software package was originally used to for their creation. PDF files are compressed documents, invariably smaller than the original files, hence allowing rapid dissemination and download.", "title": "" } ]
1840030
Bagging, Boosting and the Random Subspace Method for Linear Classifiers
[ { "docid": "pos:1840030_0", "text": "Boosting is a general method for improving the performance of learning algorithms. A recently proposed boosting algorithm, Ada Boost, has been applied with great success to several benchmark machine learning problems using mainly decision trees as base classifiers. In this article we investigate whether Ada Boost also works as well with neural networks, and we discuss the advantages and drawbacks of different versions of the Ada Boost algorithm. In particular, we compare training methods based on sampling the training set and weighting the cost function. The results suggest that random resampling of the training data is not the main explanation of the success of the improvements brought by Ada Boost. This is in contrast to bagging, which directly aims at reducing variance and for which random resampling is essential to obtain the reduction in generalization error. Our system achieves about 1.4 error on a data set of on-line handwritten digits from more than 200 writers. A boosted multilayer network achieved 1.5 error on the UCI letters and 8.1 error on the UCI satellite data set, which is significantly better than boosted decision trees.", "title": "" } ]
[ { "docid": "neg:1840030_0", "text": "Hypothyroidism is a clinical disorder commonly encountered by the primary care physician. Untreated hypothyroidism can contribute to hypertension, dyslipidemia, infertility, cognitive impairment, and neuromuscular dysfunction. Data derived from the National Health and Nutrition Examination Survey suggest that about one in 300 persons in the United States has hypothyroidism. The prevalence increases with age, and is higher in females than in males. Hypothyroidism may occur as a result of primary gland failure or insufficient thyroid gland stimulation by the hypothalamus or pituitary gland. Autoimmune thyroid disease is the most common etiology of hypothyroidism in the United States. Clinical symptoms of hypothyroidism are nonspecific and may be subtle, especially in older persons. The best laboratory assessment of thyroid function is a serum thyroid-stimulating hormone test. There is no evidence that screening asymptomatic adults improves outcomes. In the majority of patients, alleviation of symptoms can be accomplished through oral administration of synthetic levothyroxine, and most patients will require lifelong therapy. Combination triiodothyronine/thyroxine therapy has no advantages over thyroxine monotherapy and is not recommended. Among patients with subclinical hypothyroidism, those at greater risk of progressing to clinical disease, and who may be considered for therapy, include patients with thyroid-stimulating hormone levels greater than 10 mIU per L and those who have elevated thyroid peroxidase antibody titers.", "title": "" }, { "docid": "neg:1840030_1", "text": "We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of connections follow power laws that indicate a scale-free pattern of connectivity, with most nodes having relatively few connections joined together through a small number of hubs with many connections. These regularities have also been found in certain other complex natural networks, such as the World Wide Web, but they are not consistent with many conventional models of semantic organization, based on inheritance hierarchies, arbitrarily structured networks, or high-dimensional vector spaces. We propose that these structures reflect the mechanisms by which semantic networks grow. We describe a simple model for semantic growth, in which each new word or concept is connected to an existing network by differentiating the connectivity pattern of an existing node. This model generates appropriate small-world statistics and power-law connectivity distributions, and it also suggests one possible mechanistic basis for the effects of learning history variables (age of acquisition, usage frequency) on behavioral performance in semantic processing tasks.", "title": "" }, { "docid": "neg:1840030_2", "text": "Reading an article and answering questions about its content is a fundamental task for natural language understanding. While most successful neural approaches to this problem rely on recurrent neural networks (RNNs), training RNNs over long documents can be prohibitively slow. We present a novel framework for question answering that can efficiently scale to longer documents while maintaining or even improving performance. 
Our approach combines a coarse, inexpensive model for selecting one or more relevant sentences and a more expensive RNN that produces the answer from those sentences. A central challenge is the lack of intermediate supervision for the coarse model, which we address using reinforcement learning. Experiments demonstrate state-of-the-art performance on a challenging subset of the WIKIREADING dataset (Hewlett et al., 2016) and on a newly-gathered dataset, while reducing the number of sequential RNN steps by 88% against a standard sequence to sequence model.", "title": "" }, { "docid": "neg:1840030_3", "text": "Feature selection is an important preprocessing step in data mining. Mutual information-based feature selection is a kind of popular and effective approaches. In general, most existing mutual information-based techniques are greedy methods, which are proven to be efficient but suboptimal. In this paper, mutual information-based feature selection is transformed into a global optimization problem, which provides a new idea for solving feature selection problems. Firstly, a single-objective feature selection algorithm combining relevance and redundancy is presented, which has well global searching ability and high computational efficiency. Furthermore, to improve the performance of feature selection, we propose a multi-objective feature selection algorithm. The method can meet different requirements and achieve a tradeoff among multiple conflicting objectives. On this basis, a hybrid feature selection framework is adopted for obtaining a final solution. We compare the performance of our algorithm with related methods on both synthetic and real datasets. Simulation results show the effectiveness and practicality of the proposed method.", "title": "" }, { "docid": "neg:1840030_4", "text": "This study explores teenage girls' narrations of the relationship between self-presentation and peer comparison on social media in the context of beauty. Social media provide new platforms that manifest media and peer influences on teenage girls' understanding of beauty towards an idealized notion. Through 24 in-depth interviews, this study examines secondary school girls' self-presentation and peer comparison behaviors on social network sites where the girls posted self-portrait photographs or “selfies” and collected peer feedback in the forms of “likes,” “followers,” and comments. Results of thematic analysis reveal a gap between teenage girls' self-beliefs and perceived peer standards of beauty. Feelings of low self-esteem and insecurity underpinned their efforts in edited self-presentation and quest for peer recognition. Peers played multiple roles that included imaginary audiences, judges, vicarious learning sources, and comparison targets in shaping teenage girls' perceptions and presentation of beauty. Findings from this study reveal the struggles that teenage girls face today and provide insights for future investigations and interventions pertinent to teenage girls’ presentation and evaluation of self on", "title": "" }, { "docid": "neg:1840030_5", "text": "Heavy metals are discharged into water from various industries. They can be toxic or carcinogenic in nature and can cause severe problems for humans and aquatic ecosystems. Thus, the removal of heavy metals from wastewater is a serious problem. The adsorption process is widely used for the removal of heavy metals from wastewater because of its low cost, availability and eco-friendly nature. 
Both commercial adsorbents and bioadsorbents are used for the removal of heavy metals from wastewater, with high removal capacity. This review article aims to compile scattered information on the different adsorbents that are used for heavy metal removal and to provide information on the commercially available and natural bioadsorbents used for removal of chromium, cadmium and copper, in particular.", "title": "" }, { "docid": "neg:1840030_6", "text": "The general employee scheduling problem extends the standard shift scheduling problem by discarding key limitations such as employee homogeneity and the absence of connections across time period blocks. The resulting increased generality yields a scheduling model that applies to real world problems confronted in a wide variety of areas. The price of the increased generality is a marked increase in size and complexity over related models reported in the literature. The integer programming formulation for the general employee scheduling problem, arising in typical real world settings, contains from one million to over four million zero-one variables. By contrast, studies of special cases reported over the past decade have focused on problems involving between 100 and 500 variables. We characterize the relationship between the general employee scheduling problem and related problems, reporting computational results for a procedure that solves these more complex problems within 98-99% optimality and runs on a microcomputer. We view our approach as an integration of management science and artificial intelligence techniques. The benefits of such an integration are suggested by the fact that other zero-one scheduling implementations reported in the literature, including the one awarded the Lancaster Prize in 1984, have obtained comparable approximations of optimality only for problems from two to three orders of magnitude smaller, and then only by the use of large mainframe computers.", "title": "" }, { "docid": "neg:1840030_7", "text": "Taking into consideration both external (i.e. technology acceptance factors, website service quality) as well as internal factors (i.e. specific holdup cost), this research explores how the customers’ satisfaction and loyalty, when shopping and purchasing on the internet, can be associated with each other and how they are affected by the above dynamics. This research adopts the Structural Equation Model (SEM) as the main analytical tool. It investigates those who used to have shopping experiences in major shopping websites of Taiwan.
The research results point out the following: First, customer satisfaction will positively influence customer loyalty directly; second, technology acceptance factors will positively influence customer satisfaction and loyalty directly; third, website service quality can positively influence customer satisfaction and loyalty directly; and fourth, specific holdup cost can positively influence customer loyalty directly, but cannot positively influence customer satisfaction directly. This paper draws on the research results for implications of managerial practice, and then suggests some empirical tactics in order to help enhance management performance for the website shopping industry.", "title": "" }, { "docid": "neg:1840030_8", "text": "Software transactions have received significant attention as a way to simplify shared-memory concurrent programming, but insufficient focus has been given to the precise meaning of software transactions or their interaction with other language features. This work begins to rectify that situation by presenting a family of formal languages that model a wide variety of behaviors for software transactions. These languages abstract away implementation details of transactional memory, providing high-level definitions suitable for programming languages. We use small-step semantics in order to represent explicitly the interleaved execution of threads that is necessary to investigate pertinent issues.\n We demonstrate the value of our core approach to modeling transactions by investigating two issues in depth. First, we consider parallel nesting, in which parallelism and transactions can nest arbitrarily. Second, we present multiple models for weak isolation, in which nontransactional code can violate the isolation of a transaction. For both, type-and-effect systems let us soundly and statically restrict what computation can occur inside or outside a transaction. We prove some key language-equivalence theorems to confirm that under sufficient static restrictions, in particular that each mutable memory location is used outside transactions or inside transactions (but not both), no program can determine whether the language implementation uses weak isolation or strong isolation.", "title": "" }, { "docid": "neg:1840030_9", "text": "Change detection is the process of finding the difference between two images taken at two different times, with the help of remote sensing. Here we try to find the difference between images of the same scene taken at different times, using the mean ratio and the log ratio. The log ratio is used to find the background image, while the foreground is detected by the mean ratio. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and of reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains a better performance. 
The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than its preexistences.", "title": "" }, { "docid": "neg:1840030_10", "text": "Deep reinforcement learning (DRL) is poised to revolutionize the field of artificial intelligence (AI) and represents a step toward building autonomous systems with a higherlevel understanding of the visual world. Currently, deep learning is enabling reinforcement learning (RL) to scale to problems that were previously intractable, such as learning to play video games directly from pixels. DRL algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of RL, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep RL, including the deep Q-network (DQN), trust region policy optimization (TRPO), and asynchronous advantage actor critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via RL. To conclude, we describe several current areas of research within the field.", "title": "" }, { "docid": "neg:1840030_11", "text": "Community detection and analysis is an important methodology for understanding the organization of various real-world networks and has applications in problems as diverse as consensus formation in social communities or the identification of functional modules in biochemical networks. Currently used algorithms that identify the community structures in large-scale real-world networks require a priori information such as the number and sizes of communities or are computationally expensive. In this paper we investigate a simple label propagation algorithm that uses the network structure alone as its guide and requires neither optimization of a predefined objective function nor prior information about the communities. In our algorithm every node is initialized with a unique label and at every step each node adopts the label that most of its neighbors currently have. In this iterative process densely connected groups of nodes form a consensus on a unique label to form communities. We validate the algorithm by applying it to networks whose community structures are known. We also demonstrate that the algorithm takes an almost linear time and hence it is computationally less expensive than what was possible so far.", "title": "" }, { "docid": "neg:1840030_12", "text": "BACKGROUND/PURPOSE\nMorbidity in children treated with appendicitis results either from late diagnosis or negative appendectomy. A Prospective analysis of efficacy of Pediatric Appendicitis Score for early diagnosis of appendicitis in children was conducted.\n\n\nMETHODS\nIn the last 5 years, 1,170 children aged 4 to 15 years with abdominal pain suggestive of acute appendicitis were evaluated prospectively. Group 1 (734) were patients with appendicitis and group 2 (436) nonappendicitis. Multiple linear logistic regression analysis of all clinical and investigative parameters was performed for a model comprising 8 variables to form a diagnostic score.\n\n\nRESULTS\nLogistic regression analysis yielded a model comprising 8 variables, all statistically significant, P <.001. 
These variables in order of their diagnostic index were (1) cough/percussion/hopping tenderness in the right lower quadrant of the abdomen (0.96), (2) anorexia (0.88), (3) pyrexia (0.87), (4) nausea/emesis (0.86), (5) tenderness over the right iliac fossa (0.84), (6) leukocytosis (0.81), (7) polymorphonuclear neutrophilia (0.80) and (8) migration of pain (0.80). Each of these variables was assigned a score of 1, except for physical signs (1 and 5), which were scored 2 to obtain a total of 10. The Pediatric Appendicitis Score had a sensitivity of 1, specificity of 0.92, positive predictive value of 0.96, and negative predictive value of 0.99.\n\n\nCONCLUSION\nPediatric appendicitis score is a simple, relatively accurate diagnostic tool for accessing an acute abdomen and diagnosing appendicitis in children.", "title": "" }, { "docid": "neg:1840030_13", "text": "This paper introduces an active object detection and localization framework that combines a robust untextured object detection and 3D pose estimation algorithm with a novel next-best-view selection strategy. We address the detection and localization problems by proposing an edge-based registration algorithm that refines the object position by minimizing a cost directly extracted from a 3D image tensor that encodes the minimum distance to an edge point in a joint direction/location space. We face the next-best-view problem by exploiting a sequential decision process that, for each step, selects the next camera position which maximizes the mutual information between the state and the next observations. We solve the intrinsic intractability of this solution by generating observations that represent scene realizations, i.e. combination samples of object hypothesis provided by the object detector, while modeling the state by means of a set of constantly resampled particles. Experiments performed on different real world, challenging datasets confirm the effectiveness of the proposed methods.", "title": "" }, { "docid": "neg:1840030_14", "text": "Sentences with different structures may convey the same meaning. Identification of sentences with paraphrases plays an important role in text related research and applications. This work focus on the statistical measures and semantic analysis of Malayalam sentences to detect the paraphrases. The statistical similarity measures between sentences, based on symbolic characteristics and structural information, could measure the similarity between sentences without any prior knowledge but only on the statistical information of sentences. The semantic representation of Universal Networking Language(UNL), represents only the inherent meaning in a sentence without any syntactic details. Thus, comparing the UNL graphs of two sentences can give an insight into how semantically similar the two sentences are. Combination of statistical similarity and semantic similarity score results the overall similarity score. This is the first attempt towards paraphrases of malayalam sentences.", "title": "" }, { "docid": "neg:1840030_15", "text": "The capacity to identify cheaters is essential for maintaining balanced social relationships, yet humans have been shown to be generally poor deception detectors. In fact, a plethora of empirical findings holds that individuals are only slightly better than chance when discerning lies from truths. Here, we report 5 experiments showing that judges' ability to detect deception greatly increases after periods of unconscious processing. 
Specifically, judges who were kept from consciously deliberating outperformed judges who were encouraged to do so or who made a decision immediately; moreover, unconscious thinkers' detection accuracy was significantly above chance level. The reported experiments further show that this improvement comes about because unconscious thinking processes allow for integrating the particularly rich information basis necessary for accurate lie detection. These findings suggest that the human mind is not unfit to distinguish between truth and deception but that this ability resides in previously overlooked processes.", "title": "" }, { "docid": "neg:1840030_16", "text": "Crowdsourcing emerged with the development of Web 2.0 technologies as a distributed online practice that harnesses the collective aptitudes and skills of the crowd in order to reach specific goals. The success of crowdsourcing systems is influenced by the users’ levels of participation and interactions on the platform. Therefore, there is a need for the incorporation of appropriate incentive mechanisms that would lead to sustained user engagement and quality contributions. Accordingly, the aim of the particular paper is threefold: first, to provide an overview of user motives and incentives, second, to present the corresponding incentive mechanisms used to trigger these motives, alongside with some indicative examples of successful crowdsourcing platforms that incorporate these incentive mechanisms, and third, to provide recommendations on their careful design in order to cater to the context and goal of the platform.", "title": "" }, { "docid": "neg:1840030_17", "text": "This paper introduces a novel weighted unsupervised learning for object detection using an RGB-D camera. This technique is feasible for detecting the moving objects in the noisy environments that are captured by an RGB-D camera. The main contribution of this paper is a real-time algorithm for detecting each object using weighted clustering as a separate cluster. In a preprocessing step, the algorithm calculates the pose 3D position X, Y, Z and RGB color of each data point and then it calculates each data point’s normal vector using the point’s neighbor. After preprocessing, our algorithm calculates k-weights for each data point; each weight indicates membership. Resulting in clustered objects of the scene. Keywords—Weighted Unsupervised Learning, Object Detection, RGB-D camera, Kinect", "title": "" }, { "docid": "neg:1840030_18", "text": "Wireless Sensor Network (WSN) consists of small low-cost, low-power multifunctional nodes interconnected to efficiently aggregate and transmit data to sink. Cluster-based approaches use some nodes as Cluster Heads (CHs) and organize WSNs efficiently for aggregation of data and energy saving. A CH conveys information gathered by cluster nodes and aggregates/compresses data before transmitting it to a sink. However, this additional responsibility of the node results in a higher energy drain leading to uneven network degradation. Low Energy Adaptive Clustering Hierarchy (LEACH) offsets this by probabilistically rotating cluster heads role among nodes with energy above a set threshold. CH selection in WSN is NP-Hard as optimal data aggregation with efficient energy savings cannot be solved in polynomial time. In this work, a modified firefly heuristic, synchronous firefly algorithm, is proposed to improve the network performance. 
Extensive simulation shows the proposed technique to perform well compared to LEACH and energy-efficient hierarchical clustering. Simulations show the effectiveness of the proposed method in decreasing the packet loss ratio by an average of 9.63% and improving the energy efficiency of the network when compared to LEACH and EEHC.", "title": "" }, { "docid": "neg:1840030_19", "text": "AUTOSAR is a standard for the development of software for embedded devices, primarily created for the automotive domain. It specifies a software architecture with more than 80 software modules that provide services to one or more software components. With the trend towards integrating safety-relevant systems into embedded devices, conformance with standards such as ISO 26262 [ISO11] or ISO/IEC 61508 [IEC10] becomes increasingly important. This article presents an approach to providing freedom from interference between software components by using the MPU available on many modern microcontrollers. Each software component gets its own dedicated memory area, a so-called memory partition. This concept is well known in other industries like the aerospace industry, where the IMA architecture is now well established. The memory partitioning mechanism is implemented by a microkernel, which integrates seamlessly into the architecture specified by AUTOSAR. The development has been performed as SEooC as described in ISO 26262, which is a new development approach. We describe the procedure for developing an SEooC. AUTOSAR: AUTomotive Open System ARchitecture, see [ASR12]. MPU: Memory Protection Unit. 3 IMA: Integrated Modular Avionics, see [RTCA11]. 4 SEooC: Safety Element out of Context, see [ISO11].", "title": "" } ]
1840031
Real-Time Data Analytics in Sensor Networks
[ { "docid": "pos:1840031_0", "text": "Wireless sensor networks typically consist of a large number of sensor nodes embedded in a physical space. Such sensors are low-power devices that are primarily used for monitoring several physical phenomena, potentially in remote harsh environments. Spatial and temporal dependencies between the readings at these nodes highly exist in such scenarios. Statistical contextual information encodes these spatio-temporal dependencies. It enables the sensors to locally predict their current readings based on their own past readings and the current readings of their neighbors. In this paper, we introduce context-aware sensors. Specifically, we propose a technique for modeling and learning statistical contextual information in sensor networks. Our approach is based on Bayesian classifiers; we map the problem of learning and utilizing contextual information to the problem of learning the parameters of a Bayes classifier, and then making inferences, respectively. We propose a scalable and energy-efficient procedure for online learning of these parameters in-network, in a distributed fashion. We discuss applications of our approach in discovering outliers and detection of faulty sensors, approximation of missing values, and in-network sampling. We experimentally analyze our approach in two applications, tracking and monitoring.", "title": "" } ]
[ { "docid": "neg:1840031_0", "text": "Recently interest has grown in applying activity theory, the leading theoretical approach in Russian psychology, to issues of human-computer interaction. This chapter analyzes why experts in the field are looking for an alternative to the currently dominant cognitive approach. The basic principles of activity theory are presented and their implications for human-computer interaction are discussed. The chapter concludes with an outline of the potential impact of activity theory on studies and design of computer use in real-life settings.", "title": "" }, { "docid": "neg:1840031_1", "text": "Central to many text analysis methods is the notion of a concept: a set of semantically related keywords characterizing a specific object, phenomenon, or theme. Advances in word embedding allow building a concept from a small set of seed terms. However, naive application of such techniques may result in false positive errors because of the polysemy of natural language. To mitigate this problem, we present a visual analytics system called ConceptVector that guides a user in building such concepts and then using them to analyze documents. Document-analysis case studies with real-world datasets demonstrate the fine-grained analysis provided by ConceptVector. To support the elaborate modeling of concepts, we introduce a bipolar concept model and support for specifying irrelevant words. We validate the interactive lexicon building interface by a user study and expert reviews. Quantitative evaluation shows that the bipolar lexicon generated with our methods is comparable to human-generated ones.", "title": "" }, { "docid": "neg:1840031_2", "text": "Fairness is a critical trait in decision making. As machine-learning models are increasingly being used in sensitive application domains (e.g. education and employment) for decision making, it is crucial that the decisions computed by such models are free of unintended bias. But how can we automatically validate the fairness of arbitrary machine-learning models? For a given machine-learning model and a set of sensitive input parameters, our Aeqitas approach automatically discovers discriminatory inputs that highlight fairness violation. At the core of Aeqitas are three novel strategies to employ probabilistic search over the input space with the objective of uncovering fairness violation. Our Aeqitas approach leverages inherent robustness property in common machine-learning models to design and implement scalable test generation methodologies. An appealing feature of our generated test inputs is that they can be systematically added to the training set of the underlying model and improve its fairness. To this end, we design a fully automated module that guarantees to improve the fairness of the model. We implemented Aeqitas and we have evaluated it on six stateof- the-art classifiers. Our subjects also include a classifier that was designed with fairness in mind. We show that Aeqitas effectively generates inputs to uncover fairness violation in all the subject classifiers and systematically improves the fairness of respective models using the generated test inputs. In our evaluation, Aeqitas generates up to 70% discriminatory inputs (w.r.t. 
the total number of inputs generated) and leverages these inputs to improve the fairness up to 94%.", "title": "" }, { "docid": "neg:1840031_3", "text": "This paper presents outdoor field experimental results to clarify the 4x4 MIMO throughput performance from applying multi-point transmission in the 15 GHz frequency band in the downlink of 5G cellular radio access system. The experimental results in large-cell scenario shows that up to 30 % throughput gain compared to non-multi-point transmission is achieved although the difference for the RSRP of two TPs is over 10 dB, so that the improvement for the antenna correlation is achievable and important aspect for the multi-point transmission in the 15 GHz frequency band as well as the improvement of the RSRP. Furthermore in small-cell scenario, the throughput gain of 70% and over 5 Gbps are achieved applying multi-point transmission in the condition of two different MIMO streams transmission from a single TP as distributed MIMO instead of four MIMO streams transmission from a single TP.", "title": "" }, { "docid": "neg:1840031_4", "text": "Traditionally, mobile robot design is based on wheels, tracks or legs with their respective advantages and disadvantages. Very few groups have explored designs with spherical morphology. During the past ten years, the number of robots with spherical shape and related studies has substantially increased, and a lot of work is done in this area of mobile robotics. Interest in robots with spherical morphology has also increased, in part due to NASA's search for an alternative design for a Mars rover since the wheel-based rover Spirit is now stuck for good in soft soil. This paper presents the spherical amphibious robot Groundbot, developed by Rotundus AB in Stockholm, Sweden, and describes in detail the navigation algorithm employed in this system.", "title": "" }, { "docid": "neg:1840031_5", "text": "The striatum is thought to play an essential role in the acquisition of a wide range of motor, perceptual, and cognitive skills, but neuroimaging has not yet demonstrated striatal activation during nonmotor skill learning. Functional magnetic resonance imaging was performed while participants learned probabilistic classification, a cognitive task known to rely on procedural memory early in learning and declarative memory later in learning. Multiple brain regions were active during probabilistic classification compared with a perceptual-motor control task, including bilateral frontal cortices, occipital cortex, and the right caudate nucleus in the striatum. The left hippocampus was less active bilaterally during probabilistic classification than during the control task, and the time course of this hippocampal deactivation paralleled the expected involvement of medial temporal structures based on behavioral studies of amnesic patients. Findings provide initial evidence for the role of frontostriatal systems in normal cognitive skill learning.", "title": "" }, { "docid": "neg:1840031_6", "text": "My sincere thanks to Donald Norman and David Rumelhart for their support of many years. I also wish to acknowledge the help of The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the sponsoring agencies. Approved for public release; distribution unlimited. 
Reproduction in whole or in part is permitted for any purpose of the United States Government. Requests for reprints should be sent to the", "title": "" }, { "docid": "neg:1840031_7", "text": "This study was conducted to identify the risk factors that are associated with neonatal mortality in lambs and kids in Jordan. The bacterial causes of mortality in lambs and kids were investigated. One hundred sheep and goat flocks were selected randomly from different areas of North Jordan at the beginning of the lambing season. The flocks were visited every other week to collect information and to take samples from freshly dead animals. By the end of the lambing season, flocks that had neonatal mortality rate ≥ 1.0% were considered as “case group” while flocks that had neonatal mortality rate less than 1.0% − as “control group”. The results indicated that neonatal mortality rate (within 4 weeks of age), in lambs and kids, was 3.2%. However, the early neonatal mortality rate (within 48 hours of age) was 2.01% and represented 62.1% of the neonatal mortalities. The following risk factors were found to be associated with the neonatal mortality in lambs and kids: not separating the neonates from adult animals; not vaccinating dams against infectious diseases (pasteurellosis, colibacillosis and enterotoxemia); walking more than 5 km and starvation-mismothering exposure. The causes of neonatal mortality in lambs and kids were: diarrhea (59.75%), respiratory diseases (13.3%), unknown causes (12.34%), and accident (8.39%). Bacteria responsible for neonatal mortality were: Escherichia coli, Pasteurella multocida, Clostridium perfringens and Staphylococcus aureus. However, E. coli was the most frequent bacterial species identified as cause of neonatal mortality in lambs and kids and represented 63.4% of all bacterial isolates. The E. coli isolates belonged to 10 serogroups, the O44 and O26 being the most frequent isolates.", "title": "" }, { "docid": "neg:1840031_8", "text": "The Asian-Pacific Association for the Study of the Liver (APASL) convened an international working party on the \"APASL consensus statements and recommendation on management of hepatitis C\" in March, 2015, in order to revise \"APASL consensus statements and management algorithms for hepatitis C virus infection (Hepatol Int 6:409-435, 2012)\". The working party consisted of expert hepatologists from the Asian-Pacific region gathered at Istanbul Congress Center, Istanbul, Turkey on 13 March 2015. New data were presented, discussed and debated to draft a revision. Participants of the consensus meeting assessed the quality of cited studies. Finalized recommendations on treatment of hepatitis C are presented in this review.", "title": "" }, { "docid": "neg:1840031_9", "text": "This study is an application of social identity theory to feminist consciousness and activism. For women, strong gender identifications may enhance support for equality struggles, whereas for men, they may contribute to backlashes against feminism. University students (N = 276), primarily Euroamerican, completed a measure of gender self-esteem (GSE, that part of one’s self-concept derived from one’s gender), and two measures of feminism. High GSE in women and low GSE in men were related to support for feminism. 
Consistent with past research, women were more supportive of feminism than men, and in both genders, support for feminist ideas was greater than self-identification as a feminist.", "title": "" }, { "docid": "neg:1840031_10", "text": "Machinima is a low-cost alternative to full production filmmaking. However, creating quality cinematic visualizations with existing machinima techniques still requires a high degree of talent and effort. We introduce a lightweight artificial intelligence system, Cambot, that can be used to assist in machinima production. Cambot takes a script as input and produces a cinematic visualization. Unlike other virtual cinematography systems, Cambot favors an offline algorithm coupled with an extensible library of specific modular and reusable facets of cinematic knowledge. One of the advantages of this approach to virtual cinematography is a tight coordination between the positions and movements of the camera and the actors.", "title": "" }, { "docid": "neg:1840031_11", "text": "This paper discusses the active and reactive power control method for a modular multilevel converter (MMC) based grid-connected PV system. The voltage vector space analysis is performed by using average value models for the feasibility analysis of reactive power compensation (RPC). The proposed double-loop control strategy enables the PV system to handle unidirectional active power flow and bidirectional reactive power flow. Experiments have been performed on a laboratory-scaled modular multilevel PV inverter. The experimental results verify the correctness and feasibility of the proposed strategy.", "title": "" }, { "docid": "neg:1840031_12", "text": "Search Ranking and Recommendations are fundamental problems of crucial interest to major Internet companies, including web search engines, content publishing websites and marketplaces. However, despite sharing some common characteristics a one-size-fits-all solution does not exist in this space. Given a large difference in content that needs to be ranked, personalized and recommended, each marketplace has a somewhat unique challenge. Correspondingly, at Airbnb, a short-term rental marketplace, search and recommendation problems are quite unique, being a two-sided marketplace in which one needs to optimize for host and guest preferences, in a world where a user rarely consumes the same item twice and one listing can accept only one guest for a certain set of dates. In this paper we describe Listing and User Embedding techniques we developed and deployed for purposes of Real-time Personalization in Search Ranking and Similar Listing Recommendations, two channels that drive 99% of conversions. The embedding models were specifically tailored for Airbnb marketplace, and are able to capture guest's short-term and long-term interests, delivering effective home listing recommendations. We conducted rigorous offline testing of the embedding models, followed by successful online tests before fully deploying them into production.", "title": "" }, { "docid": "neg:1840031_13", "text": "Lacking an operational theory to explain the organization and behaviour of matter in unicellular and multicellular organisms hinders progress in biology. Such a theory should address life cycles from ontogenesis to death. 
This theory would complement the theory of evolution that addresses phylogenesis, and would posit theoretical extensions to accepted physical principles and default states in order to grasp the living state of matter and define proper biological observables. Thus, we favour adopting the default state implicit in Darwin’s theory, namely, cell proliferation with variation plus motility, and a framing principle, namely, life phenomena manifest themselves as non-identical iterations of morphogenetic processes. From this perspective, organisms become a consequence of the inherent variability generated by proliferation, motility and self-organization. Morphogenesis would then be the result of the default state plus physical constraints, like gravity, and those present in living organisms, like muscular tension.", "title": "" }, { "docid": "neg:1840031_14", "text": "Wireless sensor networks can be deployed in any attended or unattended environments like environmental monitoring, agriculture, military, health care etc., where the sensor nodes forward the sensing data to the gateway node. As the sensor node has very limited battery power and cannot be recharged after deployment, it is very important to design a secure, effective and light weight user authentication and key agreement protocol for accessing the sensed data through the gateway node over insecure networks. Most recently, Turkanovic et al. proposed a light weight user authentication and key agreement protocol for accessing the services of the WSNs environment and claimed that the same protocol is efficient in terms of security and complexities than related existing protocols. In this paper, we have demonstrated several security weaknesses of the Turkanovic et al. protocol. Additionally, we have also illustrated that the authentication phase of the Turkanovic et al. is not efficient in terms of security parameters. In order to fix the above mentioned security pitfalls, we have primarily designed a novel architecture for the WSNs environment and basing upon which a proposed scheme has been presented for user authentication and key agreement scheme. The security validation of the proposed protocol has done by using BAN logic, which ensures that the protocol achieves mutual authentication and session key agreement property securely between the entities involved. Moreover, the proposed scheme has simulated using well popular AVISPA security tool, whose simulation results show that the protocol is SAFE under OFMC and CL-AtSe models. Besides, several security issues informally confirm that the proposed protocol is well protected in terms of relevant security attacks including the above mentioned security pitfalls. The proposed protocol not only resists the above mentioned security weaknesses, but also achieves complete security requirements including specially energy efficiency, user anonymity, mutual authentication and user-friendly password change phase. Performance comparison section ensures that the protocol is relatively efficient in terms of complexities. The security and performance analysis makes the system so efficient that the proposed protocol can be implemented in real-life application. © 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840031_15", "text": "SIR, Arndt–Gottron scleromyxoedema is a rare fibromucinous disorder regarded as a variant of the lichen myxoedematosus. 
The diagnostic criteria are a generalized papular and sclerodermoid eruption, a microscopic triad of mucin deposition, fibroblast proliferation and fibrosis, a monoclonal gammopathy (mostly IgG-k paraproteinaemia) and the absence of a thyroid disorder. This disease initially presents with sclerosis of the skin and clusters of small lichenoid papules with a predilection for the face, neck and the forearm. Progressively, the skin lesions can become more widespread and the induration of skin can result in a scleroderma-like condition with sclerodactyly and microstomia, reduced mobility and disability. Systemic involvement is common, e.g. upper gastrointestinal dysmotility, proximal myopathy, joint contractures, neurological complications such as psychic disturbances and encephalopathy, obstructive ⁄restrictive lung disease, as well as renal and cardiovascular involvement. Numerous treatment options have been described in the literature. These include corticosteroids, retinoids, thalidomide, extracorporeal photopheresis (ECP), psoralen plus ultraviolet A radiation, ciclosporin, cyclophosphamide, melphalan or autologous stem cell transplantation. In September 1999, a 48-year-old white female first noticed an erythematous induration with a lichenoid papular eruption on her forehead. Three months later the lesions became more widespread including her face (Fig. 1a), neck, shoulders, forearms (Fig. 2a) and legs. When the patient first presented in our department in June 2000, she had problems opening her mouth fully as well as clenching both hands or moving her wrist. The histological examination of the skin biopsy was highly characteristic of Arndt–Gottron scleromyxoedema. Full blood count, blood morphology, bone marrow biopsy, bone scintigraphy and thyroid function tests were normal. Serum immunoelectrophoresis revealed an IgG-k paraproteinaemia. Urinary Bence-Jones proteins were negative. No systemic involvement was disclosed. We initiated ECP therapy in August 2000, initially at 2-week intervals (later monthly) on two succeeding days. When there was no improvement after 3 months, we also administered cyclophosphamide (Endoxana ; Baxter Healthcare Ltd, Newbury, U.K.) at a daily dose of 100 mg with mesna 400 mg (Uromitexan ; Baxter) prophylaxis. The response to this therapy was rather moderate. In February 2003 the patient developed a change of personality and loss of orientation and was admitted to hospital. The extensive neurological, radiological and microbiological diagnostics were unremarkable at that time. A few hours later the patient had seizures and was put on artificial ventilation in an intensive care unit. The patient was comatose for several days. A repeated magnetic resonance imaging scan was still normal, but the cerebrospinal fluid tap showed a dysfunction of the blood–cerebrospinal fluid barrier. A bilateral loss of somatosensory evoked potentials was noticeable. The neurological symptoms were classified as a ‘dermatoneuro’ syndrome, a rare extracutaneous manifestation of scleromyxoedema. After initiation of treatment with methylprednisolone (Urbason ; Aventis, Frankfurt, Germany) the neurological situation normalized in the following 2 weeks. No further medical treatment was necessary. In April 2003 therapy options were re-evaluated and the patient was started and maintained on a 7-day course of melphalan 7.5 mg daily (Alkeran ; GlaxoSmithKline, Uxbridge, U.K.) in combination with prednisolone 40 mg daily (Decortin H ; Merck, Darmstadt, Germany) every 6 weeks. 
This treat(a)", "title": "" }, { "docid": "neg:1840031_16", "text": "We describe isone, a tool that facilitates the visual exploration of social networks. Social network analysis is a methodological approach in the social sciences using graph-theoretic concepts to describe, understand and explain social structure. The isone software is an attempt to integrate analysis and visualization of social networks and is intended to be used in research and teaching. While we are primarily focussing on users in the social sciences, several features provided in the tool will be useful in other fields as well. In contrast to more conventional mathematical software in the social sciences that aim at providing a comprehensive suite of analytical options, our emphasis is on complementing every option we provide with tailored means of graphical interaction. We attempt to make complicated types of analysis and data handling transparent, intuitive, and more readily accessible. User feedback indicates that many who usually regard data exploration and analysis complicated and unnerving enjoy the playful nature of visual interaction. Consequently, much of the tool is about graph drawing methods specifically adapted to facilitate visual data exploration. The origins of isone lie in an interdisciplinary cooperation with researchers from political science which resulted in innovative uses of graph drawing methods for social network visualization, and prototypical implementations thereof. With the growing demand for access to these methods, we started implementing an integrated tool for public use. It should be stressed, however, that isone remains a research platform and testbed for innovative methods, and is not intended to become", "title": "" }, { "docid": "neg:1840031_17", "text": "There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks (DBNs); however, scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model that scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique that shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images.", "title": "" }, { "docid": "neg:1840031_18", "text": "We present a computer vision tool that analyses video from a CCTV system installed on fishing trawlers to monitor discarded fish catch. The system aims to support expert observers who review the footage and verify numbers, species and sizes of discarded fish. The operational environment presents a significant challenge for these tasks. Fish are processed below deck under fluorescent lights, they are randomly oriented and there are multiple occlusions. The scene is unstructured and complicated by the presence of fishermen processing the catch. We describe an approach to segmenting the scene and counting fish that exploits the N4-Fields algorithm. 
We performed extensive tests of the algorithm on a data set comprising 443 frames from 6 belts. Results indicate the relative count error (for individual fish) ranges from 2% to 16%. We believe this is the first system that is able to handle footage from operational trawlers.", "title": "" }, { "docid": "neg:1840031_19", "text": "3D object detection is an essential task in autonomous driving. Recent techniques excel with highly accurate detection rates, provided the 3D input data is obtained from precise but expensive LiDAR technology. Approaches based on cheaper monocular or stereo imagery data have, until now, resulted in drastically lower accuracies — a gap that is commonly attributed to poor image-based depth estimation. However, in this paper we argue that data representation (rather than its quality) accounts for the majority of the difference. Taking the inner workings of convolutional neural networks into consideration, we propose to convert image-based depth maps to pseudo-LiDAR representations — essentially mimicking LiDAR signal. With this representation we can apply different existing LiDAR-based detection algorithms. On the popular KITTI benchmark, our approach achieves impressive improvements over the existing state-of-the-art in image-based performance — raising the detection accuracy of objects within 30m range from the previous state-of-the-art of 22% to an unprecedented 74%. At the time of submission our algorithm holds the highest entry on the KITTI 3D object detection leaderboard for stereo image based approaches.", "title": "" } ]
1840032
Health Media: From Multimedia Signals to Personal Health Insights
[ { "docid": "pos:1840032_0", "text": "Smart health devices monitor certain health parameters, are connected to an Internet service, and target primarily a lay consumer seeking a healthy lifestyle rather than the medical expert or the chronically ill person. These devices offer tremendous opportunities for wellbeing and self-management of health. This department reviews smart health devices from a pervasive computing perspective, discussing various devices and their functionality, limitations, and potential.", "title": "" } ]
[ { "docid": "neg:1840032_0", "text": "The ability to reason with natural language is a fundamental prerequisite for many NLP tasks such as information extraction, machine translation and question answering. To quantify this ability, systems are commonly tested whether they can recognize textual entailment, i.e., whether one sentence can be inferred from another one. However, in most NLP applications only single source sentences instead of sentence pairs are available. Hence, we propose a new task that measures how well a model can generate an entailed sentence from a source sentence. We take entailment-pairs of the Stanford Natural Language Inference corpus and train an LSTM with attention. On a manually annotated test set we found that 82% of generated sentences are correct, an improvement of 10.3% over an LSTM baseline. A qualitative analysis shows that this model is not only capable of shortening input sentences, but also inferring new statements via paraphrasing and phrase entailment. We then apply this model recursively to input-output pairs, thereby generating natural language inference chains that can be used to automatically construct an entailment graph from source sentences. Finally, by swapping source and target sentences we can also train a model that given an input sentence invents additional information to generate a new sentence.", "title": "" }, { "docid": "neg:1840032_1", "text": "Math anxiety is a negative affective reaction to situations involving math. Previous work demonstrates that math anxiety can negatively impact math problem solving by creating performance-related worries that disrupt the working memory needed for the task at hand. By leveraging knowledge about the mechanism underlying the math anxiety-performance relationship, we tested the effectiveness of a short expressive writing intervention that has been shown to reduce intrusive thoughts and improve working memory availability. Students (N = 80) varying in math anxiety were asked to sit quietly (control group) prior to completing difficulty-matched math and word problems or to write about their thoughts and feelings regarding the exam they were about to take (expressive writing group). For the control group, high math-anxious individuals (HMAs) performed significantly worse on the math problems than low math-anxious students (LMAs). In the expressive writing group, however, this difference in math performance across HMAs and LMAs was significantly reduced. Among HMAs, the use of words related to anxiety, cause, and insight in their writing was positively related to math performance. Expressive writing boosts the performance of anxious students in math-testing situations.", "title": "" }, { "docid": "neg:1840032_2", "text": "Recent approaches to Reinforcement Learning (RL) with function approximation include Neural Fitted Q Iteration and the use of Gaussian Processes. They belong to the class of fitted value iteration algorithms, which use a set of support points to fit the value-function in a batch iterative process. These techniques make efficient use of a reduced number of samples by reusing them as needed, and are appropriate for applications where the cost of experiencing a new sample is higher than storing and reusing it, but this is at the expense of increasing the computational effort, since these algorithms are not incremental. On the other hand, non-parametric models for function approximation, like Gaussian Processes, are preferred against parametric ones, due to their greater flexibility. 
A further advantage of using Gaussian Processes for function approximation is that they allow to quantify the uncertainty of the estimation at each point. In this paper, we propose a new approach for RL in continuous domains based on Probability Density Estimations. Our method combines the best features of the previous methods: it is non-parametric and provides an estimation of the variance of the approximated function at any point of the domain. In addition, our method is simple, incremental, and computationally efficient. All these features make this approach more appealing than Gaussian Processes and fitted value iteration algorithms in general.", "title": "" }, { "docid": "neg:1840032_3", "text": "In this paper we present a intrusion detection module capable of detecting malicious network traffic in a SCADA (Supervisory Control and Data Acquisition) system. Malicious data in a SCADA system disrupt its correct functioning and tamper with its normal operation. OCSVM (One-Class Support Vector Machine) is an intrusion detection mechanism that does not need any labeled data for training or any information about the kind of anomaly is expecting for the detection process. This feature makes it ideal for processing SCADA environment data and automate SCADA performance monitoring. The OCSVM module developed is trained by network traces off line and detect anomalies in the system real time. The module is part of an IDS (Intrusion Detection System) system developed under CockpitCI project and communicates with the other parts of the system by the exchange of IDMEF (Intrusion Detection Message Exchange Format) messages that carry information about the source of the incident, the time and a classification of the alarm.", "title": "" }, { "docid": "neg:1840032_4", "text": "Domain adaptation addresses the problem where data instances of a source domain have different distributions from that of a target domain, which occurs frequently in many real life scenarios. This work focuses on unsupervised domain adaptation, where labeled data are only available in the source domain. We propose to interpolate subspaces through dictionary learning to link the source and target domains. These subspaces are able to capture the intrinsic domain shift and form a shared feature representation for cross domain recognition. Further, we introduce a quantitative measure to characterize the shift between two domains, which enables us to select the optimal domain to adapt to the given multiple source domains. We present experiments on face recognition across pose, illumination and blur variations, cross dataset object recognition, and report improved performance over the state of the art.", "title": "" }, { "docid": "neg:1840032_5", "text": "This course will focus on describing techniques for handling datasets larger than main memory in scientific visualization and computer graphics. Recently, several external memory techniques have been developed for a wide variety of graphics and visualization problems, including surface simplification, volume rendering, isosurface generation, ray tracing, surface reconstruction, and so on. This work has had significant impact given that in recent years there has been a rapid increase in the raw size of datasets. Several technological trends are contributing to this, such as the development of high-resolution 3D scanners, and the need to visualize ASCI-size (Accelerated Strategic Computing Initiative) datasets. 
Another important push for this kind of technology is the growing speed gap between main memory and caches, such a gap penalizes algorithms which do not optimize for coherence of access. Because of these reasons, much research in computer graphics focuses on developing out-of-core (and often cache-friendly) techniques. This course reviews fundamental issues, current problems, and unresolved solutions, and presents an in-depth study of external memory algorithms developed in recent years. Its goal is to provide students and graphics researchers and professionals with an effective knowledge of current techniques, as well as the foundation to develop novel techniques on their own. Schedule (tentative) 5 min Introduction to the course Silva 45 min Overview of external memory algorithms Chiang 40 min Out-of-core scientific visualization Silva", "title": "" }, { "docid": "neg:1840032_6", "text": "This paper presents a single BCD technology platform with high performance power devices at a wide range of operating voltages. The platform offers 6 V to 70 V LDMOS devices. All devices offer best-in-class specific on-resistance of 20 to 40 % lower than that of the state-of-the-art IC-based LDMOS devices and robustness better than the square SOA (safe-operating-area). Fully isolated LDMOS devices, in which independent bias is capable for circuit flexibility, demonstrate superior specific on-resistance (e.g. 11.9 mΩ-mm2 for breakdown voltage of 39 V). Moreover, the unusual sudden current enhancement appeared in the ID-VD saturation region of most of the high voltage LDMOS devices is significantly suppressed.", "title": "" }, { "docid": "neg:1840032_7", "text": "Time stamping is a technique used to prove the existence of certain digital data prior to a specific point in time. With the recent development of electronic commerce, time stamping is now widely recognized as an important technique used to ensure the integrity of digital data for a long time period. Various time stamping schemes and services have been proposed. When one uses a certain time stamping service, he should confirm in advance that its security level sufficiently meets his security requirements. However, time stamping schemes are generally so complicated that it is not easy to evaluate their security levels accurately. It is important for users to have a good grasp of current studies of time stamping schemes and to make use of such studies to select an appropriate time stamping service. Une and Matsumoto [2000], [2001a], [2001b] and [2002] have proposed a method of classifying time stamping schemes and evaluating their security systematically. Their papers have clarified the objectives, functions and entities involved in time stamping schemes and have discussed the conditions sufficient to detect the alteration of a time stamp in each scheme. This paper explains existing problems regarding the security evaluation of time stamping schemes and the results of Une and Matsumoto [2000], [2001a], [2001b] and [2002]. It also applies their results to some existing time stamping schemes and indicates possible directions of further research into time stamping schemes.", "title": "" }, { "docid": "neg:1840032_8", "text": "In this paper, we are interested in developing compositional models to explicit representing pose, parts and attributes and tackling the tasks of attribute recognition, pose estimation and part localization jointly. 
This is different from the recent trend of using CNN-based approaches for training and testing on these tasks separately with a large amount of data. Conventional attribute models typically use a large number of region-based attribute classifiers on parts of pre-trained pose estimator without explicitly detecting the object or its parts, or considering the correlations between attributes. In contrast, our approach jointly represents both the object parts and their semantic attributes within a unified compositional hierarchy. We apply our attributed grammar model to the task of human parsing by simultaneously performing part localization and attribute recognition. We show our modeling helps performance improvements on pose-estimation task and also outperforms on other existing methods on attribute prediction task.", "title": "" }, { "docid": "neg:1840032_9", "text": "Using on-chip interconnection networks in place of ad-hoc glo-bal wiring structures the top level wires on a chip and facilitates modular design. With this approach, system modules (processors, memories, peripherals, etc...) communicate by sending packets to one another over the network. The structured network wiring gives well-controlled electrical parameters that eliminate timing iterations and enable the use of high-performance circuits to reduce latency and increase bandwidth. The area overhead required to implement an on-chip network is modest, we estimate 6.6%. This paper introduces the concept of on-chip networks, sketches a simple network, and discusses some challenges in the architecture and design of these networks.", "title": "" }, { "docid": "neg:1840032_10", "text": "Seeds of chickpea (Cicer arietinum L.) were exposed in batches to static magnetic fields of strength from 0 to 250 mT in steps of 50 mT for 1-4 h in steps of 1 h for all fields. Results showed that magnetic field application enhanced seed performance in terms of laboratory germination, speed of germination, seedling length and seedling dry weight significantly compared to unexposed control. However, the response varied with field strength and duration of exposure without any particular trend. Among the various combinations of field strength and duration, 50 mT for 2 h, 100 mT for 1 h and 150 mT for 2 h exposures gave best results. Exposure of seeds to these three magnetic fields improved seed coat membrane integrity as it reduced the electrical conductivity of seed leachate. In soil, seeds exposed to these three treatments produced significantly increased seedling dry weights of 1-month-old plants. The root characteristics of the plants showed dramatic increase in root length, root surface area and root volume. The improved functional root parameters suggest that magnetically treated chickpea seeds may perform better under rainfed (un-irrigated) conditions where there is a restrictive soil moisture regime.", "title": "" }, { "docid": "neg:1840032_11", "text": "In this paper, the permeation properties of three types of liquids into HTV silicone rubber with different Alumina Tri-hydrate (ATH) contents had been investigated by weight gain experiments. The influence of differing exposure conditions on the diffusion into silicone rubber, in particular the effect of solution type, solution concentration, and test temperature were explored. Experimental results indicated that the liquids permeation into silicone rubber obeyed anomalous diffusion ways instead of the Fick diffusion model. 
Moreover, higher temperature would accelerate the permeation process, and silicone rubber with higher ATH content absorbed more liquids than that with lower ATH content. Furthermore, the material properties of silicone rubber before and after liquid permeation were examined using Fourier infrared spectroscopy (FTIR), thermal gravimetric analysis (TGA) and scanning electron microscopy (SEM), respectively. The permeation mechanisms and process were discussed in depth by combining the weight gain experiment results and the material properties analyses.", "title": "" }, { "docid": "neg:1840032_12", "text": "Process mining techniques have proven to be a valuable tool for analyzing the execution of business processes. They rely on logs that identify events at an activity level, i.e., most process mining techniques assume that the information system explicitly supports the notion of activities/tasks. This is often not the case and only low-level events are being supported and logged. For example, users may provide different pieces of data which together constitute a single activity. The technique introduced in this paper uses clustering algorithms to derive activity logs from lower-level data modification logs, as produced by virtually every information system. This approach was implemented in the context of the ProM framework and its goal is to widen the scope of processes that can be analyzed using existing process mining techniques.", "title": "" }, { "docid": "neg:1840032_13", "text": "Pair-wise ranking methods have been widely used in recommender systems to deal with implicit feedback. They attempt to discriminate between a handful of observed items and the large set of unobserved items. In these approaches, however, user preferences and item characteristics cannot be estimated reliably due to overfitting given highly sparse data. To alleviate this problem, in this paper, we propose a novel hierarchical Bayesian framework which incorporates “bag-ofwords” type meta-data on items into pair-wise ranking models for one-class collaborative filtering. The main idea of our method lies in extending the pair-wise ranking with a probabilistic topic modeling. Instead of regularizing item factors through a zero-mean Gaussian prior, our method introduces item-specific topic proportions as priors for item factors. As a by-product, interpretable latent factors for users and items may help explain recommendations in some applications. We conduct an experimental study on a real and publicly available dataset, and the results show that our algorithm is effective in providing accurate recommendation and interpreting user factors and item factors.", "title": "" }, { "docid": "neg:1840032_14", "text": "This study evaluated the exploitation of unprocessed agricultural discards in the form of fresh vegetable leaves as a diet for the sea urchin Paracentrotus lividus through the assessment of their effects on gonad yield and quality. A stock of wild-caught P. lividus was fed on discarded leaves from three different species (Beta vulgaris, Brassica oleracea, and Lactuca sativa) and the macroalga Ulva lactuca for 3 months under controlled conditions. At the beginning and end of the experiment, total and gonad weight were measured, while gonad and diet total carbon (C%), nitrogen (N%), δ13C, δ15N, carbohydrates, lipids, and proteins were analyzed. 
The results showed that agricultural discards provided for the maintenance of gonad index and nutritional value (carbohydrate, lipid, and protein content) of initial specimens. L. sativa also improved gonadic color. The results of this study suggest that fresh vegetable discards may be successfully used in the preparation of more balanced diets for sea urchin aquaculture. The use of agricultural discards in prepared diets offers a number of advantages, including an abundant resource, the recycling of discards into new organic matter, and reduced pressure on marine organisms (i.e., macroalgae) in the production of food for cultured organisms.", "title": "" }, { "docid": "neg:1840032_15", "text": "In recent years, several noteworthy large, cross-domain, and openly available knowledge graphs (KGs) have been created. These include DBpedia, Freebase, OpenCyc, Wikidata, and YAGO. Although extensively in use, these KGs have not been subject to an in-depth comparison so far. In this survey, we provide data quality criteria according to which KGs can be analyzed, and we analyze and compare the above mentioned KGs. Furthermore, we propose a framework for finding the most suitable KG for a given setting.", "title": "" }, { "docid": "neg:1840032_16", "text": "Using a multimethod approach, the authors conducted 4 studies to test life span hypotheses about goal orientations across adulthood. Confirming expectations, in Studies 1 and 2 younger adults reported a primary growth orientation in their goals, whereas older adults reported a stronger orientation toward maintenance and loss prevention. Orientation toward prevention of loss correlated negatively with well-being in younger adults. In older adults, orientation toward maintenance was positively associated with well-being. Studies 3 and 4 extend findings of a self-reported shift in goal orientation to the level of behavioral choice involving cognitive and physical fitness goals. Studies 3 and 4 also examine the role of expected resource demands. The shift in goal orientation is discussed as an adaptive mechanism to manage changing opportunities and constraints across adulthood.", "title": "" }, { "docid": "neg:1840032_17", "text": "In this work we introduce SnooperTrack, an algorithm for the automatic detection and tracking of text objects — such as store names, traffic signs, license plates, and advertisements — in videos of outdoor scenes. The purpose is to improve the performance of text detection in still images by taking advantage of the temporal coherence in videos. We first propose an efficient tracking algorithm using a particle filtering framework with original region descriptors. The second contribution is our strategy to merge tracked regions and new detections. We also propose an improved version of our previously published text detection algorithm for still images. Tests indicate that SnooperTrack is fast and robust, enables false-positive suppression, and achieves strong performance in complex videos of outdoor scenes.", "title": "" }, { "docid": "neg:1840032_18", "text": "Neural networks are artificial learning systems. For more than two decades, they have been used to help detect hostile behaviors in computer systems. This review describes those systems and their limits. It defines neural networks and presents their characteristics. It also itemizes the neural networks that are used in intrusion detection systems. The state of the art on IDSs built from neural networks is reviewed.
In this paper, we also make a taxonomy and a comparison of neural networks intrusion detection systems. We end this review with a set of remarks and future works that can be done in order to improve the systems that have been presented. This work is the result of a meticulous scan of the literature.", "title": "" }, { "docid": "neg:1840032_19", "text": "INTRODUCTION In recent years “big data” has become something of a buzzword in business, computer science, information studies, information systems, statistics, and many other fields. As technology continues to advance, we constantly generate an ever-increasing amount of data. This growth does not differentiate between individuals and businesses, private or public sectors, institutions of learning and commercial entities. It is nigh universal and therefore warrants further study.", "title": "" } ]
1840033
Rectangular Dielectric Resonator Antenna Array for 28 GHz Applications
[ { "docid": "pos:1840033_0", "text": "This article presents empirically-based large-scale propagation path loss models for fifth-generation cellular network planning in the millimeter-wave spectrum, based on real-world measurements at 28 GHz and 38 GHz in New York City and Austin, Texas, respectively. We consider industry-standard path loss models used for today's microwave bands, and modify them to fit the propagation data measured in these millimeter-wave bands for cellular planning. Network simulations with the proposed models using a commercial planning tool show that roughly three times more base stations are required to accommodate 5G networks (cell radii up to 200 m) compared to existing 3G and 4G systems (cell radii of 500 m to 1 km) when performing path loss simulations based on arbitrary pointing angles of directional antennas. However, when directional antennas are pointed in the single best directions at the base station and mobile, coverage range is substantially improved with little increase in interference, thereby reducing the required number of 5G base stations. Capacity gains for random pointing angles are shown to be 20 times greater than today's fourth-generation Long Term Evolution networks, and can be further improved when using directional antennas pointed in the strongest transmit and receive directions with the help of beam combining techniques.", "title": "" } ]
[ { "docid": "neg:1840033_0", "text": "Indoor positioning has grasped great attention in recent years. A number of efforts have been exerted to achieve high positioning accuracy. However, there exists no technology that proves its efficacy in various situations. In this paper, we propose a novel positioning method based on fusing trilateration and dead reckoning. We employ Kalman filtering as a position fusion algorithm. Moreover, we adopt an Android device with Bluetooth Low Energy modules as the communication platform to avoid excessive energy consumption and to improve the stability of the received signal strength. To further improve the positioning accuracy, we take the environmental context information into account while generating the position fixes. Extensive experiments in a testbed are conducted to examine the performance of three approaches: trilateration, dead reckoning and the fusion method. Additionally, the influence of the knowledge of the environmental context is also examined. Finally, our proposed fusion method outperforms both trilateration and dead reckoning in terms of accuracy: experimental results show that the Kalman-based fusion, for our settings, achieves a positioning accuracy of less than one meter.", "title": "" }, { "docid": "neg:1840033_1", "text": "BACKGROUND\nMany low- and middle-income countries are undergoing a nutrition transition associated with rapid social and economic transitions. We explore the coexistence of over and under- nutrition at the neighborhood and household level, in an urban poor setting in Nairobi, Kenya.\n\n\nMETHODS\nData were collected in 2010 on a cohort of children aged under five years born between 2006 and 2010. Anthropometric measurements of the children and their mothers were taken. Additionally, dietary intake, physical activity, and anthropometric measurements were collected from a stratified random sample of adults aged 18 years and older through a separate cross-sectional study conducted between 2008 and 2009 in the same setting. Proportions of stunting, underweight, wasting and overweight/obesity were dettermined in children, while proportions of underweight and overweight/obesity were determined in adults.\n\n\nRESULTS\nOf the 3335 children included in the analyses with a total of 6750 visits, 46% (51% boys, 40% girls) were stunted, 11% (13% boys, 9% girls) were underweight, 2.5% (3% boys, 2% girls) were wasted, while 9% of boys and girls were overweight/obese respectively. Among their mothers, 7.5% were underweight while 32% were overweight/obese. A large proportion (43% and 37%%) of overweight and obese mothers respectively had stunted children. Among the 5190 adults included in the analyses, 9% (6% female, 11% male) were underweight, and 22% (35% female, 13% male) were overweight/obese.\n\n\nCONCLUSION\nThe findings confirm an existing double burden of malnutrition in this setting, characterized by a high prevalence of undernutrition particularly stunting early in life, with high levels of overweight/obesity in adulthood, particularly among women. In the context of a rapid increase in urban population, particularly in urban poor settings, this calls for urgent action. Multisectoral action may work best given the complex nature of prevailing circumstances in urban poor settings. 
Further research is needed to understand the pathways to this coexistence, and to test feasibility and effectiveness of context-specific interventions to curb associated health risks.", "title": "" }, { "docid": "neg:1840033_2", "text": "BACKGROUND\nBreast cancer is by far the most frequent cancer of women. However the preventive measures for such problem are probably less than expected. The objectives of this study are to assess breast cancer knowledge and attitudes and factors associated with the practice of breast self examination (BSE) among female teachers of Saudi Arabia.\n\n\nPATIENTS AND METHODS\nWe conducted a cross-sectional survey of teachers working in female schools in Buraidah, Saudi Arabia using a self-administered questionnaire to investigate participants' knowledge about the risk factors of breast cancer, their attitudes and screening behaviors. A sample of 376 female teachers was randomly selected. Participants lived in urban areas, and had an average age of 34.7 ±5.4 years.\n\n\nRESULTS\nMore than half of the women showed a limited knowledge level. Among participants, the most frequently reported risk factors were non-breast feeding and the use of female sex hormones. The printed media was the most common source of knowledge. Logistic regression analysis revealed that high income was the most significant predictor of better knowledge level. Knowing a non-relative case with breast cancer and having a high knowledge level were identified as the significant predictors for practicing BSE.\n\n\nCONCLUSIONS\nThe study points to the insufficient knowledge of female teachers about breast cancer and identified the negative influence of low knowledge on the practice of BSE. Accordingly, relevant educational programs to improve the knowledge level of women regarding breast cancer are needed.", "title": "" }, { "docid": "neg:1840033_3", "text": "Attribute selection (AS) refers to the problem of selecting those input attributes or features that are most predictive of a given outcome; a problem encountered in many areas such as machine learning, pattern recognition and signal processing. Unlike other dimensionality reduction methods, attribute selectors preserve the original meaning of the attributes after reduction. This has found application in tasks that involve datasets containing huge numbers of attributes (in the order of tens of thousands) which, for some learning algorithms, might be impossible to process further. Recent examples include text processing and web content classification. AS techniques have also been applied to small and medium-sized datasets in order to locate the most informative attributes for later use. One of the many successful applications of rough set theory has been to this area. The rough set ideology of using only the supplied data and no other information has many benefits in AS, where most other methods require supplementary knowledge. However, the main limitation of rough set-based attribute selection in the literature is the restrictive requirement that all data is discrete. In classical rough set theory, it is not possible to consider real-valued or noisy data. This paper investigates a novel approach based on fuzzy-rough sets, fuzzy rough feature selection (FRFS), that addresses these problems and retains dataset semantics. FRFS is applied to two challenging domains where a feature reducing step is important; namely, web content classification and complex systems monitoring. 
The utility of this approach is demonstrated and is compared empirically with several dimensionality reducers. In the experimental studies, FRFS is shown to equal or improve classification accuracy when compared to the results from unreduced data. Classifiers that use a lower dimensional set of attributes which are retained by fuzzy-rough reduction outperform those that employ more attributes returned by the existing crisp rough reduction method. In addition, it is shown that FRFS is more powerful than the other AS techniques in the comparative study", "title": "" }, { "docid": "neg:1840033_4", "text": "Young men who have sex with men (YMSM) are increasingly using mobile smartphone applications (“apps”), such as Grindr, to meet sex partners. A probability sample of 195 Grindr-using YMSM in Southern California were administered an anonymous online survey to assess patterns of and motivations for Grindr use in order to inform development and tailoring of smartphone-based HIV prevention for YMSM. The number one reason for using Grindr (29 %) was to meet “hook ups.” Among those participants who used both Grindr and online dating sites, a statistically significantly greater percentage used online dating sites for “hook ups” (42 %) compared to Grindr (30 %). Seventy percent of YMSM expressed a willingness to participate in a smartphone app-based HIV prevention program. Development and testing of smartphone apps for HIV prevention delivery has the potential to engage YMSM in HIV prevention programming, which can be tailored based on use patterns and motivations for use. Los hombres que mantienen relaciones sexuales con hombres (YMSM por las siglas en inglés de Young Men Who Have Sex with Men) están utilizando más y más aplicaciones para teléfonos inteligentes (smartphones), como Grindr, para encontrar parejas sexuales. En el Sur de California, se administró de forma anónima un sondeo en internet a una muestra de probabilidad de 195 YMSM usuarios de Grindr, para evaluar los patrones y motivaciones del uso de Grindr, con el fin de utilizar esta información para el desarrollo y personalización de prevención del VIH entre YMSM con base en teléfonos inteligentes. La principal razón para utilizar Grindr (29 %) es para buscar encuentros sexuales casuales (hook-ups). Entre los participantes que utilizan tanto Grindr como otro sitios de citas online, un mayor porcentaje estadísticamente significativo utilizó los sitios de citas online para encuentros casuales sexuales (42 %) comparado con Grindr (30 %). Un setenta porciento de los YMSM expresó su disposición para participar en programas de prevención del VIH con base en teléfonos inteligentes. El desarrollo y evaluación de aplicaciones para teléfonos inteligentes para el suministro de prevención del VIH tiene el potencial de involucrar a los YMSM en la programación de la prevención del VIH, que puede ser adaptada según los patrones y motivaciones de uso.", "title": "" }, { "docid": "neg:1840033_5", "text": "Many difficult combinatorial optimization problems have been modeled as static problems. However, in practice, many problems are dynamic and changing, while some decisions have to be made before all the design data are known. For example, in the Dynamic Vehicle Routing Problem (DVRP), new customer orders appear over time, and new routes must be reconfigured while executing the current solution. Montemanni et al. 
[1] considered a DVRP as an extension to the standard vehicle routing problem (VRP) by decomposing a DVRP as a sequence of static VRPs, and then solving them with an ant colony system (ACS) algorithm. This paper presents a genetic algorithm (GA) methodology for providing solutions for the DVRP model employed in [1]. The effectiveness of the proposed GA is evaluated using a set of benchmarks found in the literature. Compared with a tabu search approach implemented herein and the aforementioned ACS, the proposed GA methodology performs better in minimizing travel costs.", "title": "" }, { "docid": "neg:1840033_6", "text": "The future of procedural content generation (PCG) lies beyond the dominant motivations of “replayability” and creating large environments for players to explore. This paper explores both the past and potential future for PCG, identifying five major lenses through which we can view PCG and its role in a game: data vs. process intensiveness, the interactive extent of the content, who has control over the generator, how many players interact with it, and the aesthetic purpose for PCG being used in the game. Using these lenses, the paper proposes several new research directions for PCG that require both deep technical research and innovative game design.", "title": "" }, { "docid": "neg:1840033_7", "text": "In satellite earth station antenna systems there is an increasing demand for complex single aperture, multi-function and multi-frequency band capable feed systems. In this work, a multi band feed system (6/12 GHz) is described which employs quadrature junctions (QJ) and supports transmit and receive functionality in the C and Ku bands respectively. This feed system is designed for a 16.4 m diameter shaped cassegrain antenna. It is a single aperture, 4 port system with transmit capability in circular polarization (CP) mode over the 6.625-6.69 GHz band and receive in the linear polarization (LP) mode over the 12.1-12.3 GHz band", "title": "" }, { "docid": "neg:1840033_8", "text": "The spread of false rumours during emergencies can jeopardise the well-being of citizens as they are monitoring the stream of news from social media to stay abreast of the latest updates. In this paper, we describe the methodology we have developed within the PHEME project for the collection and sampling of conversational threads, as well as the tool we have developed to facilitate the annotation of these threads so as to identify rumourous ones. We describe the annotation task conducted on threads collected during the 2014 Ferguson unrest and we present and analyse our findings. Our results show that we can collect effectively social media rumours and identify multiple rumours associated with a range of stories that would have been hard to identify by relying on existing techniques that need manual input of rumour-specific keywords.", "title": "" }, { "docid": "neg:1840033_9", "text": "As computer games become more complex and consumers demand more sophisticated computer controlled agents, developers are required to place a greater emphasis on the artificial intelligence aspects of their games. One source of sophisticated AI techniques is the artificial intelligence research community. This paper discusses recent efforts by our group at the University of Michigan Artificial Intelligence Lab to apply state of the art artificial intelligence techniques to computer games. 
Our experience developing intelligent air combat agents for DARPA training exercises, described in John Laird's lecture at the 1998 Computer Game Developer's Conference, suggested that many principles and techniques from the research community are applicable to games. A more recent project, called the Soar/Games project, has followed up on this by developing agents for computer games, including Quake II and Descent 3. The result of these two research efforts is a partially implemented design of an artificial intelligence engine for games based on well established AI systems and techniques.", "title": "" }, { "docid": "neg:1840033_10", "text": "In order to formulate a high-level understanding of driver behavior from massive naturalistic driving data, an effective approach is needed to automatically process or segregate data into low-level maneuvers. Besides traditional computer vision processing, this study addresses the lane-change detection problem by using vehicle dynamic signals (steering angle and vehicle speed) extracted from the CAN-bus, which is collected with 58 drivers around Dallas, TX area. After reviewing the literature, this study proposes a machine learning-based segmentation and classification algorithm, which is stratified into three stages. The first stage is preprocessing and prefiltering, which is intended to reduce noise and remove clear left and right turning events. Second, a spectral time-frequency analysis segmentation approach is employed to generalize all potential time-variant lane-change and lane-keeping candidates. The final stage compares two possible classification methods—1) dynamic time warping feature with k -nearest neighbor classifier and 2) hidden state sequence prediction with a combined hidden Markov model. The overall optimal classification accuracy can be obtained at 80.36% for lane-change-left and 83.22% for lane-change-right. The effectiveness and issues of failures are also discussed. With the availability of future large-scale naturalistic driving data, such as SHRP2, this proposed effective lane-change detection approach can further contribute to characterize both automatic route recognition as well as distracted driving state analysis.", "title": "" }, { "docid": "neg:1840033_11", "text": "Layered multicast is an efficient technique to deliver video to heterogeneous receivers over wired and wireless networks. In this paper, we consider such a multicast system in which the server adapts the bandwidth and forward-error correction code (FEC) of each layer so as to maximize the overall video quality, given the heterogeneous client characteristics in terms of their end-to-end bandwidth, packet drop rate over the wired network, and bit-error rate in the wireless hop. In terms of FECs, we also study the value of a gateway which “transcodes” packet-level FECs to byte-level FECs before forwarding packets from the wired network to the wireless clients. We present an analysis of the system, propose an efficient algorithm on FEC allocation for the base layer, and formulate a dynamic program with a fast and accurate approximation for the joint bandwidth and FEC allocation of the enhancement layers. 
Our results show that a transcoding gateway performs only slightly better than the nontranscoding one in terms of end-to-end loss rate, and our allocation is effective in terms of FEC parity and bandwidth served to each user.", "title": "" }, { "docid": "neg:1840033_12", "text": "This chapter describes the different steps of designing, building, simulating, and testing an intelligent flight control module for an increasingly popular unmanned aerial vehicle (UAV), known as a quadrotor. It presents an in-depth view of the modeling of the kinematics, dynamics, and control of such an interesting UAV. A quadrotor offers a challenging control problem due to its highly unstable nature. An effective control methodology is therefore needed for such a unique airborne vehicle. The chapter starts with a brief overview on the quadrotor's background and its applications, in light of its advantages. Comparisons with other UAVs are made to emphasize the versatile capabilities of this special design. For a better understanding of the vehicle's behavior, the quadrotor's kinematics and dynamics are then detailed. This yields the equations of motion, which are used later as a guideline for developing the proposed intelligent flight control scheme. In this chapter, fuzzy logic is adopted for building the flight controller of the quadrotor. It has been witnessed that fuzzy logic control offers several advantages over certain types of conventional control methods, specifically in dealing with highly nonlinear systems and modeling uncertainties. Two types of fuzzy inference engines are employed in the design of the flight controller, each of which is explained and evaluated. For testing the designed intelligent flight controller, a simulation environment was first developed. The simulations were made as realistic as possible by incorporating environmental disturbances such as wind gust and the ever-present sensor noise. The proposed controller was then tested on a real test-bed built specifically for this project. Both the simulator and the real quadrotor were later used for conducting different attitude stabilization experiments to evaluate the performance of the proposed control strategy. The controller's performance was also benchmarked against conventional control techniques such as input-output linearization, backstepping and sliding mode control strategies. Conclusions were then drawn based on the conducted experiments and their results.", "title": "" }, { "docid": "neg:1840033_13", "text": "Tall building developments have been rapidly increasing worldwide. This paper reviews the evolution of tall building’s structural systems and the technological driving force behind tall building developments. For the primary structural systems, a new classification – interior structures and exterior structures – is presented. While most representative structural systems for tall buildings are discussed, the emphasis in this review paper is on current trends such as outrigger systems and diagrid structures. Auxiliary damping systems controlling building motion are also discussed. Further, contemporary “out-of-the-box” architectural design trends, such as aerodynamic and twisted forms, which directly or indirectly affect the structural performance of tall buildings, are reviewed. 
Finally, the future of structural developments in tall buildings is envisioned briefly.", "title": "" }, { "docid": "neg:1840033_14", "text": "As creativity is increasingly recognised as a vital component of entrepreneurship, researchers and educators struggle to reform enterprise pedagogy. To help in this effort, we use a personality test and open-ended interviews to explore creativity between two groups of entrepreneurship masters’ students: one at a business school and one at an engineering school. The findings indicate that both groups had high creative potential, but that engineering students channelled this into practical and incremental efforts whereas the business students were more speculative and had a clearer market focus. The findings are drawn on to make some suggestions for entrepreneurship education.", "title": "" }, { "docid": "neg:1840033_15", "text": "The design of a planar dual-band wide-scan phased array is presented. The array uses novel dual-band comb-slot-loaded patch elements supporting two separate bands with a frequency ratio of 1.4:1. The antenna maintains consistent radiation patterns and incorporates a feeding configuration providing good bandwidths in both bands. The design has been experimentally validated with an X-band planar 9 × 9 array. The array supports wide-angle scanning up to a maximum of 60 ° and 50 ° at the low and high frequency bands respectively.", "title": "" }, { "docid": "neg:1840033_16", "text": "Pneumatic soft actuators produce flexion and meet the new needs of collaborative robotics, which is rapidly emerging in the industry landscape 4.0. The soft actuators are not only aimed at industrial progress, but their application ranges in the field of medicine and rehabilitation. Safety and reliability are the main requirements for coexistence and human-robot interaction; such requirements, together with the versatility and lightness, are the precious advantages that is offered by this new category of actuators. The objective is to develop an actuator with high compliance, low cost, high versatility and easy to produce, aimed at the realization of the fingers of a robotic hand that can faithfully reproduce the motion of a real hand. The proposed actuator is equipped with an intrinsic compliance thanks to the hyper-elastic silicone rubber used for its realization; the bending is allowed by the high compliance of the silicone and by a square-meshed gauze which contains the expansion and guides the movement through appropriate cuts in correspondence of the joints. A numerical model of the actuator is developed and an optimal configuration of the five fingers of the hand is achieved; finally, the index finger is built, on which the experimental validation tests are carried out.", "title": "" }, { "docid": "neg:1840033_17", "text": "In this paper we present BatchDB, an in-memory database engine designed for hybrid OLTP and OLAP workloads. BatchDB achieves good performance, provides a high level of data freshness, and minimizes load interaction between the transactional and analytical engines, thus enabling real time analysis over fresh data under tight SLAs for both OLTP and OLAP workloads.\n BatchDB relies on primary-secondary replication with dedicated replicas, each optimized for a particular workload type (OLTP, OLAP), and a light-weight propagation of transactional updates. 
The evaluation shows that for standard TPC-C and TPC-H benchmarks, BatchDB can achieve competitive performance to specialized engines for the corresponding transactional and analytical workloads, while providing a level of performance isolation and predictable runtime for hybrid workload mixes (OLTP+OLAP) otherwise unmet by existing solutions.", "title": "" }, { "docid": "neg:1840033_18", "text": "We present a neural semantic parser that translates natural language questions into executable SQL queries with two key ideas. First, we develop an encoder-decoder model, where the decoder uses a simple type system of SQL to constraint the output prediction, and propose a value-based loss when copying from input tokens. Second, we explore using the execution semantics of SQL to repair decoded programs that result in runtime error or return empty result. We propose two modelagnostics repair approaches, an ensemble model and a local program repair, and demonstrate their effectiveness over the original model. We evaluate our model on the WikiSQL dataset and show that our model achieves close to state-of-the-art results with lesser model complexity.", "title": "" }, { "docid": "neg:1840033_19", "text": "Enabled by mobile and wearable technology, personal health data delivers immense and increasing value for healthcare, benefiting both care providers and medical research. The secure and convenient sharing of personal health data is crucial to the improvement of the interaction and collaboration of the healthcare industry. Faced with the potential privacy issues and vulnerabilities existing in current personal health data storage and sharing systems, as well as the concept of self-sovereign data ownership, we propose an innovative user-centric health data sharing solution by utilizing a decentralized and permissioned blockchain to protect privacy using channel formation scheme and enhance the identity management using the membership service supported by the blockchain. A mobile application is deployed to collect health data from personal wearable devices, manual input, and medical devices, and synchronize data to the cloud for data sharing with healthcare providers and health insurance companies. To preserve the integrity of health data, within each record, a proof of integrity and validation is permanently retrievable from cloud database and is anchored to the blockchain network. Moreover, for scalable and performance considerations, we adopt a tree-based data processing and batching method to handle large data sets of personal health data collected and uploaded by the mobile platform.", "title": "" } ]
1840034
Operation of Compressor and Electronic Expansion Valve via Different Controllers
[ { "docid": "pos:1840034_0", "text": "It is demonstrated that neural networks can be used effectively for the identification and control of nonlinear dynamical systems. The emphasis is on models for both identification and control. Static and dynamic backpropagation methods for the adjustment of parameters are discussed. In the models that are introduced, multilayer and recurrent networks are interconnected in novel configurations, and hence there is a real need to study them in a unified fashion. Simulation results reveal that the identification and adaptive control schemes suggested are practically feasible. Basic concepts and definitions are introduced throughout, and theoretical questions that have to be addressed are also described.", "title": "" } ]
[ { "docid": "neg:1840034_0", "text": "Depth information plays an important role in a variety of applications, including manufacturing, medical imaging, computer vision, graphics, and virtual/augmented reality (VR/AR). Depth sensing has thus attracted sustained attention from both academia and industry communities for decades. Mainstream depth cameras can be divided into three categories: stereo, time of flight (ToF), and structured light. Stereo cameras require no active illumination and can be used outdoors, but they are fragile for homogeneous surfaces. Recently, off-the-shelf light field cameras have demonstrated improved depth estimation capability with a multiview stereo configuration. ToF cameras operate at a high frame rate and fit time-critical scenarios well, but they are susceptible to noise and limited to low resolution [3]. Structured light cameras can produce high-resolution, high-accuracy depth, provided that a number of patterns are sequentially used. Due to its promising and reliable performance, the structured light approach has been widely adopted for three-dimensional (3-D) scanning purposes. However, achieving real-time depth with structured light either requires highspeed (and thus expensive) hardware or sacrifices depth resolution and accuracy by using a single pattern instead.", "title": "" }, { "docid": "neg:1840034_1", "text": "Elephantopus scaber is an ethnomedicinal plant used by the Zhuang people in Southwest China to treat headaches, colds, diarrhea, hepatitis, and bronchitis. A new δ -truxinate derivative, ethyl, methyl 3,4,3',4'-tetrahydroxy- δ -truxinate (1), was isolated from the ethyl acetate extract of the entire plant, along with 4 known compounds. The antioxidant activity of these 5 compounds was determined by ABTS radical scavenging assay. Compound 1 was also tested for its cytotoxicity effect against HepG2 by MTT assay (IC50 = 60  μ M), and its potential anti-inflammatory, antibiotic, and antitumor bioactivities were predicted using target fishing method software.", "title": "" }, { "docid": "neg:1840034_2", "text": "The automatic identification system (AIS) tracks vessel movement by means of electronic exchange of navigation data between vessels, with onboard transceiver, terrestrial, and/or satellite base stations. The gathered data contain a wealth of information useful for maritime safety, security, and efficiency. Because of the close relationship between data and methodology in marine data mining and the importance of both of them in marine intelligence research, this paper surveys AIS data sources and relevant aspects of navigation in which such data are or could be exploited for safety of seafaring, namely traffic anomaly detection, route estimation, collision prediction, and path planning.", "title": "" }, { "docid": "neg:1840034_3", "text": "3 The Rotating Calipers Algorithm 3 3.1 Computing the Initial Rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 3.2 Updating the Rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2.1 Distinct Supporting Vertices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2.2 Duplicate Supporting Vertices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2.3 Multiple Polygon Edges Attain Minimum Angle . . . . . . . . . . . . . . . . . . . . . 8 3.2.4 The General Update Step . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
10", "title": "" }, { "docid": "neg:1840034_4", "text": "This paper compares two approaches in identifying outliers in multivariate datasets; Mahalanobis distance (MD) and robust distance (RD). MD has been known suffering from masking and swamping effects and RD is an approach that was developed to overcome problems that arise in MD. There are two purposes of this paper, first is to identify outliers using MD and RD and the second is to show that RD performs better than MD in identifying outliers. An observation is classified as an outlier if MD or RD is larger than a cut-off value. Outlier generating model is used to generate a set of data and MD and RD are computed from this set of data. The results showed that RD can identify outliers better than MD. However, in non-outliers data the performance for both approaches are similar. The results for RD also showed that RD can identify multivariate outliers much better when the number of dimension is large.", "title": "" }, { "docid": "neg:1840034_5", "text": "This paper presents an area-efficient ultra-low-power 32 kHz clock source for low power wireless communication systems using a temperature-compensated charge-pump-based digitally controlled oscillator (DCO). A highly efficient digital calibration method is proposed to achieve frequency stability over process variation and temperature drifts. This calibration method locks the DCO's output frequency to the reference clock of the wireless communication system during its active state. The introduced calibration scheme offers high jitter immunity and short locking periods overcoming frequency calibration errors for typical ultra-low-power DCO's. The circuit area of the proposed ultra-low-power clock source is 100μm × 140μm in a 130nm RF CMOS technology. In measurements the proposed ultra-low-power clock source achieves a frequency stability of 10 ppm/°C from 10 °C to 100 °C for temperature drifts of less than 1 °C/s with 80nW power consumption.", "title": "" }, { "docid": "neg:1840034_6", "text": "The explosive growth of the user-generated content on the Web has offered a rich data source for mining opinions. However, the large number of diverse review sources challenges the individual users and organizations on how to use the opinion information effectively. Therefore, automated opinion mining and summarization techniques have become increasingly important. Different from previous approaches that have mostly treated product feature and opinion extraction as two independent tasks, we merge them together in a unified process by using probabilistic models. Specifically, we treat the problem of product feature and opinion extraction as a sequence labeling task and adopt Conditional Random Fields models to accomplish it. As part of our work, we develop a computational approach to construct domain specific sentiment lexicon by combining semi-structured reviews with general sentiment lexicon, which helps to identify the sentiment orientations of opinions. Experimental results on two real world datasets show that the proposed method is effective.", "title": "" }, { "docid": "neg:1840034_7", "text": "For force control of robot and collision detection with humans, robots that has joint torque sensors have been developed. However, existing torque sensors cannot measure correct torque because of crosstalk error. In order to solve this problem, we proposed a novel torque sensor that can measure the pure torque without crosstalk. 
The hexagonal form of the proposed sensor with its truss structure increases the deformation and restoration of the sensor, and the Wheatstone bridge circuit of strain gauges removes the crosstalk error. Sensor performance is verified with FEM analysis.", "title": "" }, { "docid": "neg:1840034_8", "text": "The aim of this paper is to investigate the rules and constraints of code-switching (CS) in Hindi-English mixed language data. In this paper, we’ll discuss how we collected the mixed language corpus. This corpus is primarily made up of student interview speech. The speech was manually transcribed and verified by bilingual speakers of Hindi and English. The code-switching cases in the corpus are discussed and the reasons for code-switching are explained.", "title": "" }, { "docid": "neg:1840034_9", "text": "Electronic waste (e-waste) is one of the fastest-growing pollution problems worldwide given the presence of a variety of toxic substances which can contaminate the environment and threaten human health, if disposal protocols are not meticulously managed. This paper presents an overview of toxic substances present in e-waste, their potential environmental and human health impacts together with management strategies currently being used in certain countries. Several tools including life cycle assessment (LCA), material flow analysis (MFA), multi criteria analysis (MCA) and extended producer responsibility (EPR) have been developed to manage e-waste, especially in developed countries. The key to success in terms of e-waste management is to develop eco-design devices, properly collect e-waste, recover and recycle material by safe methods, dispose of e-waste by suitable techniques, forbid the transfer of used electronic devices to developing countries, and raise awareness of the impact of e-waste. No single tool is adequate but together they can complement each other to solve this issue. A national scheme such as EPR is a good policy for solving the growing e-waste problem.", "title": "" }, { "docid": "neg:1840034_10", "text": "This paper proposes a two-layer High Dynamic Range (HDR) coding scheme using a new tone mapping. Our tone mapping method transforms an HDR image into a Low Dynamic Range (LDR) image by using a base map that is a smoothed version of the HDR luminance. In our scheme, the HDR image can be reconstructed from the tone mapped LDR image. Our method makes use of this property to realize a two-layer HDR coding by encoding both the tone mapped LDR image and the base map. This paper validates the effectiveness of our approach through experiments.", "title": "" }, { "docid": "neg:1840034_11", "text": "Marine microalgae have been used for a long time as food for humans, such as Arthrospira (formerly Spirulina), and for animals in aquaculture. The biomass of these microalgae and the compounds they produce have been shown to possess several biological applications with numerous health benefits. The present review brings up to date the research on the biological activities and applications of polysaccharides, active biocompounds synthesized by marine unicellular algae, which are, most of the time, released into the surrounding medium (exo- or extracellular polysaccharides, EPS). It goes through the most studied activities of sulphated polysaccharides (sPS) or their derivatives, but also highlights lesser-known applications as hypolipidaemic or hypoglycaemic agents, or as biolubricant agents and drag-reducers.
Therefore, the great potentials of sPS from marine microalgae to be used as nutraceuticals, therapeutic agents, cosmetics, or in other areas, such as engineering, are approached in this review.", "title": "" }, { "docid": "neg:1840034_12", "text": "In this paper, we propose a segmentation method based on normalized cut and superpixels. The method relies on color and texture cues for fast computation and efficient use of memory. The method is used for food image segmentation as part of a mobile food record system we have developed for dietary assessment and management. The accurate estimate of nutrients relies on correctly labelled food items and sufficiently well-segmented regions. Our method achieves competitive results using the Berkeley Segmentation Dataset and outperforms some of the most popular techniques in a food image dataset.", "title": "" }, { "docid": "neg:1840034_13", "text": "Efficient algorithms for 3D character control in continuous control setting remain an open problem in spite of the remarkable recent advances in the field. We present a sampling-based model-predictive controller that comes in the form of a Monte Carlo tree search (MCTS). The tree search utilizes information from multiple sources including two machine learning models. This allows rapid development of complex skills such as 3D humanoid locomotion with less than a million simulation steps, in less than a minute of computing on a modest personal computer. We demonstrate locomotion of 3D characters with varying topologies under disturbances such as heavy projectile hits and abruptly changing target direction. In this paper we also present a new way to combine information from the various sources such that minimal amount of information is lost. We furthermore extend the neural network, involved in the algorithm, to represent stochastic policies. Our approach yields a robust control algorithm that is easy to use. While learning, the algorithm runs in near real-time, and after learning the sampling budget can be reduced for real-time operation.", "title": "" }, { "docid": "neg:1840034_14", "text": "In this paper the transcription and evaluation of the corpus DIMEx100 for Mexican Spanish is presented. First we describe the corpus and explain the linguistic and computational motivation for its design and collection process; then, the phonetic antecedents and the alphabet adopted for the transcription task are presented; the corpus has been transcribed at three different granularity levels, which are also specified in detail. The corpus statistics for each transcription level are also presented. A set of phonetic rules describing phonetic context observed empirically in spontaneous conversation is also validated with the transcription. The corpus has been used for the construction of acoustic models and a phonetic dictionary for the construction of a speech recognition system. Initial performance results suggest that the data can be used to train good quality acoustic models.", "title": "" }, { "docid": "neg:1840034_15", "text": "A 2.5 GHz fully integrated voltage controlled oscillator (VCO) for wireless application has been designed in a 0.35μm CMOS technology. A method for compensating the effect of temperature on the carrier oscillation frequency has been presented in this work. We compare also different VCOs topologies in order to select one with low phase noise, low supply sensitivity and large tuning frequency. Good results are obtained with a simple NMOS –Gm VCO. 
This proposed VCO has a wide operating range from 300 MHz, with good linearity between the output frequency and the control input voltage and a temperature coefficient of -5 ppm/°C over the 20°C to 120°C range. The phase noise is about -135.2 dBc/Hz at 1 MHz from the carrier, with a power consumption of 5 mW.", "title": "" }, { "docid": "neg:1840034_16", "text": "The modernization of the US electric power infrastructure is a national concern, especially in light of its aging, overstressed networks, shifts in social, energy and environmental policies, and new vulnerabilities. Our systems are required to be more adaptive and secure than ever before. Consumers are also demanding increased power quality and reliability of supply and delivery. As such, power industries, government, national laboratories and consortia have developed increased interest in what is now called the Smart Grid of the future. The paper outlines Smart Grid intelligent functions that advance interactions of agents such as telecommunication, control, and optimization to achieve adaptability, self-healing, efficiency and reliability of power systems. The author also presents a special case for the development of Dynamic Stochastic Optimal Power Flow (DSOPF) technology as a tool needed in Smart Grid design. The integration of DSOPF to achieve the design goals with advanced DMS capabilities is discussed herein. This reference paper also outlines the research focus for developing the next generation of advanced tools for efficient and flexible power system operation and control.", "title": "" }, { "docid": "neg:1840034_17", "text": "Chinese characters have a huge set of character categories, more than 20,000, and the number is still increasing as more and more novel characters continue to be created. However, this enormous set of characters can be decomposed into a compact set of about 500 fundamental and structural radicals. This paper introduces a novel radical analysis network (RAN) to recognize printed Chinese characters by identifying radicals and analyzing two-dimensional spatial structures among them. The proposed RAN first extracts visual features from the input by employing convolutional neural networks as an encoder. Then a decoder based on recurrent neural networks is employed, aiming at generating captions of Chinese characters by detecting radicals and two-dimensional structures through a spatial attention mechanism. The manner of treating a Chinese character as a composition of radicals rather than a single character class largely reduces the size of the vocabulary and enables RAN to recognize unseen Chinese character classes, namely zero-shot learning.", "title": "" }, { "docid": "neg:1840034_18", "text": "Information visualization is a very important tool in BigData analytics. BigData, structured and unstructured data which contains images, videos, texts, audio and other forms of data, collected from multiple datasets, is too big, too complex and moves too fast to analyse using traditional methods. This has given rise to two issues: 1) how to reduce multidimensional data without the loss of any data patterns for multiple datasets, and 2) how to visualize BigData patterns for analysis. In this paper, we have classified the BigData attributes into `5Ws' data dimensions, and then established a `5Ws' density approach that represents the characteristics of data flow patterns.
We use parallel coordinates to display the `5Ws' sending and receiving densities which provide more analytic features for BigData analysis. The experiment shows that this new model with parallel coordinate visualization can be efficiently used for BigData analysis and visualization.", "title": "" }, { "docid": "neg:1840034_19", "text": "Deep Learning has recently been introduced as a new alternative to perform Side-Channel analysis [1]. Until now, studies have been focused on applying Deep Learning techniques to perform Profiled SideChannel attacks where an attacker has a full control of a profiling device and is able to collect a large amount of traces for different key values in order to characterize the device leakage prior to the attack. In this paper we introduce a new method to apply Deep Learning techniques in a Non-Profiled context, where an attacker can only collect a limited number of side-channel traces for a fixed unknown key value from a closed device. We show that by combining key guesses with observations of Deep Learning metrics, it is possible to recover information about the secret key. The main interest of this method, is that it is possible to use the power of Deep Learning and Neural Networks in a Non-Profiled scenario. We show that it is possible to exploit the translation-invariance property of Convolutional Neural Networks [2] against de-synchronized traces and use Data Augmentation techniques also during Non-Profiled side-channel attacks. Additionally, the present work shows that in some conditions, this method can outperform classic Non-Profiled attacks as Correlation Power Analysis. We also highlight that it is possible to target masked implementations without leakages combination pre-preprocessing and with less assumptions than classic high-order attacks. To illustrate these properties, we present a series of experiments performed on simulated data and real traces collected from the ChipWhisperer board and from the ASCAD database [3]. The results of our experiments demonstrate the interests of this new method and show that this attack can be performed in practice.", "title": "" } ]
1840035
Smart Cars on Smart Roads: Problems of Control
[ { "docid": "pos:1840035_0", "text": "The accomplishments to date on the development of automatic vehicle control (AVC) technology in the Program on Advanced Technology for the Highway (PATH) at the University of California, Berkeley, are summarized. The basic prqfiiples and assumptions underlying the PATH work are identified, ‘followed by explanations of the work on automating vehicle lateral (steering) and longitudinal (spacing and speed) control. For both lateral and longitudinal control, the modeling of plant dynamics is described first, followed by development of the additional subsystems needed (communications, reference/sensor systems) and the derivation of the control laws. Plans for testing on vehicles in both near and long term are then discussed.", "title": "" } ]
[ { "docid": "neg:1840035_0", "text": "Actuation is essential for artificial machines to interact with their surrounding environment and to accomplish the functions for which they are designed. Over the past few decades, there has been considerable progress in developing new actuation technologies. However, controlled motion still represents a considerable bottleneck for many applications and hampers the development of advanced robots, especially at small length scales. Nature has solved this problem using molecular motors that, through living cells, are assembled into multiscale ensembles with integrated control systems. These systems can scale force production from piconewtons up to kilonewtons. By leveraging the performance of living cells and tissues and directly interfacing them with artificial components, it should be possible to exploit the intricacy and metabolic efficiency of biological actuation within artificial machines. We provide a survey of important advances in this biohybrid actuation paradigm.", "title": "" }, { "docid": "neg:1840035_1", "text": "High power multi-level converters are deemed as the mainstay power conversion technology for renewable energy systems including the PV farm, energy storage system and electrical vehicle charge station. This paper is focused on the modeling and design of coupled and integrated magnetics in three-level DC/DC converter with multi-phase interleaved structure. The interleaved phase legs offer the benefit of output current ripple reduction, while inversed coupled inductors can suppress the circulating current between phase legs. To further reduce the magnetic volume, the four inductors in two-phase three-level DC/DC converter are integrated into one common structure, incorporating the negative coupling effects. Because of the nonlinearity of the inductor coupling, the equivalent circuit model is developed for the proposed interleaving structure to facilitate the design optimization of the integrated system. The model identifies the existence of multiple equivalent inductances during one switching cycle. A combination of them determines the inductor current ripple and dynamics of the system. By virtue of inverse coupling and means of controlling the coupling coefficients, one can minimize the current ripple and the unwanted circulating current. The fabricated prototype of the integrated coupled inductors is tested with a two-phase three-level DC/DC converter hardware, showing its good current ripple reduction performance as designed.", "title": "" }, { "docid": "neg:1840035_2", "text": "A 61-year-old female with long-standing constipation presented with increasing abdominal distention, pain, nausea and weight loss. She had been previously treated with intermittent fiber supplements and osmotic laxatives for chronic constipation. She did not use medications known to cause delayed bowel transit. Examination revealed a distended abdomen, hard stool in the rectum, and audible heart sounds throughout the abdomen. A CT scan showed severe colonic distention from stool (Fig. 1). She had no mechanical, infectious, metabolic, or endocrine-related etiology for constipation. After failing conservative management including laxative suppositories, enemas, manual disimpaction, methylnaltrexone and neostigmine, the patient underwent a colectomy with Hartmann pouch and terminal ileostomy. The removed colon measured 25.5 cm in largest diameter and weighed over 15 kg (Fig. 2). 
The histopathological examination demonstrated no neuronal degeneration, apoptosis or aganglionosis to suggest Hirschsprung's disease or another intrinsic neuro-muscular disorder. Idiopathic megacolon is a relatively uncommon condition usually associated with slow-transit constipation. Although medical therapy is frequently ineffective, rectal laxatives, gentle enemas, and manual disimpaction can be attempted. Oral osmotic or secretory laxatives as well as unprepped lower endoscopy are relative contraindications as they may precipitate a perforation. Surgical therapy is often required as most cases are refractory to medical therapy.", "title": "" }, { "docid": "neg:1840035_3", "text": "For embedded high resolution successive approximation ADCs, it is necessary to determine the performance limitation of the CMOS process used for the design. This paper presents a modelling technique for major limitations, i.e. capacitor mismatch and non-linearity effects. The model is based on Monte Carlo simulations applied to an analytical description of the ADC. Additional effects like charge injection and parasitic capacitance are included. The analytical basis covers different architectures with a fully binary weighted or series-split capacitor array. When comparing our analysis and measurement results to several conventional approaches, a significantly more realistic estimation of the attainable resolution is achieved. The presented results provide guidance in choosing process and circuit structure for the design of SAR ADCs. The model also enables reliable capacitor sizing early in the design process, i.e. well before actual layout implementation.", "title": "" }, { "docid": "neg:1840035_4", "text": "Tourism is an important part of national economy. On the other hand it can also be a source of some negative externalities. These are mainly environmental externalities, resulting in increased pollution, aesthetic or architectural damages. High concentration of visitors may also lead to increased crime, or aggressiveness. These may have negative effects on quality of life of residents and negative experience of visitors. The paper deals with the influence of tourism on destination environment. It highlights the necessity of sustainable forms of tourism and activities to prevent negative implication of tourism, such as education activities and tourism monitoring. Key-words: Tourism, Mass Tourism, Development, Sustainability, Tourism Impact, Monitoring.", "title": "" }, { "docid": "neg:1840035_5", "text": "Analytics is a field of research and practice that aims to reveal new patterns of information through the collection of large sets of data held in previously distinct sources. Growing interest in data and analytics in education, teaching, and learning raises the priority for increased, high-quality research into the models, methods, technologies, and impact of analytics. The challenges of applying analytics range from academic and ethical reliability to control over data. The other challenge is that the educational landscape is extremely turbulent at present, and a key challenge is the appropriate collection, protection and use of large data sets.
This paper brings out various challenges pertaining to the domain and offers a big data model for the higher education system.", "title": "" }, { "docid": "neg:1840035_6", "text": "The aim of this study was to present a method for endodontic management of a maxillary first molar with unusual C-shaped morphology of the buccal root verified by cone-beam computed tomography (CBCT) images. This rare anatomical variation was confirmed using CBCT, and nonsurgical endodontic treatment was performed by meticulous evaluation of the pulpal floor. Posttreatment image revealed 3 independent canals in the buccal root obturated efficiently to the accepted lengths in all 3 canals. Our study describes a unique C-shaped variation of the root canal system in a maxillary first molar, involving the 3 buccal canals. In addition, our study highlights the usefulness of CBCT imaging for accurate diagnosis and management of this unusual canal morphology.", "title": "" }, { "docid": "neg:1840035_7", "text": "The Eyelink Toolbox software supports the measurement of eye movements. The toolbox provides an interface between a high-level interpreted language (MATLAB), a visual display programming toolbox (Psychophysics Toolbox), and a video-based eyetracker (Eyelink). The Eyelink Toolbox enables experimenters to measure eye movements while simultaneously executing the stimulus presentation routines provided by the Psychophysics Toolbox. Example programs are included with the toolbox distribution. Information on the Eyelink Toolbox can be found at http://psychtoolbox.org/.", "title": "" }, { "docid": "neg:1840035_8", "text": "Low rank and sparse representation based methods, which make few specific assumptions about the background, have recently attracted wide attention in background modeling. With these methods, moving objects in the scene are modeled as pixel-wised sparse outliers. However, in many practical scenarios, the distributions of these moving parts are not truly pixel-wised sparse but structurally sparse. Meanwhile a robust analysis mechanism is required to handle background regions or foreground movements with varying scales. Based on these two observations, we first introduce a class of structured sparsity-inducing norms to model moving objects in videos. In our approach, we regard the observed sequence as being constituted of two terms, a low-rank matrix (background) and a structured sparse outlier matrix (foreground). Next, in virtue of adaptive parameters for dynamic videos, we propose a saliency measurement to dynamically estimate the support of the foreground. Experiments on challenging well known data sets demonstrate that the proposed approach outperforms the state-of-the-art methods and works effectively on a wide range of complex videos.", "title": "" }, { "docid": "neg:1840035_9", "text": "While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses.
The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.", "title": "" }, { "docid": "neg:1840035_10", "text": "Efficient use of high speed hardware requires operating system components be customized to the application workload. Our general purpose operating systems are ill-suited for this task. We present EbbRT, a framework for constructing per-application library operating systems for cloud applications. The primary objective of EbbRT is to enable highperformance in a tractable and maintainable fashion. This paper describes the design and implementation of EbbRT, and evaluates its ability to improve the performance of common cloud applications. The evaluation of the EbbRT prototype demonstrates memcached, run within a VM, can outperform memcached run on an unvirtualized Linux. The prototype evaluation also demonstrates an 14% performance improvement of a V8 JavaScript engine benchmark, and a node.js webserver that achieves a 50% reduction in 99th percentile latency compared to it run on Linux.", "title": "" }, { "docid": "neg:1840035_11", "text": "Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data. Our method infers an unknown user's location by examining their friend's locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user's ego network can be used as a per-user accuracy measure which is effective at removing outlying errors. Leave-many-out evaluation shows that our method is able to infer location for 101, 846, 236 Twitter users at a median error of 6.38 km, allowing us to geotag over 80% of public tweets.", "title": "" }, { "docid": "neg:1840035_12", "text": "As the popularity of content sharing websites has increased, they have become targets for spam, phishing and the distribution of malware. On YouTube, the facility for users to post comments can be used by spam campaigns to direct unsuspecting users to malicious third-party websites. In this paper, we demonstrate how such campaigns can be tracked over time using network motif profiling, i.e. by tracking counts of indicative network motifs. 
By considering all motifs of up to five nodes, we identify discriminating motifs that reveal two distinctly different spam campaign strategies, and present an evaluation that tracks two corresponding active campaigns.", "title": "" }, { "docid": "neg:1840035_13", "text": "We can determine whether two texts are paraphrases of each other by finding out the extent to which the texts are similar. The typical lexical matching technique works by matching the sequence of tokens between the texts to recognize paraphrases, and fails when different words are used to convey the same meaning. We can improve this simple method by combining lexical with syntactic or semantic representations of the input texts. The present work makes use of syntactical information in the texts and computes the similarity between them using word similarity measures based on WordNet and lexical databases. The texts are converted into a unified semantic structural model through which the semantic similarity of the texts is obtained. An approach is presented to assess the semantic similarity and the results of applying this approach is evaluated using the Microsoft Research Paraphrase (MSRP) Corpus.", "title": "" }, { "docid": "neg:1840035_14", "text": "Dynamic spectrum access is the key to solving worldwide spectrum shortage. The open wireless medium subjects DSA systems to unauthorized spectrum use by illegitimate users. This paper presents SpecGuard, the first crowdsourced spectrum misuse detection framework for DSA systems. In SpecGuard, a transmitter is required to embed a spectrum permit into its physical-layer signals, which can be decoded and verified by ubiquitous mobile users. We propose three novel schemes for embedding and detecting a spectrum permit at the physical layer. Detailed theoretical analyses, MATLAB simulations, and USRP experiments confirm that our schemes can achieve correct, low-intrusive, and fast spectrum misuse detection.", "title": "" }, { "docid": "neg:1840035_15", "text": "The upcoming Internet of Things will introduce large sensor networks including devices with very different propagation characteristics and power consumption demands. 5G aims to fulfill these requirements by demanding a battery lifetime of at least 10 years. To integrate smart devices that are located in challenging propagation conditions, IoT communication technologies furthermore have to support very deep coverage. NB-IoT and eMTC are designed to meet these requirements and thus paving the way to 5G. With the power saving options extended Discontinuous Reception and Power Saving Mode as well as the usage of large numbers of repetitions, NB-IoT and eMTC introduce new techniques to meet the 5G IoT requirements. In this paper, the performance of NB-IoT and eMTC is evaluated. Therefore, data rate, power consumption, latency and spectral efficiency are examined in different coverage conditions. Although both technologies use the same power saving techniques as well as repetitions to extend the communication range, the analysis reveals a different performance in the context of data size, rate and coupling loss. While eMTC comes with a 4% better battery lifetime than NB-IoT when considering 144 dB coupling loss, NB-IoT battery lifetime raises to 18% better performance in 164 dB coupling loss scenarios. The overall analysis shows that in coverage areas with a coupling loss of 155 dB or less, eMTC performs better, but requires much more bandwidth. 
Taking the spectral efficiency into account, NB-IoT is in all evaluated scenarios the better choice and more suitable for future networks with massive numbers of devices.", "title": "" }, { "docid": "neg:1840035_16", "text": "We consider supervised learning in the presence of very many irrelevant features, and study two different regularization methods for preventing overfitting. Focusing on logistic regression, we show that using L1 regularization of the parameters, the sample complexity (i.e., the number of training examples required to learn \"well,\") grows only logarithmically in the number of irrelevant features. This logarithmic rate matches the best known bounds for feature selection, and indicates that L1 regularized logistic regression can be effective even if there are exponentially many irrelevant features as there are training examples. We also give a lower-bound showing that any rotationally invariant algorithm---including logistic regression with L2 regularization, SVMs, and neural networks trained by backpropagation---has a worst case sample complexity that grows at least linearly in the number of irrelevant features.", "title": "" }, { "docid": "neg:1840035_17", "text": "A blade element momentum theory propeller model is coupled with a commercial RANS solver. This allows the fully appended self propulsion of the autonomous underwater vehicle Autosub 3 to be considered. The quasi-steady propeller model has been developed to allow for circumferential and radial variations in axial and tangential inflow. The non-uniform inflow is due to control surface deflections and the bow-down pitch of the vehicle in cruise condition. The influence of propeller blade Reynolds number is included through the use of appropriate sectional lift and drag coefficients. Simulations have been performed over the vehicles operational speed range (Re = 6.8× 10 to 13.5× 10). A workstation is used for the calculations with mesh sizes up to 2x10 elements. Grid uncertainty is calculated to be 3.07% for the wake fraction. The initial comparisons with in service data show that the coupled RANS-BEMT simulation under predicts the drag of the vehicle and consequently the required propeller rpm. However, when an appropriate correction is made for the effect on resistance of various protruding sensors the predicted propulsor rpm matches well with that of in-service rpm measurements for vessel speeds (1m/s 2m/s). The developed analysis captures the important influence of the propeller blade and hull Reynolds number on overall system efficiency.", "title": "" }, { "docid": "neg:1840035_18", "text": "Graph embedding is an important branch in Data Mining and Machine Learning, and most of recent studies are focused on preserving the hierarchical structure with less dimensions. One of such models, called Poincare Embedding, achieves the goal by using Poincare Ball model to embed hierarchical structure in hyperbolic space instead of traditionally used Euclidean space. However, Poincare Embedding suffers from two major problems: (1) performance drops as depth of nodes increases since nodes tend to lay at the boundary; (2) the embedding model is constrained with pre-constructed structures and cannot be easily extended. In this paper, we first raise several techniques to overcome the problem of low performance for deep nodes, such as using partial structure, adding regularization, and exploring sibling relations in the structure. 
Then we also extend the Poincare Embedding model by extracting information from text corpus and propose a joint embedding model with Poincare Embedding and Word2vec.", "title": "" }, { "docid": "neg:1840035_19", "text": "Observation-Level Interaction (OLI) is a sensemaking technique relying upon the interactive semantic exploration of data. By manipulating data items within a visualization, users provide feedback to an underlying mathematical model that projects multidimensional data into a meaningful two-dimensional representation. In this work, we propose, implement, and evaluate an OLI model which explicitly defines clusters within this data projection. These clusters provide targets against which data values can be manipulated. The result is a cooperative framework in which the layout of the data affects the clusters, while user-driven interactions with the clusters affect the layout of the data points. Additionally, this model addresses the OLI \"with respect to what\" problem by providing a clear set of clusters against which interaction targets are judged and computed.", "title": "" } ]
1840036
Cooperative Co-evolution for large scale optimization through more frequent random grouping
[ { "docid": "pos:1840036_0", "text": "This report proposes 15 large-scale benchmark problems as an extension to the existing CEC’2010 large-scale global optimization benchmark suite. The aim is to better represent a wider range of realworld large-scale optimization problems and provide convenience and flexibility for comparing various evolutionary algorithms specifically designed for large-scale global optimization. Introducing imbalance between the contribution of various subcomponents, subcomponents with nonuniform sizes, and conforming and conflicting overlapping functions are among the major new features proposed in this report.", "title": "" }, { "docid": "pos:1840036_1", "text": "Evolutionary algorithms (EAs) have been applied with success to many numerical and combinatorial optimization problems in recent years. However, they often lose their effectiveness and advantages when applied to large and complex problems, e.g., those with high dimensions. Although cooperative coevolution has been proposed as a promising framework for tackling high-dimensional optimization problems, only limited studies were reported by decomposing a high-dimensional problem into single variables (dimensions). Such methods of decomposition often failed to solve nonseparable problems, for which tight interactions exist among different decision variables. In this paper, we propose a new cooperative coevolution framework that is capable of optimizing large scale nonseparable problems. A random grouping scheme and adaptive weighting are introduced in problem decomposition and coevolution. Instead of conventional evolutionary algorithms, a novel differential evolution algorithm is adopted. Theoretical analysis is presented in this paper to show why and how the new framework can be effective for optimizing large nonseparable problems. Extensive computational studies are also carried out to evaluate the performance of newly proposed algorithm on a large number of benchmark functions with up to 1000 dimensions. The results show clearly that our framework and algorithm are effective as well as efficient for large scale evolutionary optimisation problems. We are unaware of any other evolutionary algorithms that can optimize 1000-dimension nonseparable problems as effectively and efficiently as we have done.", "title": "" } ]
[ { "docid": "neg:1840036_0", "text": "We present a vector space–based model for selectional preferences that predicts plausibility scores for argument headwords. It does not require any lexical resources (such as WordNet). It can be trained either on one corpus with syntactic annotation, or on a combination of a small semantically annotated primary corpus and a large, syntactically analyzed generalization corpus. Our model is able to predict inverse selectional preferences, that is, plausibility scores for predicates given argument heads. We evaluate our model on one NLP task (pseudo-disambiguation) and one cognitive task (prediction of human plausibility judgments), gauging the influence of different parameters and comparing our model against other model classes. We obtain consistent benefits from using the disambiguation and semantic role information provided by a semantically tagged primary corpus. As for parameters, we identify settings that yield good performance across a range of experimental conditions. However, frequency remains a major influence of prediction quality, and we also identify more robust parameter settings suitable for applications with many infrequent items.", "title": "" }, { "docid": "neg:1840036_1", "text": "Alzheimer’s disease (AD) transgenic mice have been used as a standard AD model for basic mechanistic studies and drug discovery. These mouse models showed symbolic AD pathologies including β-amyloid (Aβ) plaques, gliosis and memory deficits but failed to fully recapitulate AD pathogenic cascades including robust phospho tau (p-tau) accumulation, clear neurofibrillary tangles (NFTs) and neurodegeneration, solely driven by familial AD (FAD) mutation(s). Recent advances in human stem cell and three-dimensional (3D) culture technologies made it possible to generate novel 3D neural cell culture models that recapitulate AD pathologies including robust Aβ deposition and Aβ-driven NFT-like tau pathology. These new 3D human cell culture models of AD hold a promise for a novel platform that can be used for mechanism studies in human brain-like environment and high-throughput drug screening (HTS). In this review, we will summarize the current progress in recapitulating AD pathogenic cascades in human neural cell culture models using AD patient-derived induced pluripotent stem cells (iPSCs) or genetically modified human stem cell lines. We will also explain how new 3D culture technologies were applied to accelerate Aβ and p-tau pathologies in human neural cell cultures, as compared the standard two-dimensional (2D) culture conditions. Finally, we will discuss a potential impact of the human 3D human neural cell culture models on the AD drug-development process. These revolutionary 3D culture models of AD will contribute to accelerate the discovery of novel AD drugs.", "title": "" }, { "docid": "neg:1840036_2", "text": "This study looked at the individual difference correlates of self-rated character strengths and virtues. In all, 280 adults completed a short 24-item measure of strengths, a short personality measure of the Big Five traits and a fluid intelligence test. The Cronbach alphas for the six higher order virtues were satisfactory but factor analysis did not confirm the a priori classification yielding five interpretable factors. These factors correlated significantly with personality and intelligence. Intelligence and neuroticism were correlated negatively with all the virtues, while extraversion and conscientiousness were positively correlated with all virtues. 
Structural equation modeling showed personality and religiousness moderated the effect of intelligence on the virtues. Extraversion and openness were the largest correlates of the virtues. The use of shortened measured in research is discussed.", "title": "" }, { "docid": "neg:1840036_3", "text": "The Great Gatsby Curve, the observation that for OECD countries, greater crosssectional income inequality is associated with lower mobility, has become a prominent part of scholarly and policy discussions because of its implications for the relationship between inequality of outcomes and inequality of opportunities. We explore this relationship by focusing on evidence and interpretation of an intertemporal Gatsby Curve for the United States. We consider inequality/mobility relationships that are derived from nonlinearities in the transmission process of income from parents to children and the relationship that is derived from the effects of inequality of socioeconomic segregation, which then affects children. Empirical evidence for the mechanisms we identify is strong. We find modest reduced form evidence and structural evidence of an intertemporal Gatsby Curve for the US as mediated by social influences. Steven N. Durlauf Ananth Seshadri Department of Economics Department of Economics University of Wisconsin University of Wisconsin 1180 Observatory Drive 1180 Observatory Drive Madison WI, 53706 Madison WI, 53706 durlauf@gmail.com aseshadr@ssc.wisc.edu", "title": "" }, { "docid": "neg:1840036_4", "text": "Reinforcement learning is considered as a promising direction for driving policy learning. However, training autonomous driving vehicle with reinforcement learning in real environment involves non-affordable trial-and-error. It is more desirable to first train in a virtual environment and then transfer to the real environment. In this paper, we propose a novel realistic translation network to make model trained in virtual environment be workable in real world. The proposed network can convert non-realistic virtual image input into a realistic one with similar scene structure. Given realistic frames as input, driving policy trained by reinforcement learning can nicely adapt to real world driving. Experiments show that our proposed virtual to real (VR) reinforcement learning (RL) works pretty well. To our knowledge, this is the first successful case of driving policy trained by reinforcement learning that can adapt to real world driving data.", "title": "" }, { "docid": "neg:1840036_5", "text": "This project explores a novel experimental setup towards building spoken, multi-modally rich, and human-like multiparty tutoring agent. A setup is developed and a corpus is collected that targets the development of a dialogue system platform to explore verbal and nonverbal tutoring strategies in multiparty spoken interactions with embodied agents. The dialogue task is centered on two participants involved in a dialogue aiming to solve a card-ordering game. With the participants sits a tutor that helps the participants perform the task and organizes and balances their interaction. 
Different multimodal signals captured and auto-synchronized by different audio-visual capture technologies were coupled with manual annotations to build a situated model of the interaction based on the participants personalities, their temporally-changing state of attention, their conversational engagement and verbal dominance, and the way these are correlated with the verbal and visual feedback, turn-management, and conversation regulatory actions generated by the tutor. At the end of this chapter we discuss the potential areas of research and developments this work opens and some of the challenges that lie in the road ahead.", "title": "" }, { "docid": "neg:1840036_6", "text": "Service-dominant logic continues its evolution, facilitated by an active community of scholars throughout the world. Along its evolutionary path, there has been increased recognition of the need for a crisper and more precise delineation of the foundational premises and specification of the axioms of S-D logic. It also has become apparent that a limitation of the current foundational premises/axioms is the absence of a clearly articulated specification of the mechanisms of (often massive-scale) coordination and cooperation involved in the cocreation of value through markets and, more broadly, in society. This is especially important because markets are even more about cooperation than about the competition that is more frequently discussed. To alleviate this limitation and facilitate a better understanding of cooperation (and coordination), an eleventh foundational premise (fifth axiom) is introduced, focusing on the role of institutions and institutional arrangements in systems of value cocreation: service ecosystems. Literature on institutions across multiple social disciplines, including marketing, is briefly reviewed and offered as further support for this fifth axiom.", "title": "" }, { "docid": "neg:1840036_7", "text": "As a commentary to Juhani Iivari’s insightful essay, I briefly analyze design science research as an embodiment of three closely related cycles of activities. The Relevance Cycle inputs requirements from the contextual environment into the research and introduces the research artifacts into environmental field testing. The Rigor Cycle provides grounding theories and methods along with domain experience and expertise from the foundations knowledge base into the research and adds the new knowledge generated by the research to the growing knowledge base. The central Design Cycle supports a tighter loop of research activity for the construction and evaluation of design artifacts and processes. The recognition of these three cycles in a research project clearly positions and differentiates design science from other research paradigms. The commentary concludes with a claim to the pragmatic nature of design science.", "title": "" }, { "docid": "neg:1840036_8", "text": "The efficiency of two biomass pretreatment technologies, dilute acid hydrolysis and dissolution in an ionic liquid, are compared in terms of delignification, saccharification efficiency and saccharide yields with switchgrass serving as a model bioenergy crop. When subject to ionic liquid pretreatment (dissolution and precipitation of cellulose by anti-solvent) switchgrass exhibited reduced cellulose crystallinity, increased surface area, and decreased lignin content compared to dilute acid pretreatment. 
Pretreated material was characterized by powder X-ray diffraction, scanning electron microscopy, Fourier transform infrared spectroscopy, Raman spectroscopy and chemistry methods. Ionic liquid pretreatment enabled a significant enhancement in the rate of enzyme hydrolysis of the cellulose component of switchgrass, with a rate increase of 16.7-fold, and a glucan yield of 96.0% obtained in 24h. These results indicate that ionic liquid pretreatment may offer unique advantages when compared to the dilute acid pretreatment process for switchgrass. However, the cost of the ionic liquid process must also be taken into consideration.", "title": "" }, { "docid": "neg:1840036_9", "text": "The paper presents a literature review of the main concepts of hotel revenue management (RM) and current state-of-the-art of its theoretical research. The article emphasises on the different directions of hotel RM research and is structured around the elements of the hotel RM system and the stages of RM process. The elements of the hotel RM system discussed in the paper include hotel RM centres (room division, F&B, function rooms, spa & fitness facilities, golf courses, casino and gambling facilities, and other additional services), data and information, the pricing (price discrimination, dynamic pricing, lowest price guarantee) and non-pricing (overbookings, length of stay control, room availability guarantee) RM tools, the RM software, and the RM team. The stages of RM process have been identified as goal setting, collection of data and information, data analysis, forecasting, decision making, implementation and monitoring. Additionally, special attention is paid to ethical considerations in RM practice, the connections between RM and customer relationship management, and the legal aspect of RM. Finally, the article outlines future research perspectives and discloses potential evolution of RM in future.", "title": "" }, { "docid": "neg:1840036_10", "text": "In this paper we describe an application of our approach to temporal text mining in Competitive Intelligence for the biotechnology and pharmaceutical industry. The main objective is to identify changes and trends of associations among entities of interest that appear in text over time. Text Mining (TM) exploits information contained in textual data in various ways, including the type of analyses that are typically performed in Data Mining [17]. Information Extraction (IE) facilitates the semi-automatic creation of metadata repositories from text. Temporal Text mining combines Information Extraction and Data Mining techniques upon textual repositories and incorporates time and ontologies' issues. It consists of three main phases; the Information Extraction phase, the ontology driven generalisation of templates and the discovery of associations over time. Treatment of the temporal dimension is essential to our approach since it influences both the annotation part (IE) of the system as well as the mining part.", "title": "" }, { "docid": "neg:1840036_11", "text": "We propose a novel approach for using unsupervised boosting to create an ensemble of generative models, where models are trained in sequence to correct earlier mistakes. Our meta-algorithmic framework can leverage any existing base learner that permits likelihood evaluation, including recent deep expressive models. Further, our approach allows the ensemble to include discriminative models trained to distinguish real data from model-generated data.
We show theoretical conditions under which incorporating a new model in the ensemble will improve the fit and empirically demonstrate the effectiveness of our black-box boosting algorithms on density estimation, classification, and sample generation on benchmark datasets for a wide range of generative models.", "title": "" }, { "docid": "neg:1840036_12", "text": "As retrieval systems become more complex, learning to rank approaches are being developed to automatically tune their parameters. Using online learning to rank, retrieval systems can learn directly from implicit feedback inferred from user interactions. In such an online setting, algorithms must obtain feedback for effective learning while simultaneously utilizing what has already been learned to produce high quality results. We formulate this challenge as an exploration–exploitation dilemma and propose two methods for addressing it. By adding mechanisms for balancing exploration and exploitation during learning, each method extends a state-of-the-art learning to rank method, one based on listwise learning and the other on pairwise learning. Using a recently developed simulation framework that allows assessment of online performance, we empirically evaluate both methods. Our results show that balancing exploration and exploitation can substantially and significantly improve the online retrieval performance of both listwise and pairwise approaches. In addition, the results demonstrate that such a balance affects the two approaches in different ways, especially when user feedback is noisy, yielding new insights relevant to making online learning to rank effective in practice.", "title": "" }, { "docid": "neg:1840036_13", "text": "Context: The pull-based model, widely used in distributed software development, offers an extremely low barrier to entry for potential contributors (anyone can submit of contributions to any project, through pull-requests). Meanwhile, the project’s core team must act as guardians of code quality, ensuring that pull-requests are carefully inspected before being merged into the main development line. However, with pull-requests becoming increasingly popular, the need for qualified reviewers also increases. GitHub facilitates this, by enabling the crowd-sourcing of pull-request reviews to a larger community of coders than just the project’s core team, as a part of their social coding philosophy. However, having access to more potential reviewers does not necessarily mean that it’s easier to find the right ones (the “needle in a haystack” problem). If left unsupervised, this process may result in communication overhead and delayed pull-request processing. Objective: This study aims to investigate whether and how previous approaches used in bug triaging and code review can be adapted to recommending reviewers for pull-requests, and how to improve the recommendation performance. Method: First, we extend three typical approaches used in bug triaging and code review for the new challenge of assigning reviewers to pull-requests. Second, we analyze social relations between contributors and reviewers, and propose a novel approach by mining each project’s comment networks (CNs). Finally, we combine the CNs with traditional approaches, and evaluate the effectiveness of all these methods on 84 GitHub projects through both quantitative and qualitative analysis. Results: We find that CN-based recommendation can achieve, by itself, similar performance as the traditional approaches. 
However, the mixed approaches can achieve significant improvements compared to using either of them independently. Conclusion: Our study confirms that traditional approaches to bug triaging and code review are feasible for pull-request reviewer recommendations on GitHub. Furthermore, their performance can be improved significantly by combining them with information extracted from prior social interactions between developers on GitHub. These results prompt for novel tools to support process automation in social coding platforms, that combine social (e.g., common interests among developers) and technical factors (e.g., developers' expertise). © 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840036_14", "text": "One of the challenges with research on student engagement is the large variation in the measurement of this construct, which has made it challenging to compare findings across studies. This chapter contributes to our understanding of the measurement of student engagement in three ways. First, we describe strengths and limitations of different methods for assessing student engagement (i.e., self-report measures, experience sampling techniques, teacher ratings, interviews, and observations). Second, we compare and contrast 11 self-report survey measures of student engagement that have been used in prior research. Across these 11 measures, we describe what is measured (scale name and items), use of measure, samples, and the extent of reliability and validity information available on each measure. Finally, we outline limitations with current approaches to measurement and promising future directions. Researchers, educators, and policymakers are increasingly focused on student engagement as the key to addressing problems of low achievement, high levels of student boredom, alienation, and high dropout rates (Fredricks, Blumenfeld, & Paris, 2004). Students become more disengaged as they progress from elementary to middle school, with some estimates that 25–40% of youth are showing signs of disengagement (i.e., uninvolved, apathetic, not trying very hard, and not paying attention) (Steinberg, Brown, & Dornbush, 1996; Yazzie-Mintz, 2007). The consequences of disengagement for middle and high school youth from disadvantaged backgrounds are especially severe; these youth are less likely to graduate from high school and face limited employment prospects, increasing their risk for poverty, poorer health, and involvement in the criminal justice system (National Research Council and the Institute of Medicine, 2004). Although there is growing interest in student engagement, there has been considerable variation in how this construct has been conceptualized over time (Appleton, Christenson, & Furlong, 2008; Fredricks et al., 2004; Jimerson, Campos, & Grief, 2003). Scholars have used a broad range of terms including student engagement, school engagement, student engagement in school, academic engagement, engagement in class, and engagement in schoolwork.
In addition, there has been variation in the number of subcomponents of engagement including different conceptualizations. Some scholars have proposed a two-dimensional model of engagement which includes behavior (e.g., participation, effort, and positive conduct) and emotion (e.g., interest, belonging, value, and positive emotions) (Finn, 1989; Marks, 2000; Skinner, Kindermann, & Furrer, 2009b). More recently, others have outlined a three-component model of engagement that includes behavior, emotion, and a cognitive dimension (i.e., self-regulation, investment in learning, and strategy use) (e.g., Archaumbault, 2009; Fredricks et al., 2004; Jimerson et al., 2003; Wigfield et al., 2008). Finally, Christenson and her colleagues (Appleton, Christenson, Kim, & Reschly, 2006; Reschly & Christenson, 2006) conceptualized engagement as having four dimensions: academic, behavioral, cognitive, and psychological (subsequently referred to as affective) engagement. In this model, aspects of behavior are separated into two different components: academics, which includes time on task, credits earned, and homework completion, and behavior, which includes attendance, class participation, and extracurricular participation. One commonality across the myriad of conceptualizations is that engagement is multidimensional. However, further theoretical and empirical work is needed to determine the extent to which these different dimensions are unique constructs and whether a three or four component model more accurately describes the construct of student engagement. Even when scholars have similar conceptualizations of engagement, there has been considerable variability in the content of items used in instruments. This has made it challenging to compare findings from different studies. This chapter expands on our understanding of the measurement of student engagement in three ways. First, the strengths and limitations of different methods for assessing student engagement are described. Second, 11 self-report survey measures of student engagement that have been used in prior research are compared and contrasted on several dimensions (i.e., what is measured, purposes and uses, samples, and psychometric properties). Finally, we discuss limitations with current approaches to measurement. What is Student Engagement We define student engagement as a meta-construct that includes behavioral, emotional, and cognitive engagement (Fredricks et al., 2004). Although there are large individual bodies of literature on behavioral (i.e., time on task), emotional (i.e., interest and value), and cognitive engagement (i.e., self-regulation and learning strategies), what makes engagement unique is its potential as a multidimensional or “meta”-construct that includes these three dimensions. Behavioral engagement draws on the idea of participation and includes involvement in academic, social, or extracurricular activities and is considered crucial for achieving positive academic outcomes and preventing dropping out (Connell & Wellborn, 1991; Finn, 1989). Other scholars define behavioral engagement in terms of positive conduct, such as following the rules, adhering to classroom norms, and the absence of disruptive behavior such as skipping school or getting into trouble (Finn, Pannozzo, & Voelkl, 1995; Finn & Rock, 1997). Emotional engagement focuses on the extent of positive (and negative) reactions to teachers, classmates, academics, or school.
Others conceptualize emotional engagement as identification with the school, which includes belonging, or a feeling of being important to the school, and valuing, or an appreciation of success in school-related outcomes (Finn, 1989; Voelkl, 1997). Positive emotional engagement is presumed to create student ties to the institution and influence their willingness to do the work (Connell & Wellborn, 1991; Finn, 1989). Finally, cognitive engagement is defined as student's level of investment in learning. It includes being thoughtful, strategic, and willing to exert the necessary effort for comprehension of complex ideas or mastery of difficult skills (Corno & Mandinach, 1983; Fredricks et al., 2004; Meece, Blumenfeld, & Hoyle, 1988). An important question is how engagement differs from motivation. Although the terms are used interchangeably by some, they are different and the distinctions between them are important. Motivation refers to the underlying reasons for a given behavior and can be conceptualized in terms of the direction, intensity, quality, and persistence of one's energies (Maehr & Meyer, 1997). A proliferation of motivational constructs (e.g., intrinsic motivation, goal theory, and expectancy-value models) have been developed to answer two broad questions “Can I do this task” and “Do I want to do this task and why?” (Eccles, Wigfield, & Schiefele, 1998). One commonality across these different motivational constructs is an emphasis on individual differences and underlying psychological processes. In contrast, engagement tends to be thought of in terms of action, or the behavioral, emotional, and cognitive manifestations of motivation (Skinner, Kindermann, Connell, & Wellborn, 2009a). An additional difference is that engagement reflects an individual's interaction with context (Fredricks et al., 2004; Russell, Ainsley, & Frydenberg, 2005). In other words, an individual is engaged in something (i.e., task, activity, and relationship), and their engagement cannot be separated from their environment. This means that engagement is malleable and is responsive to variations in the context that schools can target in interventions (Fredricks et al., 2004; Newmann, Wehlage, & Lamborn, 1992). The self-system model of motivational development (Connell, 1990; Connell & Wellborn, 1991; Deci & Ryan, 1985) provides one theoretical model for studying motivation and engagement. This model is based on the assumption that individuals have three fundamental motivational needs: autonomy, competence, and relatedness. If schools provide children with opportunities to meet these three needs, students will be more engaged. Students' need for relatedness is more likely to occur in classrooms where teachers and peers create a caring and supportive environment; their need for autonomy is met when they feel like they have a choice and when they are motivated by internal rather than external factors; and their need for competence is met when they experience the classroom as optimal in structure and feel like they can achieve desired ends (Fredricks et al., 2004). In contrast, if students experience schools as uncaring, coercive, and unfair, they will become disengaged or disaffected (Skinner et al., 2009a, 2009b). This model assumes that motivation is a necessary but not sufficient precursor to engagement (Appleton et al., 2008; Connell & Wellborn, 1991).
Methods for Studying Engagement", "title": "" }, { "docid": "neg:1840036_15", "text": "This work presents a study of current and future bus systems with respect to their security against various malicious attacks. After a brief description of the most well-known and established vehicular communication systems, we present feasible attacks and potential exposures for these automotive networks. We also provide an approach for secured automotive communication based on modern cryptographic mechanisms that provide secrecy, manipulation prevention and authentication to solve most of the vehicular bus security issues.", "title": "" }, { "docid": "neg:1840036_16", "text": "Introduction: Actinic cheilitis (AC) is a lesion potentially malignant that affects the lips after prolonged exposure to solar ultraviolet (UV) radiation. The present study aimed to assess and describe the proliferative cell activity, using silver-stained nucleolar organizer region (AgNOR) quantification proteins, and to investigate the potential associations between AgNORs and the clinical aspects of AC lesions. Materials and methods: Cases diagnosed with AC were selected and reviewed from Center of Histopathological Diagnosis of the Institute of Biological Sciences, Passo Fundo University, Brazil. Clinical data including clinical presentation of the patients affected with AC were collected. The AgNOR techniques were performed in all recovered cases. The different microscopic areas of interest were printed with magnification of ×1000, and in each case, 200 epithelial cell nuclei were randomly selected. The mean quantity in each nucleus for NORs was recorded. One-way analysis of variance was used for statistical analysis. Results: A total of 22 cases of AC were diagnosed. The patients were aged between 46 and 75 years (mean age: 55 years). Most of the patients affected were males presenting asymptomatic white plaque lesions in the lower lip. The mean value quantified for AgNORs was 2.4 ± 0.63, ranging between 1.49 and 3.82. No statistically significant difference was observed associating the quantity of AgNORs with the clinical aspects collected from the patients (p > 0.05). Conclusion: The present study reports the lack of association between the proliferative cell activity and the clinical aspects observed in patients affected by AC through the quantification of AgNORs. Clinical significance: Knowing the potential relation between the clinical aspects of AC and the proliferative cell activity quantified by AgNORs could play a significant role toward the early diagnosis of malignant lesions in the clinical practice. Keywords: Actinic cheilitis, Proliferative cell activity, Silver-stained nucleolar organizer regions.", "title": "" }, { "docid": "neg:1840036_17", "text": "Create a short summary of your paper (200 words), double-spaced. Your summary will say something like: In this action research study of my classroom of 7th grade mathematics, I investigated ______. I discovered that ____________. As a result of this research, I plan to ___________. You now begin your paper. Pages should be numbered, with the first page of text following the abstract as page one. (In Microsoft Word: after your abstract, rather than inserting a “page break” insert a “section break” to start on the next page; this will allow you to start the 3rd page being numbered as page 1). You should divide this report of your research into sections.
We should be able to identify the following sections and you may use these headings (headings should be bold, centered, and capitalized). Consider the page length to be a minimum.", "title": "" }, { "docid": "neg:1840036_18", "text": "Metastasis, the spread of cancer cells to distant organs, is the main cause of death for cancer patients. Metastasis is often mediated by lymphatic vessels that invade the primary tumor, and an early sign of metastasis is the presence of cancer cells in the regional lymph node (the first lymph node colonized by metastasizing cancer cells from a primary tumor). Understanding the interplay between tumorigenesis and lymphangiogenesis (the formation of lymphatic vessels associated with tumor growth) will provide us with new insights into mechanisms that modulate metastatic spread. In the long term, these insights will help to define new molecular targets that could be used to block lymphatic vessel-mediated metastasis and increase patient survival. Here, we review the molecular mechanisms of embryonic lymphangiogenesis and those that are recapitulated in tumor lymphangiogenesis, with a view to identifying potential targets for therapies designed to suppress tumor lymphangiogenesis and hence metastasis.", "title": "" }, { "docid": "neg:1840036_19", "text": "In this paper we outline the nature of Neuro-linguistic Programming and explore its potential for learning and teaching. The paper draws on current research by Mathison (2003) to illustrate the role of language and internal imagery in teacher-learner interactions, and the way language influences beliefs about learning. Neuro-linguistic Programming (NLP) developed in the USA in the 1970's. It has achieved widespread popularity as a method for communication and personal development. The title, coined by the founders, Bandler and Grinder (1975a), refers to purported systematic, cybernetic links between a person's internal experience (neuro), their language (linguistic) and their patterns of behaviour (programming). In essence NLP is a form of modelling that offers potential for systematic and detailed understanding of people's subjective experience. NLP is eclectic, drawing on models and strategies from a wide range of sources. We outline NLP's approach to teaching and learning, and explore applications through illustrative data from Mathison's study. A particular implication for the training of educators is that of attention to communication skills. Finally we summarise criticisms of NLP that may represent obstacles to its acceptance by academe.", "title": "" } ]
1840037
Blockchain distributed ledger technologies for biomedical and health care applications
[ { "docid": "pos:1840037_0", "text": "A long-standing focus on compliance has traditionally constrained development of fundamental design changes for Electronic Health Records (EHRs). We now face a critical need for such innovation, as personalization and data science prompt patients to engage in the details of their healthcare and restore agency over their medical data. In this paper, we propose MedRec: a novel, decentralized record management system to handle EHRs, using blockchain technology. Our system gives patients a comprehensive, immutable log and easy access to their medical information across providers and treatment sites. Leveraging unique blockchain properties, MedRec manages authentication, confidentiality, accountability and data sharing—crucial considerations when handling sensitive information. A modular design integrates with providers' existing, local data storage solutions, facilitating interoperability and making our system convenient and adaptable. We incentivize medical stakeholders (researchers, public health authorities, etc.) to participate in the network as blockchain “miners”. This provides them with access to aggregate, anonymized data as mining rewards, in return for sustaining and securing the network via Proof of Work. MedRec thus enables the emergence of data economics, supplying big data to empower researchers while engaging patients and providers in the choice to release metadata. The purpose of this paper is to expose, in preparation for field tests, a working prototype through which we analyze and discuss our approach and the potential for blockchain in health IT and research.", "title": "" }, { "docid": "pos:1840037_1", "text": "A new mechanism is proposed for securing a blockchain applied to contracts management such as digital rights management. This mechanism includes a new consensus method using a credibility score and creates a hybrid blockchain by alternately using this new method and proof-of-stake. This makes it possible to prevent an attacker from monopolizing resources and to keep securing blockchains.", "title": "" } ]
[ { "docid": "neg:1840037_0", "text": "In this paper, we consider the Direct Perception approach for autonomous driving. Previous efforts in this field focused more on feature extraction of the road markings and other vehicles in the scene rather than on the autonomous driving algorithm and its performance under realistic assumptions. Our main contribution in this paper is introducing a new, more robust, and more realistic Direct Perception framework and corresponding algorithm for autonomous driving. First, we compare the top 3 Convolutional Neural Networks (CNN) models in the feature extraction competitions and test their performance for autonomous driving. The experimental results showed that GoogLeNet performs the best in this application. Subsequently, we propose a deep learning based algorithm for autonomous driving, and we refer to our algorithm as GoogLenet for Autonomous Driving (GLAD). Unlike previous efforts, GLAD makes no unrealistic assumptions about the autonomous vehicle or its surroundings, and it uses only five affordance parameters to control the vehicle as compared to the 14 parameters used by prior efforts. Our simulation results show that the proposed GLAD algorithm outperforms previous Direct Perception algorithms both on empty roads and while driving with other surrounding vehicles.", "title": "" }, { "docid": "neg:1840037_1", "text": "Recent years have seen a tremendous increase in the demand for wireless bandwidth. To support this demand by innovative and resourceful use of technology, future communication systems will have to shift towards higher carrier frequencies. Due to the tight regulatory situation, frequencies in the atmospheric attenuation window around 300 GHz appear very attractive to facilitate an indoor, short range, ultra high speed THz communication system. In this paper, we investigate the influence of diffuse scattering at such high frequencies on the characteristics of the communication channel and its implications on the non-line-of-sight propagation path. The Kirchhoff approach is verified by an experimental study of diffuse scattering from randomly rough surfaces commonly encountered in indoor environments using a fiber-coupled terahertz time-domain spectroscopy system to perform angle- and frequency-dependent measurements. Furthermore, we integrate the Kirchhoff approach into a self-developed ray tracing algorithm to model the signal coverage of a typical office scenario.", "title": "" }, { "docid": "neg:1840037_2", "text": "The growing movement of biologically inspired design is driven in part by the need for sustainable development and in part by the recognition that nature could be a source of innovation. Biologically inspired design by definition entails cross-domain analogies from biological systems to problems in engineering and other design domains. However, the practice of biologically inspired design at present typically is ad hoc, with little systemization of either biological knowledge for the purposes of engineering design or the processes of transferring knowledge of biological designs to engineering problems. In this paper we present an intricate episode of biologically inspired engineering design that unfolded over an extended period of time. We then analyze our observations in terms of why, what, how, and when questions of analogy. 
This analysis contributes toward a content theory of creative analogies in the context of biologically inspired design.", "title": "" }, { "docid": "neg:1840037_3", "text": "Self adaptive video games are important for rehabilitation at home. Recent works have explored different techniques with satisfactory results but these have a poor use of game design concepts like Challenge and Conservative Handling of Failure. Dynamic Difficult Adjustment with Help (DDA-Help) approach is presented as a new point of view for self adaptive video games for rehabilitation. Procedural Content Generation (PCG) and automatic helpers are used to a different work on Conservative Handling of Failure and Challenge. An experience with amblyopic children showed the proposal effectiveness, increasing the visual acuity 2-3 level following the Snellen Vision Test and improving the performance curve during the game time.", "title": "" }, { "docid": "neg:1840037_4", "text": "This paper presents the recent development in automatic vision based technology. Use of this technology is increasing in agriculture and fruit industry. An automatic fruit quality inspection system for sorting and grading of tomato fruit and defected tomato detection discussed here. The main aim of this system is to replace the manual inspection system. This helps in speed up the process improve accuracy and efficiency and reduce time. This system collect image from camera which is placed on conveyor belt. Then image processing is done to get required features of fruits such as texture, color and size. Defected fruit is detected based on blob detection, color detection is done based on thresholding and size detection is based on binary image of tomato. Sorting is done based on color and grading is done based on size.", "title": "" }, { "docid": "neg:1840037_5", "text": "NOTICE This report was prepared by Columbia University in the course of performing work contracted for and sponsored by the New York State Energy Research and Development Authority (hereafter \" NYSERDA \"). The opinions expressed in this report do not necessarily reflect those of NYSERDA or the State of New York, and reference to any specific product, service, process, or method does not constitute an implied or expressed recommendation or endorsement of it. Further, NYSERDA, the State of New York, and the contractor make no warranties or representations, expressed or implied, as to the fitness for particular purpose or merchantability of any product, apparatus, or service, or the usefulness, completeness, or accuracy of any processes, methods, or other information contained, described, disclosed, or referred to in this report. NYSERDA, the State of New York, and the contractor make no representation that the use of any product, apparatus, process, method, or other information will not infringe privately owned rights and will assume no liability for any loss, injury, or damage resulting from, or occurring in connection with, the use of information contained, described, disclosed, or referred to in this report. iii ABSTRACT A research project was conducted to develop a concrete material that contains recycled waste glass and reprocessed carpet fibers and would be suitable for precast concrete wall panels. Post-consumer glass and used carpets constitute major solid waste components. Therefore their beneficial use will reduce the pressure on scarce landfills and the associated costs to taxpayers. 
By identifying and utilizing the special properties of these recycled materials, it is also possible to produce concrete elements with improved esthetic and thermal insulation properties. Using recycled waste glass as substitute for natural aggregate in commodity products such as precast basement wall panels brings only modest economic benefits at best, because sand, gravel, and crushed stone are fairly inexpensive. However, if the esthetic properties of the glass are properly exploited, such as in building façade elements with architectural finishes, the resulting concrete panels can compete very effectively with other building materials such as natural stone. As for recycled carpet fibers, the intent of this project was to exploit their thermal properties in order to increase the thermal insulation of concrete wall panels. In this regard, only partial success was achieved, because commercially reprocessed carpet fibers improve the thermal properties of concrete only marginally, as compared with other methods, such as the use of …", "title": "" }, { "docid": "neg:1840037_6", "text": "We address the problem of dense visual-semantic embedding that maps not only full sentences and whole images but also phrases within sentences and salient regions within images into a multimodal embedding space. Such dense embeddings, when applied to the task of image captioning, enable us to produce several region-oriented and detailed phrases rather than just an overview sentence to describe an image. Specifically, we present a hierarchical structured recurrent neural network (RNN), namely Hierarchical Multimodal LSTM (HM-LSTM). Compared with chain structured RNN, our proposed model exploits the hierarchical relations between sentences and phrases, and between whole images and image regions, to jointly establish their representations. Without the need of any supervised labels, our proposed model automatically learns the fine-grained correspondences between phrases and image regions towards the dense embedding. Extensive experiments on several datasets validate the efficacy of our method, which compares favorably with the state-of-the-art methods.", "title": "" }, { "docid": "neg:1840037_7", "text": "There is increasing evidence that gardening provides substantial human health benefits. However, no formal statistical assessment has been conducted to test this assertion. Here, we present the results of a meta-analysis of research examining the effects of gardening, including horticultural therapy, on health. We performed a literature search to collect studies that compared health outcomes in control (before participating in gardening or non-gardeners) and treatment groups (after participating in gardening or gardeners) in January 2016. The mean difference in health outcomes between the two groups was calculated for each study, and then the weighted effect size determined both across all and sets of subgroup studies. Twenty-two case studies (published after 2001) were included in the meta-analysis, which comprised 76 comparisons between control and treatment groups. Most studies came from the United States, followed by Europe, Asia, and the Middle East. Studies reported a wide range of health outcomes, such as reductions in depression, anxiety, and body mass index, as well as increases in life satisfaction, quality of life, and sense of community. 
Meta-analytic estimates showed a significant positive effect of gardening on the health outcomes both for all and sets of subgroup studies, whilst effect sizes differed among eight subgroups. Although Egger's test indicated the presence of publication bias, significant positive effects of gardening remained after adjusting for this using trim and fill analysis. This study has provided robust evidence for the positive effects of gardening on health. A regular dose of gardening can improve public health.", "title": "" }, { "docid": "neg:1840037_8", "text": "We present a new, robust and computationally efficient Hierarchical Bayesian model for effective topic correlation modeling. We model the prior distribution of topics by a Generalized Dirichlet distribution (GD) rather than a Dirichlet distribution as in Latent Dirichlet Allocation (LDA). We define this model as GD-LDA. This framework captures correlations between topics, as in the Correlated Topic Model (CTM) and Pachinko Allocation Model (PAM), and is faster to infer than CTM and PAM. GD-LDA is effective to avoid over-fitting as the number of topics is increased. As a tree model, it accommodates the most important set of topics in the upper part of the tree based on their probability mass. Thus, GD-LDA provides the ability to choose significant topics effectively. To discover topic relationships, we perform hyper-parameter estimation based on Monte Carlo EM Estimation. We provide results using Empirical Likelihood(EL) in 4 public datasets from TREC and NIPS. Then, we present the performance of GD-LDA in ad hoc information retrieval (IR) based on MAP, P@10, and Discounted Gain. We discuss an empirical comparison of the fitting time. We demonstrate significant improvement over CTM, LDA, and PAM for EL estimation. For all the IR measures, GD-LDA shows higher performance than LDA, the dominant topic model in IR. All these improvements with a small increase in fitting time than LDA, as opposed to CTM and PAM.", "title": "" }, { "docid": "neg:1840037_9", "text": "We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.", "title": "" }, { "docid": "neg:1840037_10", "text": "An emerging Internet application, IPTV, has the potential to flood Internet access and backbone ISPs with massive amounts of new traffic. Although many architectures are possible for IPTV video distribution, several mesh-pull P2P architectures have been successfully deployed on the Internet. 
In order to gain insights into mesh-pull P2P IPTV systems and the traffic loads they place on ISPs, we have undertaken an in-depth measurement study of one of the most popular IPTV systems, namely, PPLive. We have developed a dedicated PPLive crawler, which enables us to study the global characteristics of the mesh-pull PPLive system. We have also collected extensive packet traces for various different measurement scenarios, including both campus access networks and residential access networks. The measurement results obtained through these platforms bring important insights into P2P IPTV systems. Specifically, our results show the following. 1) P2P IPTV users have the similar viewing behaviors as regular TV users. 2) During its session, a peer exchanges video data dynamically with a large number of peers. 3) A small set of super peers act as video proxy and contribute significantly to video data uploading. 4) Users in the measured P2P IPTV system still suffer from long start-up delays and playback lags, ranging from several seconds to a couple of minutes. Insights obtained in this study will be valuable for the development and deployment of future P2P IPTV systems.", "title": "" }, { "docid": "neg:1840037_11", "text": "The application of a recently developed broadband beamformer to distinguish audio signals received from different directions is experimentally tested. The beamformer combines spatial and temporal subsampling using a nested array and multirate techniques which leads to the same region of support in the frequency domain for all subbands. This allows using the same beamformer for all subbands. The experimental set-up is presented and the recorded signals are analyzed. Results indicate that the proposed approach can be used to distinguish plane waves propagating with different direction of arrivals.", "title": "" }, { "docid": "neg:1840037_12", "text": "The PEP-R (psychoeducational profile revised) is an instrument that has been used in many countries to assess abilities and formulate treatment programs for children with autism and related developmental disorders. To the end to provide further information on the PEP-R's psychometric properties, a large sample (N = 137) of children presenting Autistic Disorder symptoms under the age of 12 years, including low-functioning individuals, was examined. Results yielded data of interest especially in terms of: Cronbach's alpha, interrater reliability, and validation with the Vineland Adaptive Behavior Scales. These findings help complete the instrument's statistical description and augment its usefulness, not only in designing treatment programs for these individuals, but also as an instrument for verifying the efficacy of intervention.", "title": "" }, { "docid": "neg:1840037_13", "text": "In Pro Unity Game Development with C#, Alan Thorn, author of Learn Unity for 2D Game Development and experienced game developer, takes you through the complete C# workflow for developing a cross-platform first person shooter in Unity. C# is the most popular programming language for experienced Unity developers, helping them get the most out of what Unity offers. If you're already using C# with Unity and you want to take the next step in becoming an experienced, professional-level game developer, this is the book you need.
Whether you are a student, an indie developer, or a seasoned game dev professional, you’ll find helpful C# examples of how to build intelligent enemies, create event systems and GUIs, develop save-game states, and lots more. You’ll understand and apply powerful programming concepts such as singleton classes, component based design, resolution independence, delegates, and event driven programming.", "title": "" }, { "docid": "neg:1840037_14", "text": "We present a real-time deep learning framework for video-based facial performance capture---the dense 3D tracking of an actor's face given a monocular video. Our pipeline begins with accurately capturing a subject using a high-end production facial capture pipeline based on multi-view stereo tracking and artist-enhanced animations. With 5--10 minutes of captured footage, we train a convolutional neural network to produce high-quality output, including self-occluded regions, from a monocular video sequence of that subject. Since this 3D facial performance capture is fully automated, our system can drastically reduce the amount of labor involved in the development of modern narrative-driven video games or films involving realistic digital doubles of actors and potentially hours of animated dialogue per character. We compare our results with several state-of-the-art monocular real-time facial capture techniques and demonstrate compelling animation inference in challenging areas such as eyes and lips.", "title": "" }, { "docid": "neg:1840037_15", "text": "In this paper a method for holographic localization of passive UHF-RFID transponders is presented. It is shown how persons or devices that are equipped with a RFID reader and that are moving along a trajectory can be enabled to locate tagged objects reliably. The localization method is based on phase values sampled from a synthetic aperture by a RFID reader. The calculated holographic image is a spatial probability density function that reveals the actual RFID tag position. Experimental results are presented which show that the holographically measured positions are in good agreement with the real position of the tag. Additional simulations have been carried out to investigate the positioning accuracy of the proposed method depending on different distortion parameters and measuring conditions. The effect of antenna phase center displacement is briefly discussed and measurements are shown that quantify the influence on the phase measurement.", "title": "" }, { "docid": "neg:1840037_16", "text": "Mobile wellness application is widely used for assisting self-monitoring practice to monitor user's daily food intake and physical activities. Although these mostly free downloadable mobile application is easy to use and covers many aspects of wellness routines, there is no proof of prolonged use. Previous research reported that user will stop using the application and turned back into their old attitude of food consumptions. The purpose of this study is to examine the factors that influence the continuance intention to adopt a mobile phone wellness application. Review of Information System Continuance Model in the areas such as mobile health, mobile phone wellness application, social network and web 2.0, were done to examine the existing factors. 
From the critical review, two external factors namely Social Norm and Perceive Interactivity is believed to have the ability to explain the social perspective behavior and also the effect of perceiving interactivity towards prolong usage of wellness mobile application. These findings contribute to the development of the Mobile Phones Wellness Application Continuance Use theoretical model.", "title": "" }, { "docid": "neg:1840037_17", "text": "The proliferation of MP3 players and the exploding amount of digital music content call for novel ways of music organization and retrieval to meet the ever-increasing demand for easy and effective information access. As almost every music piece is created to convey emotion, music organization and retrieval by emotion is a reasonable way of accessing music information. A good deal of effort has been made in the music information retrieval community to train a machine to automatically recognize the emotion of a music signal. A central issue of machine recognition of music emotion is the conceptualization of emotion and the associated emotion taxonomy. Different viewpoints on this issue have led to the proposal of different ways of emotion annotation, model training, and result visualization. This article provides a comprehensive review of the methods that have been proposed for music emotion recognition. Moreover, as music emotion recognition is still in its infancy, there are many open issues. We review the solutions that have been proposed to address these issues and conclude with suggestions for further research.", "title": "" }, { "docid": "neg:1840037_18", "text": "Computer programming is being introduced in schools worldwide as part of a movement that promotes Computational Thinking (CT) skills among young learners. In general, learners use visual, block-based programming languages to acquire these skills, with Scratch being one of the most popular ones. Similar to professional developers, learners also copy and paste their code, resulting in duplication. In this paper we present the findings of correlating the assessment of the CT skills of learners with the presence of software clones in over 230,000 projects obtained from the Scratch platform. Specifically, we investigate i) if software cloning is an extended practice in Scratch projects, ii) if the presence of code cloning is independent of the programming mastery of learners, iii) if code cloning can be found more frequently in Scratch projects that require specific skills (as parallelism or logical thinking), and iv) if learners who have the skills to avoid software cloning really do so. The results show that i) software cloning can be commonly found in Scratch projects, that ii) it becomes more frequent as learners work on projects that require advanced skills, that iii) no CT dimension is to be found more related to the absence of software clones than others, and iv) that learners -even if they potentially know how to avoid cloning- still copy and paste frequently. The insights from this paper could be used by educators and learners to determine when it is pedagogically more effective to address software cloning, by educational programming platform developers to adapt their systems, and by learning assessment tools to provide better evaluations.", "title": "" }, { "docid": "neg:1840037_19", "text": "This research is on the use of a decision tree approach for predicting students‟ academic performance. Education is the platform on which a society improves the quality of its citizens. 
To improve on the quality of education, there is a need to be able to predict the academic performance of the students. The IBM Statistical Package for the Social Sciences (SPSS) is used to apply the Chi-Square Automatic Interaction Detection (CHAID) in producing the decision tree structure. Factors such as the financial status of the students, motivation to learn, and gender were discovered to affect the performance of the students. 66.8% of the students were predicted to have passed while 33.2% were predicted to fail. It is observed that a much larger percentage of the students were likely to pass and that there is also a higher likelihood of male students passing than female students.", "title": "" } ]
1840038
GPS Spoofing Detection Based on Decision Fusion with a K-out-of-N Rule
[ { "docid": "pos:1840038_0", "text": "The area under the ROC curve, or the equivalent Gini index, is a widely used measure of performance of supervised classification rules. It has the attractive property that it side-steps the need to specify the costs of the different kinds of misclassification. However, the simple form is only applicable to the case of two classes. We extend the definition to the case of more than two classes by averaging pairwise comparisons. This measure reduces to the standard form in the two class case. We compare its properties with the standard measure of proportion correct and an alternative definition of proportion correct based on pairwise comparison of classes for a simple artificial case and illustrate its application on eight data sets. On the data sets we examined, the measures produced similar, but not identical results, reflecting the different aspects of performance that they were measuring. Like the area under the ROC curve, the measure we propose is useful in those many situations where it is impossible to give costs for the different kinds of misclassification.", "title": "" }, { "docid": "pos:1840038_1", "text": "This paper presents an S-Transform based probabilistic neural network (PNN) classifier for recognition of power quality (PQ) disturbances. The proposed method requires less number of features as compared to wavelet based approach for the identification of PQ events. The features extracted through the S-Transform are trained by a PNN for automatic classification of the PQ events. Since the proposed methodology can reduce the features of the disturbance signal to a great extent without losing its original property, less memory space and learning PNN time are required for classification. Eleven types of disturbances are considered for the classification problem. The simulation results reveal that the combination of S-Transform and PNN can effectively detect and classify different PQ events. The classification performance of PNN is compared with a feedforward multilayer (FFML) neural network (NN) and learning vector quantization (LVQ) NN. It is found that the classification performance of PNN is better than both FFML and LVQ.", "title": "" } ]
[ { "docid": "neg:1840038_0", "text": "The random forest (RF) classifier is an ensemble classifier derived from decision tree idea. However the parallel operations of several classifiers along with use of randomness in sample and feature selection has made the random forest a very strong classifier with accuracy rates comparable to most of currently used classifiers. Although, the use of random forest on handwritten digits has been considered before, in this paper RF is applied in recognizing Persian handwritten characters. Trying to improve the recognition rate, we suggest converting the structure of decision trees from a binary tree to a multi branch tree. The improvement gained this way proves the applicability of the idea.", "title": "" }, { "docid": "neg:1840038_1", "text": "Most experiments are done in laboratories. However, there is also a theory and practice of field experimentation. It has had its successes and failures over the past four decades but is now increasingly used for answering causal questions. This is true for both randomized and-perhaps more surprisingly-nonrandomized experiments. In this article, we review the history of the use of field experiments, discuss some of the reasons for their current renaissance, and focus the bulk of the article on the particular technical developments that have made this renaissance possible across four kinds of widely used experimental and quasi-experimental designs-randomized experiments, regression discontinuity designs in which those units above a cutoff get one treatment and those below get another, short interrupted time series, and nonrandomized experiments using a nonequivalent comparison group. We focus this review on some of the key technical developments addressing problems that previously stymied accurate effect estimation, the solution of which opens the way for accurate estimation of effects under the often difficult conditions of field implementation-the estimation of treatment effects under partial treatment implementation, the prevention and analysis of attrition, analysis of nested designs, new analytic developments for both regression discontinuity designs and short interrupted time series, and propensity score analysis. We also cover the key empirical evidence showing the conditions under which some nonrandomized experiments may be able to approximate results from randomized experiments.", "title": "" }, { "docid": "neg:1840038_2", "text": "Perception of universal facial beauty has long been debated amongst psychologists and anthropologists. In this paper, we perform experiments to evaluate the extent of universal beauty by surveying a number of diverse human referees to grade a collection of female facial images. Results obtained show that there exists a strong central tendency in the human grades, thus exhibiting agreement on beauty assessment. We then trained an automated classifier using the average human grades as the ground truth and used it to classify an independent test set of facial images. The high accuracy achieved proves that this classifier can be used as a general, automated tool for objective classification of female facial beauty. 
Potential applications exist in the entertainment industry, cosmetic industry, virtual media, and plastic surgery.", "title": "" }, { "docid": "neg:1840038_3", "text": "Both the industrial organization theory (IO) and the resource-based view of the firm (RBV) have advanced our understanding of the antecedents of competitive advantage but few have attempted to verify the outcome variables of competitive advantage and the persistence of such outcome variables. Here by integrating both IO and RBV perspectives in the analysis of competitive advantage at the firm level, our study clarifies a conceptual distinction between two types of competitive advantage: temporary competitive advantage and sustainable competitive advantage, and explores how firms transform temporary competitive advantage into sustainable competitive advantage. Testing of the developed hypotheses, based on a survey of 165 firms from Taiwan’s information and communication technology industry, suggests that firms with a stronger market position can only attain a better outcome of temporary competitive advantage whereas firms possessing a superior position in technological resources or capabilities can attain a better outcome of sustainable competitive advantage. More importantly, firms can leverage a temporary competitive advantage as an outcome of market position, to improving their technological resource and capability position, which in turn can enhance their sustainable competitive advantage.", "title": "" }, { "docid": "neg:1840038_4", "text": "Skill prerequisite information is useful for tutoring systems that assess student knowledge or that provide remediation. These systems often encode prerequisites as graphs designed by subject matter experts in a costly and time-consuming process. In this paper, we introduce Combined student Modeling and prerequisite Discovery (COMMAND), a novel algorithm for jointly inferring a prerequisite graph and a student model from data. Learning a COMMAND model requires student performance data and a mapping of items to skills (Q-matrix). COMMAND learns the skill prerequisite relations as a Bayesian network (an encoding of the probabilistic dependence among the skills) via a two-stage learning process. In the first stage, it uses an algorithm called Structural Expectation Maximization to select a class of equivalent Bayesian networks; in the second stage, it uses curriculum information to select a single Bayesian network. Our experiments on simulations and real student data suggest that COMMAND is better than prior methods in the literature.", "title": "" }, { "docid": "neg:1840038_5", "text": "Primary syphilis with oropharyngeal manifestations should be kept in mind, though. Lips and tongue ulcers are the most frequently reported lesions and tonsillar ulcers are much more rare. We report the case of a 24-year-old woman with a syphilitic ulcer localized in her left tonsil.", "title": "" }, { "docid": "neg:1840038_6", "text": "A knowledgeable observer of a game of football (soccer) can make a subjective evaluation of the quality of passes made between players during the game, such as rating them as Good, OK, or Bad. In this article, we consider the problem of producing an automated system to make the same evaluation of passes and present a model to solve this problem.\n Recently, many professional football leagues have installed object tracking systems in their stadiums that generate high-resolution and high-frequency spatiotemporal trajectories of the players and the ball. 
Beginning with the thesis that much of the information required to make the pass ratings is available in the trajectory signal, we further postulated that using complex data structures derived from computational geometry would enable domain football knowledge to be included in the model by computing metric variables in a principled and efficient manner. We designed a model that computes a vector of predictor variables for each pass made and uses machine learning techniques to determine a classification function that can accurately rate passes based only on the predictor variable vector.\n Experimental results show that the learned classification functions can rate passes with 90.2% accuracy. The agreement between the classifier ratings and the ratings made by a human observer is comparable to the agreement between the ratings made by human observers, and suggests that significantly higher accuracy is unlikely to be achieved. Furthermore, we show that the predictor variables computed using methods from computational geometry are among the most important to the learned classifiers.", "title": "" }, { "docid": "neg:1840038_7", "text": "SR-IOV capable network devices offer the benefits of direct I/O throughput and reduced CPU utilization while greatly increasing the scalability and sharing capabilities of the device. SR-IOV allows the benefits of the paravirtualized driver’s throughput increase and additional CPU usage reductions in HVMs (Hardware Virtual Machines). SR-IOV uses direct I/O assignment of a network device to multiple VMs, maximizing the potential for using the full bandwidth capabilities of the network device, as well as enabling unmodified guest OS based device drivers which will work for different underlying VMMs. Drawing on our recent experience in developing an SR-IOV capable networking solution for the Xen hypervisor we discuss the system level requirements and techniques for SR-IOV enablement on the platform. We discuss PCI configuration considerations, direct MMIO, interrupt handling and DMA into an HVM using an IOMMU (I/O Memory Management Unit). We then explain the architectural, design and implementation considerations for SR-IOV networking in Xen in which the Physical Function has a driver running in the driver domain that serves as a “master” and each Virtual Function exposed to a guest VM has its own virtual driver.", "title": "" }, { "docid": "neg:1840038_8", "text": "A common assumption in studies of interruptions is that one is focused in an activity and then distracted by other stimuli. We take the reverse perspective and examine whether one might first be in an attentional state that makes one susceptible to communications typically associated with distraction. We explore the confluence of multitasking and workplace communications from three temporal perspectives -- prior to an interaction, when tasks and communications are interleaved, and at the end of the day. Using logging techniques and experience sampling, we observed 32 employees in situ for five days. We found that certain attentional states lead people to be more susceptible to particular types of interaction. Rote work is followed by more Facebook or face-to-face interaction. Focused and aroused states are followed by more email. The more time in email and face-fo-face interaction, and the more total screen switches, the less productive people feel at the day's end. 
We present the notion of emotional homeostasis along with new directions for multitasking research.", "title": "" }, { "docid": "neg:1840038_9", "text": "Reproducibility of computational studies is a hallmark of scientific methodology. It enables researchers to build with confidence on the methods and findings of others, reuse and extend computational pipelines, and thereby drive scientific progress. Since many experimental studies rely on computational analyses, biologists need guidance on how to set up and document reproducible data analyses or simulations. In this paper, we address several questions about reproducibility. For example, what are the technical and non-technical barriers to reproducible computational studies? What opportunities and challenges do computational notebooks offer to overcome some of these barriers? What tools are available and how can they be used effectively? We have developed a set of rules to serve as a guide to scientists with a specific focus on computational notebook systems, such as Jupyter Notebooks, which have become a tool of choice for many applications. Notebooks combine detailed workflows with narrative text and visualization of results. Combined with software repositories and open source licensing, notebooks are powerful tools for transparent, collaborative, reproducible, and reusable data analyses.", "title": "" }, { "docid": "neg:1840038_10", "text": "In this paper, we present a study on learning visual recognition models from large scale noisy web data. We build a new database called WebVision, which contains more than 2.4 million web images crawled from the Internet by using queries generated from the 1, 000 semantic concepts of the ILSVRC 2012 benchmark. Meta information along with those web images (e.g., title, description, tags, etc.) are also crawled. A validation set and test set containing human annotated images are also provided to facilitate algorithmic development. Based on our new database, we obtain a few interesting observations: 1) the noisy web images are sufficient for training a good deep CNN model for visual recognition; 2) the model learnt from our WebVision database exhibits comparable or even better generalization ability than the one trained from the ILSVRC 2012 dataset when being transferred to new datasets and tasks; 3) a domain adaptation issue (a.k.a., dataset bias) is observed, which means the dataset can be used as the largest benchmark dataset for visual domain adaptation. Our new WebVision database and relevant studies in this work would benefit the advance of learning state-of-the-art visual models with minimum supervision based on web data.", "title": "" }, { "docid": "neg:1840038_11", "text": "Skills like computational thinking, problem solving, handling complexity, team-work and project management are essential for future careers and needs to be taught to students at the elementary level itself. Computer programming knowledge and skills, experiencing technology and conducting science and engineering experiments are also important for students at elementary level. However, teaching such skills effectively through active learning can be challenging for educators. In this paper, we present our approach and experiences in teaching such skills to several elementary level children using Lego Mindstorms EV3 robotics education kit. We describe our learning environment consisting of lessons, worksheets, hands-on activities and assessment. 
We taught students how to design, construct and program robots using components such as motors, sensors, wheels, axles, beams, connectors and gears. Students also gained knowledge on basic programming constructs such as control flow, loops, branches and conditions using a visual programming environment. We carefully observed how students performed various tasks and solved problems. We present experimental results which demonstrates that our teaching methodology consisting of both the course content and pedagogy was effective in imparting the desired skills and knowledge to elementary level children. The students also participated in a competitive World Robot Olympiad India event and qualified during the regional round which is an evidence of the effectiveness of the approach.", "title": "" }, { "docid": "neg:1840038_12", "text": "This paper proposes a novel single-stage high-power-factor ac/dc converter with symmetrical topology. The circuit topology is derived from the integration of two buck-boost power-factor-correction (PFC) converters and a full-bridge series resonant dc/dc converter. The switch-utilization factor is improved by using two active switches to serve in the PFC circuits. A high power factor at the input line is assured by operating the buck-boost converters at discontinuous conduction mode. With symmetrical operation and elaborately designed circuit parameters, zero-voltage switching on all the active power switches of the converter can be retained to achieve high circuit efficiency. The operation modes, design equations, and design steps for the circuit parameters are proposed. A prototype circuit designed for a 200-W dc output was built and tested to verify the analytical predictions. Satisfactory performances are obtained from the experimental results.", "title": "" }, { "docid": "neg:1840038_13", "text": "Consider a biped evolving in the sagittal plane. The unexpected rotation of the supporting foot can be avoided by controlling the zero moment point (ZMP). The objective of this study is to propose and analyze a control strategy for simultaneously regulating the position of the ZMP and the joints of the robot. If the tracking requirements were posed in the time domain, the problem would be underactuated in the sense that the number of inputs would be less than the number of outputs. To get around this issue, the proposed controller is based on a path-following control strategy, previously developed for dealing with the underactuation present in planar robots without actuated ankles. In particular, the control law is defined in such a way that only the kinematic evolution of the robot's state is regulated, but not its temporal evolution. The asymptotic temporal evolution of the robot is completely defined through a one degree-of-freedom subsystem of the closed-loop model. Since the ZMP is controlled, bipedal walking that includes a prescribed rotation of the foot about the toe can also be considered. Simple analytical conditions are deduced that guarantee the existence of a periodic motion and the convergence toward this motion.", "title": "" }, { "docid": "neg:1840038_14", "text": "Endowing a chatbot with personality is challenging but significant to deliver more realistic and natural conversations. In this paper, we address the issue of generating responses that are coherent to a pre-specified personality or profile. We present a method that uses generic conversation data from social media (without speaker identities) to generate profile-coherent responses. 
The central idea is to detect whether a profile should be used when responding to a user post (by a profile detector), and if necessary, select a key-value pair from the profile to generate a response forward and backward (by a bidirectional decoder) so that a personalitycoherent response can be generated. Furthermore, in order to train the bidirectional decoder with generic dialogue data, a position detector is designed to predict a word position from which decoding should start given a profile value. Manual and automatic evaluation shows that our model can deliver more coherent, natural, and diversified responses.", "title": "" }, { "docid": "neg:1840038_15", "text": "This work uses deep learning models for daily directional movements prediction of a stock price using financial news titles and technical indicators as input. A comparison is made between two different sets of technical indicators, set 1: Stochastic %K, Stochastic %D, Momentum, Rate of change, William’s %R, Accumulation/Distribution (A/D) oscillator and Disparity 5; set 2: Exponential Moving Average, Moving Average Convergence-Divergence, Relative Strength Index, On Balance Volume and Bollinger Bands. Deep learning methods can detect and analyze complex patterns and interactions in the data allowing a more precise trading process. Experiments has shown that Convolutional Neural Network (CNN) can be better than Recurrent Neural Networks (RNN) on catching semantic from texts and RNN is better on catching the context information and modeling complex temporal characteristics for stock market forecasting. So, there are two models compared in this paper: a hybrid model composed by a CNN for the financial news and a Long Short-Term Memory (LSTM) for technical indicators, named as SI-RCNN; and a LSTM network only for technical indicators, named as I-RNN. The output of each model is used as input for a trading agent that buys stocks on the current day and sells the next day when the model predicts that the price is going up, otherwise the agent sells stocks on the current day and buys the next day. The proposed method shows a major role of financial news in stabilizing the results and almost no improvement when comparing different sets of technical indicators.", "title": "" }, { "docid": "neg:1840038_16", "text": "We present a diff algorithm for XML data. This work is motivated by the support for change control in the context of the Xyleme project that is investigating dynamic warehouses capable of storing massive volume of XML data. Because of the context, our algorithm has to be very efficient in terms of speed and memory space even at the cost of some loss of “quality”. Also, it considers, besides insertions, deletions and updates (standard in diffs), a move operation on subtrees that is essential in the context of XML. Intuitively, our diff algorithm uses signatures to match (large) subtrees that were left unchanged between the old and new versions. Such exact matchings are then possibly propagated to ancestors and descendants to obtain more matchings. It also uses XML specific information such as ID attributes. We provide a performance analysis of the algorithm. We show that it runs in average in linear time vs. quadratic time for previous algorithms. We present experiments on synthetic data that confirm the analysis. Since this problem is NPhard, the linear time is obtained by trading some quality. 
We present experiments (again on synthetic data) that show that the output of our algorithm is reasonably close to the “optimal” in terms of quality. Finally we present experiments on a small sample of XML pages found on the Web.", "title": "" }, { "docid": "neg:1840038_17", "text": "Load Balancing is essential for efficient operations indistributed environments. As Cloud Computing is growingrapidly and clients are demanding more services and betterresults, load balancing for the Cloud has become a veryinteresting and important research area. Many algorithms weresuggested to provide efficient mechanisms and algorithms forassigning the client's requests to available Cloud nodes. Theseapproaches aim to enhance the overall performance of the Cloudand provide the user more satisfying and efficient services. Inthis paper, we investigate the different algorithms proposed toresolve the issue of load balancing and task scheduling in CloudComputing. We discuss and compare these algorithms to providean overview of the latest approaches in the field.", "title": "" }, { "docid": "neg:1840038_18", "text": "Body temperature is one of the key parameters for health monitoring of premature infants at the neonatal intensive care unit (NICU). In this paper, we propose and demonstrate a design of non-invasive neonatal temperature monitoring with wearable sensors. A negative temperature coefficient (NTC) resistor is applied as the temperature sensor due to its accuracy and small size. Conductive textile wires are used to make the sensor integration compatible for a wearable non-invasive monitoring platform, such as a neonatal smart jacket. Location of the sensor, materials and appearance are designed to optimize the functionality, patient comfort and the possibilities for aesthetic features. A prototype belt is built of soft bamboo fabrics with NTC sensor integrated to demonstrate the temperature monitoring. Experimental results from the testing on neonates at NICU of Máxima Medical Center (MMC), Veldhoven, the Netherlands, show the accurate temperature monitoring by the prototype belt comparing with the standard patient monitor.", "title": "" }, { "docid": "neg:1840038_19", "text": "Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios. We present Okutama-Action, a new video dataset for aerial view concurrent human action detection. It consists of 43 minute-long fully-annotated sequences with 12 action classes. Okutama-Action features many challenges missing in current datasets, including dynamic transition of actions, significant changes in scale and aspect ratio, abrupt camera movement, as well as multi-labeled actors. As a result, our dataset is more challenging than existing ones, and will help push the field forward to enable real-world applications.", "title": "" } ]
1840039
Actions speak as loud as words: predicting relationships from social behavior data
[ { "docid": "pos:1840039_0", "text": "As user-generated content and interactions have overtaken the web as the default mode of use, questions of whom and what to trust have become increasingly important. Fortunately, online social networks and social media have made it easy for users to indicate whom they trust and whom they do not. However, this does not solve the problem since each user is only likely to know a tiny fraction of other users, we must have methods for inferring trust - and distrust - between users who do not know one another. In this paper, we present a new method for computing both trust and distrust (i.e., positive and negative trust). We do this by combining an inference algorithm that relies on a probabilistic interpretation of trust based on random graphs with a modified spring-embedding algorithm. Our algorithm correctly classifies hidden trust edges as positive or negative with high accuracy. These results are useful in a wide range of social web applications where trust is important to user behavior and satisfaction.", "title": "" }, { "docid": "pos:1840039_1", "text": "Social media is a place where users present themselves to the world, revealing personal details and insights into their lives. We are beginning to understand how some of this information can be utilized to improve the users' experiences with interfaces and with one another. In this paper, we are interested in the personality of users. Personality has been shown to be relevant to many types of interactions, it has been shown to be useful in predicting job satisfaction, professional and romantic relationship success, and even preference for different interfaces. Until now, to accurately gauge users' personalities, they needed to take a personality test. This made it impractical to use personality analysis in many social media domains. In this paper, we present a method by which a user's personality can be accurately predicted through the publicly available information on their Twitter profile. We will describe the type of data collected, our methods of analysis, and the machine learning techniques that allow us to successfully predict personality. We then discuss the implications this has for social media design, interface design, and broader domains.", "title": "" } ]
[ { "docid": "neg:1840039_0", "text": "Social networking sites (SNS) are especially attractive for adolescents, but it has also been shown that these users can suffer from negative psychological consequences when using these sites excessively. We analyze the role of fear of missing out (FOMO) and intensity of SNS use for explaining the link between psychopathological symptoms and negative consequences of SNS use via mobile devices. In an online survey, 1468 Spanish-speaking Latin-American social media users between 16 and 18 years old completed the Hospital Anxiety and Depression Scale (HADS), the Social Networking Intensity scale (SNI), the FOMO scale (FOMOs), and a questionnaire on negative consequences of using SNS via mobile device (CERM). Using structural equation modeling, it was found that both FOMO and SNI mediate the link between psychopathology and CERM, but by different mechanisms. Additionally, for girls, feeling depressed seems to trigger higher SNS involvement. For boys, anxiety triggers higher SNS involvement.", "title": "" }, { "docid": "neg:1840039_1", "text": "Globally modeling and reasoning over relations between regions can be beneficial for many computer vision tasks on both images and videos. Convolutional Neural Networks (CNNs) excel at modeling local relations by convolution operations, but they are typically inefficient at capturing global relations between distant regions and require stacking multiple convolution layers. In this work, we propose a new approach for reasoning globally in which a set of features are globally aggregated over the coordinate space and then projected to an interaction space where relational reasoning can be efficiently computed. After reasoning, relation-aware features are distributed back to the original coordinate space for down-stream tasks. We further present a highly efficient instantiation of the proposed approach and introduce the Global Reasoning unit (GloRe unit) that implements the coordinate-interaction space mapping by weighted global pooling and weighted broadcasting, and the relation reasoning via graph convolution on a small graph in interaction space. The proposed GloRe unit is lightweight, end-to-end trainable and can be easily plugged into existing CNNs for a wide range of tasks. Extensive experiments show our GloRe unit can consistently boost the performance of state-of-the-art backbone architectures, including ResNet [15, 16], ResNeXt [33], SE-Net [18] and DPN [9], for both 2D and 3D CNNs, on image classification, semantic segmentation and video action recognition task.", "title": "" }, { "docid": "neg:1840039_2", "text": "This paper proposes an ontology-based approach to analyzing and assessing the security posture for software products. It provides measurements of trust for a software product based on its security requirements and evidence of assurance, which are retrieved from an ontology built for vulnerability management. Our approach differentiates with the previous work in the following aspects: (1) It is a holistic approach emphasizing that the system assurance cannot be determined or explained by its component assurance alone. Instead, the software system as a whole determines its assurance level. (2) Our approach is based on widely accepted standards such as CVSS, CVE, CWE, CPE, and CAPEC. Our ontology integrated these standards seamlessly thus provides a solid foundation for security assessment. 
(3) Automated tools have been built to support our approach, delivering the environmental scores for software products.", "title": "" }, { "docid": "neg:1840039_3", "text": "Considerable research has been devoted to utilizing multimodal features for better understanding multimedia data. However, two core research issues have not yet been adequately addressed. First, given a set of features extracted from multiple media sources (e.g., extracted from the visual, audio, and caption track of videos), how do we determine the best modalities? Second, once a set of modalities has been identified, how do we best fuse them to map to semantics? In this paper, we propose a two-step approach. The first step finds <i>statistically independent modalities</i> from raw features. In the second step, we use <i>super-kernel fusion</i> to determine the optimal combination of individual modalities. We carefully analyze the tradeoffs between three design factors that affect fusion performance: <i>modality independence</i>, <i>curse of dimensionality</i>, and <i>fusion-model complexity</i>. Through analytical and empirical studies, we demonstrate that our two-step approach, which achieves a careful balance of the three design factors, can improve class-prediction accuracy over traditional techniques.", "title": "" }, { "docid": "neg:1840039_4", "text": "Fuzzy logic controllers have gained popularity in the past few decades with highly successful implementation in many fields. Fuzzy logic enables designers to control complex systems more effectively than traditional methods. Teaching students fuzzy logic in a laboratory can be a time-consuming and an expensive task. This paper presents a low-cost educational microcontroller-based tool for fuzzy logic controlled line following mobile robot. The robot is used in the second year of undergraduate teaching in an elective course in the department of computer engineering of the Near East University. Hardware details of the robot and the software implementing the fuzzy logic control algorithm are given in the paper. 2009 Wiley Periodicals, Inc. Comput Appl Eng Educ; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20347", "title": "" }, { "docid": "neg:1840039_5", "text": "We present Listen, Attend and Spell (LAS), a neural speech recognizer that transcribes speech utterances directly to characters without pronunciation models, HMMs or other components of traditional speech recognizers. In LAS, the neural network architecture subsumes the acoustic, pronunciation and language models making it not only an end-to-end trained system but an end-to-end model. In contrast to DNN-HMM, CTC and most other models, LAS makes no independence assumptions about the probability distribution of the output character sequences given the acoustic sequence. Our system has two components: a listener and a speller. The listener is a pyramidal recurrent network encoder that accepts filter bank spectra as inputs. The speller is an attention-based recurrent network decoder that emits each character conditioned on all previous characters, and the entire acoustic sequence. On a Google voice search task, LAS achieves a WER of 14.1% without a dictionary or an external language model and 10.3% with language model rescoring over the top 32 beams. 
In comparison, the state-of-the-art CLDNN-HMM model achieves a WER of 8.0% on the same set.", "title": "" }, { "docid": "neg:1840039_6", "text": "The recent outbreak of indie games has popularized volumetric terrains to a new level, although video games have used them for decades. These terrains contain geological data, such as materials or cave systems. To improve the exploration experience and due to the large amount of data needed to construct volumetric terrains, industry uses procedural methods to generate them. However, they use their own methods, which are focused on their specific problem domains, lacking customization features. Besides, the evaluation of the procedural terrain generators remains an open issue in this field since no standard metrics have been established yet. In this paper, we propose a new approach to procedural volumetric terrains. It generates completely customizable volumetric terrains with layered materials and other features (e.g., mineral veins, underground caves, material mixtures and underground material flow). The method allows the designer to specify the characteristics of the terrain using intuitive parameters. Additionally, it uses a specific representation for the terrain based on stacked material structures, reducing memory requirements. To overcome the problem in the evaluation of the generators, we propose a new set of metrics for the generated content.", "title": "" }, { "docid": "neg:1840039_7", "text": "Gallium nitride high-electron mobility transistors (GaN HEMTs) have attractive properties, low on-resistances and fast switching speeds. This paper presents the characteristics of a normally-on GaN HEMT that we fabricated. Further, the circuit operation of a Class-E amplifier is analyzed. Experimental results demonstrate the excellent performance of the gate drive circuit for the normally-on GaN HEMT and the 13.56MHz radio frequency (RF) power amplifier.", "title": "" }, { "docid": "neg:1840039_8", "text": "Secondary nocturnal enuresis accounts for about one quarter of patients with bed-wetting. Although a psychological cause is responsible in some children, various other causes are possible and should be considered. This article reviews the epidemiology, psychological and social impact, causes, investigation, management, and prognosis of secondary nocturnal enuresis.", "title": "" }, { "docid": "neg:1840039_9", "text": "Krill Herd (KH) optimization algorithm was recently proposed based on herding behavior of krill individuals in the nature for solving optimization problems. In this paper, we develop Standard Krill Herd (SKH) algorithm and propose Fuzzy Krill Herd (FKH) optimization algorithm which is able to dynamically adjust the participation amount of exploration and exploitation by looking the progress of solving the problem in each step. In order to evaluate the proposed FKH algorithm, we utilize some standard benchmark functions and also Inventory Control Problem. Experimental results indicate the superiority of our proposed FKH optimization algorithm in comparison with the standard KH optimization algorithm.", "title": "" }, { "docid": "neg:1840039_10", "text": "Organizations around the world have called for the responsible development of nanotechnology. The goals of this approach are to emphasize the importance of considering and controlling the potential adverse impacts of nanotechnology in order to develop its capabilities and benefits. 
A primary area of concern is the potential adverse impact on workers, since they are the first people in society who are exposed to the potential hazards of nanotechnology. Occupational safety and health criteria for defining what constitutes responsible development of nanotechnology are needed. This article presents five criterion actions that should be practiced by decision-makers at the business and societal levels, if nanotechnology is to be developed responsibly. These include (1) anticipate, identify, and track potentially hazardous nanomaterials in the workplace; (2) assess workers' exposures to nanomaterials; (3) assess and communicate hazards and risks to workers; (4) manage occupational safety and health risks; and (5) foster the safe development of nanotechnology and realization of its societal and commercial benefits. All these criteria are necessary for responsible development to occur. Since it is early in the commercialization of nanotechnology, there are still many unknowns and concerns about nanomaterials. Therefore, it is prudent to treat them as potentially hazardous until sufficient toxicology, and exposure data are gathered for nanomaterial-specific hazard and risk assessments. In this emergent period, it is necessary to be clear about the extent of uncertainty and the need for prudent actions.", "title": "" }, { "docid": "neg:1840039_11", "text": "Network Function Virtualization (NFV) is emerging as one of the most innovative concepts in the networking landscape. By migrating network functions from dedicated middleboxes to general purpose computing platforms, NFV can effectively reduce the cost to deploy and to operate large networks. However, in order to achieve its full potential, NFV needs to encompass also the radio access network allowing Mobile Virtual Network Operators to deploy custom resource allocation solutions within their virtual radio nodes. Such requirement raises several challenges in terms of performance isolation and resource provisioning. In this work we formalize the Virtual Network Function (VNF) placement problem for radio access networks as an integer linear programming problem and we propose a VNF placement heuristic. Moreover, we also present a proof-of-concept implementation of an NFV management and orchestration framework for Enterprise WLANs. The proposed architecture builds upon a programmable network fabric where pure forwarding nodes are mixed with radio and packet processing nodes leveraging on general computing platforms.", "title": "" }, { "docid": "neg:1840039_12", "text": "The majority of smart devices used nowadays (e.g., smartphones, laptops, tablets) is capable of both Wi-Fi and Bluetooth wireless communications. Both network interfaces are identified by a unique 48-bits MAC address, assigned during the manufacturing process and unique worldwide. Such addresses, fundamental for link-layer communications and contained in every frame transmitted by the device, can be easily collected through packet sniffing and later used to perform higher level analysis tasks (user tracking, crowd density estimation, etc.). In this work we propose a system to pair the Wi-Fi and Bluetooth MAC addresses belonging to a physical unique device, starting from packets captured through a network of wireless sniffers. We propose several algorithms to perform such a pairing and we evaluate their performance through experiments in a controlled scenario. We show that the proposed algorithms can pair the MAC addresses with good accuracy. 
The findings of this paper may be useful to improve the precision of indoor localization and crowd density estimation systems and open some questions on the privacy issues of Wi-Fi and Bluetooth enabled devices.", "title": "" }, { "docid": "neg:1840039_13", "text": "This paper presents a scalable 28-GHz phased-array architecture suitable for fifth-generation (5G) communication links based on four-channel ( $2\\times 2$ ) transmit/receive (TRX) quad-core chips in SiGe BiCMOS with flip-chip packaging. Each channel of the quad-core beamformer chip has 4.6-dB noise figure (NF) in the receive (RX) mode and 10.5-dBm output 1-dB compression point (OP1dB) in the transmit (TX) mode with 6-bit phase control and 14-dB gain control. The phase change with gain control is only ±3°, allowing orthogonality between the variable gain amplifier and the phase shifter. The chip has high RX linearity (IP1dB = −22 dBm/channel) and consumes 130 mW in the RX mode and 200 mW in the TX mode at P1dB per channel. Advantages of the scalable all-RF beamforming architecture and circuit design techniques are discussed in detail. 4- and 32-element phased-arrays are demonstrated with detailed data link measurements using a single or eight of the four-channel TRX core chips on a low-cost printed circuit board with microstrip antennas. The 32-element array achieves an effective isotropic radiated power (EIRP) of 43 dBm at P1dB, a 45-dBm saturated EIRP, and a record-level system NF of 5.2 dB when the beamformer loss and transceiver NF are taken into account and can scan to ±50° in azimuth and ±25° in elevation with < −12-dB sidelobes and without any phase or amplitude calibration. A wireless link is demonstrated using two 32-element phased-arrays with a state-of-the-art data rate of 1.0–1.6 Gb/s in a single beam using 16-QAM waveforms over all scan angles at a link distance of 300 m.", "title": "" }, { "docid": "neg:1840039_14", "text": "Cancer is second only to heart disease as a cause of death in the US, with a further negative economic impact on society. Over the past decade, details have emerged which suggest that different glycosylphosphatidylinositol (GPI)-anchored proteins are fundamentally involved in a range of cancers. This post-translational glycolipid modification is introduced into proteins via the action of the enzyme GPI transamidase (GPI-T). In 2004, PIG-U, one of the subunits of GPI-T, was identified as an oncogene in bladder cancer, offering a direct connection between GPI-T and cancer. GPI-T is a membrane-bound, multi-subunit enzyme that is poorly understood, due to its structural complexity and membrane solubility. This review is divided into three sections. First, we describe our current understanding of GPI-T, including what is known about each subunit and their roles in the GPI-T reaction. Next, we review the literature connecting GPI-T to different cancers with an emphasis on the variations in GPI-T subunit over-expression. Finally, we discuss some of the GPI-anchored proteins known to be involved in cancer onset and progression and that serve as potential biomarkers for disease-selective therapies. Given that functions for only one of GPI-T's subunits have been robustly assigned, the separation between healthy and malignant GPI-T activity is poorly defined.", "title": "" }, { "docid": "neg:1840039_15", "text": "BACKGROUND\nPossible associations between television viewing and video game playing and children's aggression have become public health concerns. 
We did a systematic review of studies that examined such associations, focussing on children and young people with behavioural and emotional difficulties, who are thought to be more susceptible.\n\n\nMETHODS\nWe did computer-assisted searches of health and social science databases, gateways, publications from relevant organizations and for grey literature; scanned bibliographies; hand-searched key journals; and corresponded with authors. We critically appraised all studies.\n\n\nRESULTS\nA total of 12 studies: three experiments with children with behavioural and emotional difficulties found increased aggression after watching aggressive as opposed to low-aggressive content television programmes, one found the opposite and two no clear effect, one found such children no more likely than controls to imitate aggressive television characters. One case-control study and one survey found that children and young people with behavioural and emotional difficulties watched more television than controls; another did not. Two studies found that children and young people with behavioural and emotional difficulties viewed more hours of aggressive television programmes than controls. One study on video game use found that young people with behavioural and emotional difficulties viewed more minutes of violence and played longer than controls. In a qualitative study children with behavioural and emotional difficulties, but not their parents, did not associate watching television with aggression. All studies had significant methodological flaws. None was based on power calculations.\n\n\nCONCLUSION\nThis systematic review found insufficient, contradictory and methodologically flawed evidence on the association between television viewing and video game playing and aggression in children and young people with behavioural and emotional difficulties. If public health advice is to be evidence-based, good quality research is needed.", "title": "" }, { "docid": "neg:1840039_16", "text": "UNLABELLED\nThe limit of the Colletotrichum gloeosporioides species complex is defined genetically, based on a strongly supported clade within the Colletotrichum ITS gene tree. All taxa accepted within this clade are morphologically more or less typical of the broadly defined C. gloeosporioides, as it has been applied in the literature for the past 50 years. We accept 22 species plus one subspecies within the C. gloeosporioides complex. These include C. asianum, C. cordylinicola, C. fructicola, C. gloeosporioides, C. horii, C. kahawae subsp. kahawae, C. musae, C. nupharicola, C. psidii, C. siamense, C. theobromicola, C. tropicale, and C. xanthorrhoeae, along with the taxa described here as new, C. aenigma, C. aeschynomenes, C. alatae, C. alienum, C. aotearoa, C. clidemiae, C. kahawae subsp. ciggaro, C. salsolae, and C. ti, plus the nom. nov. C. queenslandicum (for C. gloeosporioides var. minus). All of the taxa are defined genetically on the basis of multi-gene phylogenies. Brief morphological descriptions are provided for species where no modern description is available. Many of the species are unable to be reliably distinguished using ITS, the official barcoding gene for fungi. Particularly problematic are a set of species genetically close to C. musae and another set of species genetically close to C. kahawae, referred to here as the Musae clade and the Kahawae clade, respectively. 
Each clade contains several species that are phylogenetically well supported in multi-gene analyses, but within the clades branch lengths are short because of the small number of phylogenetically informative characters, and in a few cases individual gene trees are incongruent. Some single genes or combinations of genes, such as glyceraldehyde-3-phosphate dehydrogenase and glutamine synthetase, can be used to reliably distinguish most taxa and will need to be developed as secondary barcodes for species level identification, which is important because many of these fungi are of biosecurity significance. In addition to the accepted species, notes are provided for names where a possible close relationship with C. gloeosporioides sensu lato has been suggested in the recent literature, along with all subspecific taxa and formae speciales within C. gloeosporioides and its putative teleomorph Glomerella cingulata.\n\n\nTAXONOMIC NOVELTIES\nName replacement - C. queenslandicum B. Weir & P.R. Johnst. New species - C. aenigma B. Weir & P.R. Johnst., C. aeschynomenes B. Weir & P.R. Johnst., C. alatae B. Weir & P.R. Johnst., C. alienum B. Weir & P.R. Johnst, C. aotearoa B. Weir & P.R. Johnst., C. clidemiae B. Weir & P.R. Johnst., C. salsolae B. Weir & P.R. Johnst., C. ti B. Weir & P.R. Johnst. New subspecies - C. kahawae subsp. ciggaro B. Weir & P.R. Johnst. Typification: Epitypification - C. queenslandicum B. Weir & P.R. Johnst.", "title": "" }, { "docid": "neg:1840039_17", "text": "In recent years, the videogame industry has been characterized by a great boost in gesture recognition and motion tracking, following the increasing request of creating immersive game experiences. The Microsoft Kinect sensor allows acquiring RGB, IR and depth images with a high frame rate. Because of the complementary nature of the information provided, it has proved an attractive resource for researchers with very different backgrounds. In summer 2014, Microsoft launched a new generation of Kinect on the market, based on time-of-flight technology. This paper proposes a calibration of Kinect for Xbox One imaging sensors, focusing on the depth camera. The mathematical model that describes the error committed by the sensor as a function of the distance between the sensor itself and the object has been estimated. All the analyses presented here have been conducted for both generations of Kinect, in order to quantify the improvements that characterize every single imaging sensor. Experimental results show that the quality of the delivered model improved applying the proposed calibration procedure, which is applicable to both point clouds and the mesh model created with the Microsoft Fusion Libraries.", "title": "" }, { "docid": "neg:1840039_18", "text": "Aging is very often associated with magnesium (Mg) deficit. Total plasma magnesium concentrations are remarkably constant in healthy subjects throughout life, while total body Mg and Mg in the intracellular compartment tend to decrease with age. Dietary Mg deficiencies are common in the elderly population. Other frequent causes of Mg deficits in the elderly include reduced Mg intestinal absorption, reduced Mg bone stores, and excess urinary loss. Secondary Mg deficit in aging may result from different conditions and diseases often observed in the elderly (i.e. insulin resistance and/or type 2 diabetes mellitus) and drugs (i.e. use of hypermagnesuric diuretics). 
Chronic Mg deficits have been linked to an increased risk of numerous preclinical and clinical outcomes, mostly observed in the elderly population, including hypertension, stroke, atherosclerosis, ischemic heart disease, cardiac arrhythmias, glucose intolerance, insulin resistance, type 2 diabetes mellitus, endothelial dysfunction, vascular remodeling, alterations in lipid metabolism, platelet aggregation/thrombosis, inflammation, oxidative stress, cardiovascular mortality, asthma, chronic fatigue, as well as depression and other neuropsychiatric disorders. Both aging and Mg deficiency have been associated to excessive production of oxygen-derived free radicals and low-grade inflammation. Chronic inflammation and oxidative stress are also present in several age-related diseases, such as many vascular and metabolic conditions, as well as frailty, muscle loss and sarcopenia, and altered immune responses, among others. Mg deficit associated to aging may be at least one of the pathophysiological links that may help to explain the interactions between inflammation and oxidative stress with the aging process and many age-related diseases.", "title": "" }, { "docid": "neg:1840039_19", "text": "Acne is a common inflammatory disease. Scarring is an unwanted end point of acne. Both atrophic and hypertrophic scar types occur. Soft-tissue augmentation aims to improve atrophic scars. In this review, we will focus on the use of dermal fillers for acne scar improvement. Therefore, various filler types are characterized, and available data on their use in acne scar improvement are analyzed.", "title": "" } ]
1840040
The dawn of the liquid biopsy in the fight against cancer
[ { "docid": "pos:1840040_0", "text": "Cancer is associated with mutated genes, and analysis of tumour-linked genetic alterations is increasingly used for diagnostic, prognostic and treatment purposes. The genetic profile of solid tumours is currently obtained from surgical or biopsy specimens; however, the latter procedure cannot always be performed routinely owing to its invasive nature. Information acquired from a single biopsy provides a spatially and temporally limited snap-shot of a tumour and might fail to reflect its heterogeneity. Tumour cells release circulating free DNA (cfDNA) into the blood, but the majority of circulating DNA is often not of cancerous origin, and detection of cancer-associated alleles in the blood has long been impossible to achieve. Technological advances have overcome these restrictions, making it possible to identify both genetic and epigenetic aberrations. A liquid biopsy, or blood sample, can provide the genetic landscape of all cancerous lesions (primary and metastases) as well as offering the opportunity to systematically track genomic evolution. This Review will explore how tumour-associated mutations detectable in the blood can be used in the clinic after diagnosis, including the assessment of prognosis, early detection of disease recurrence, and as surrogates for traditional biopsies with the purpose of predicting response to treatments and the development of acquired resistance.", "title": "" } ]
[ { "docid": "neg:1840040_0", "text": "In this paper, methods are shown how to adapt invertible two-dimensional chaotic maps on a torus or on a square to create new symmetric block encryption schemes. A chaotic map is first generalized by introducing parameters and then discretized to a finite square lattice of points which represent pixels or some other data items. Although the discretized map is a permutation and thus cannot be chaotic, it shares certain properties with its continuous counterpart as long as the number of iterations remains small. The discretized map is further extended to three dimensions and composed with a simple diffusion mechanism. As a result, a symmetric block product encryption scheme is obtained. To encrypt an N × N image, the ciphering map is iteratively applied to the image. The construction of the cipher and its security is explained with the two-dimensional Baker map. It is shown that the permutations induced by the Baker map behave as typical random permutations. Computer simulations indicate that the cipher has good diffusion properties with respect to the plain-text and the key. A nontraditional pseudo-random number generator based on the encryption scheme is described and studied. Examples of some other two-dimensional chaotic maps are given and their suitability for secure encryption is discussed. The paper closes with a brief discussion of a possible relationship between discretized chaos and cryptosystems.", "title": "" }, { "docid": "neg:1840040_1", "text": "BACKGROUND\nSkeletal muscle is key to motor development and represents a major metabolic end organ that aids glycaemic regulation.\n\n\nOBJECTIVES\nTo create gender-specific reference curves for fat-free mass (FFM) and appendicular (limb) skeletal muscle mass (SMMa) in children and adolescents. To examine the muscle-to-fat ratio in relation to body mass index (BMI) for age and gender.\n\n\nMETHODS\nBody composition was measured by segmental bioelectrical impedance (BIA, Tanita BC418) in 1985 Caucasian children aged 5-18.8 years. Skeletal muscle mass data from the four limbs were used to derive smoothed centile curves and the muscle-to-fat ratio.\n\n\nRESULTS\nThe centile curves illustrate the developmental patterns of %FFM and SMMa. While the %FFM curves differ markedly between boys and girls, the SMMa (kg), %SMMa and %SMMa/FFM show some similarities in shape and variance, together with some gender-specific characteristics. Existing BMI curves do not reveal these gender differences. Muscle-to-fat ratio showed a very wide range with means differing between boys and girls and across fifths of BMI z-score.\n\n\nCONCLUSIONS\nBIA assessment of %FFM and SMMa represents a significant advance in nutritional assessment since these body composition components are associated with metabolic health. Muscle-to-fat ratio has the potential to provide a better index of future metabolic health.", "title": "" }, { "docid": "neg:1840040_2", "text": "Asphalt pavement distresses have significant importance in roads and highways. This paper addresses the detection and localization of one of the key pavement distresses, the potholes using computer vision. Different kinds of pothole and non-pothole images from asphalt pavement are considered for experimentation. Considering the appearance-shape based nature of the potholes, Histograms of oriented gradients (HOG) features are computed for the input images. 
Features are trained and classified using Naïve Bayes classifier resulting in labeling of the input as pothole or non-pothole image. To locate the pothole in the detected pothole images, normalized graph cut segmentation scheme is employed. Proposed scheme is tested on a dataset having broad range of pavement images. Experimentation results showed 90 % accuracy for the detection of pothole images and high recall for the localization of pothole in the detected images.", "title": "" }, { "docid": "neg:1840040_3", "text": "Segmentation of novel or dynamic objects in a scene, often referred to as “background subtraction” or “foreground segmentation”, is a critical early in step in most computer vision applications in domains such as surveillance and human-computer interaction. All previously described, real-time methods fail to handle properly one or more common phenomena, such as global illumination changes, shadows, inter-reflections, similarity of foreground color to background, and non-static backgrounds (e.g. active video displays or trees waving in the wind). The recent advent of hardware and software for real-time computation of depth imagery makes better approaches possible. We propose a method for modeling the background that uses per-pixel, time-adaptive, Gaussian mixtures in the combined input space of depth and luminance-invariant color. This combination in itself is novel, but we further improve it by introducing the ideas of 1) modulating the background model learning rate based on scene activity, and 2) making colorbased segmentation criteria dependent on depth observations. Our experiments show that the method possesses much greater robustness to problematic phenomena than the prior state-of-the-art, without sacrificing real-time performance, making it well-suited for a wide range of practical applications in video event detection and recognition.", "title": "" }, { "docid": "neg:1840040_4", "text": "As a newly developing academic domain, researches on Mobile learning are still in their initial stage. Meanwhile, M-blackboard comes from Mobile learning. This study attempts to discover the factors impacting the intention to adopt mobile blackboard. Eleven selected model on the Mobile learning adoption were comprehensively reviewed. From the reviewed articles, the most factors are identified. Also, from the frequency analysis, the most frequent factors in the Mobile blackboard or Mobile learning adoption studies are performance expectancy, effort expectancy, perceived playfulness, facilitating conditions, self-management, cost and past experiences. The descriptive statistic was performed to gather the respondents’ demographic information. It also shows that the respondents agreed on nearly every statement item. Pearson correlation and regression analysis were also conducted.", "title": "" }, { "docid": "neg:1840040_5", "text": "Today, among other challenges, teaching students how to write computer programs for the first time can be an important criterion for whether students in computing will remain in their program of study, i.e. Computer Science or Information Technology. Not learning to program a computer as a computer scientist or information technologist can be compared to a mathematician not learning algebra. For a mathematician this would be an extremely limiting situation. For a computer scientist, not learning to program imposes a similar severe limitation on the budding computer scientist. 
Therefore it is not a question as to whether programming should be taught rather it is a question of how to maximize aspects of teaching programming so that students are less likely to be discouraged when learning to program. Different criteria have been used to select first programming languages. Computer scientists have attempted to establish criteria for selecting the first programming language to teach a student. This paper examines the criteria used to select first programming languages and the issues that novices face when learning to program in an effort to create a more comprehensive model for selecting first programming languages.", "title": "" }, { "docid": "neg:1840040_6", "text": "Recurrent neural networks have recently shown significant potential in different language applications, ranging from natural language processing to language modelling. This paper introduces a research effort to use such networks to develop and evaluate natural language acquisition on a humanoid robot. Here, the problem is twofold. First, the focus will be put on using the gesture-word combination stage observed in infants to transition from single to multi-word utterances. Secondly, research will be carried out in the domain of connecting action learning with language learning. In the former, the long-short term memory architecture will be implemented, whilst in the latter multiple time-scale recurrent neural networks will be used. This will allow for comparison between the two architectures, whilst highlighting the strengths and shortcomings of both with respect to the language learning problem. Here, the main research efforts, challenges and expected outcomes are described.", "title": "" }, { "docid": "neg:1840040_7", "text": "Sample entropy (SampEn) has been proposed as a method to overcome limitations associated with approximate entropy (ApEn). The initial paper describing the SampEn metric included a characterization study comparing both ApEn and SampEn against theoretical results and concluded that SampEn is both more consistent and agrees more closely with theory for known random processes than ApEn. SampEn has been used in several studies to analyze the regularity of clinical and experimental time series. However, questions regarding how to interpret SampEn in certain clinical situations and its relationship to classical signal parameters remain unanswered. In this paper we report the results of a characterization study intended to provide additional insights regarding the interpretability of SampEn in the context of biomedical signal analysis.", "title": "" }, { "docid": "neg:1840040_8", "text": "This paper investigates the possibility of communicating through vibrations. By modulating the vibration motors available in all mobile phones, and decoding them through accelerometers, we aim to communicate small packets of information. Of course, this will not match the bit rates available through RF modalities, such as NFC or Bluetooth, which utilize a much larger bandwidth. However, where security is vital, vibratory communication may offer advantages. We develop Ripple, a system that achieves up to 200 bits/s of secure transmission using off-the-shelf vibration motor chips, and 80 bits/s on Android smartphones. This is an outcome of designing and integrating a range of techniques, including multicarrier modulation, orthogonal vibration division, vibration braking, side-channel jamming, etc. 
Not all these techniques are novel; some are borrowed and suitably modified for our purposes, while others are unique to this relatively new platform of vibratory communication.", "title": "" }, { "docid": "neg:1840040_9", "text": "This paper discusses the relationship between concepts of narrative, patterns of interaction within computer games constituting gameplay gestalts, and the relationship between narrative and the gameplay gestalt. The repetitive patterning involved in gameplay gestalt formation is found to undermine deep narrative immersion. The creation of stronger forms of interactive narrative in games requires the resolution of this conflict. The paper goes on to describe the Purgatory Engine, a game engine based upon more fundamentally dramatic forms of gameplay and interaction, supporting a new game genre referred to as the first-person actor. The first-person actor does not involve a repetitive gestalt mode of gameplay, but defines gameplay in terms of character development and dramatic interaction.", "title": "" }, { "docid": "neg:1840040_10", "text": "Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods.", "title": "" }, { "docid": "neg:1840040_11", "text": "Little is known about the perception of artificial spatial hearing by hearing-impaired subjects. The purpose of this study was to investigate how listeners with hearing disorders perceived the effect of a spatialization feature designed for wireless microphone systems. Forty listeners took part in the experiments. They were arranged in four groups: normal-hearing, moderate, severe, and profound hearing loss. Their performance in terms of speech understanding and speaker localization was assessed with diotic and binaural stimuli. The results of the speech intelligibility experiment revealed that the subjects presenting a moderate or severe hearing impairment better understood speech with the spatialization feature. Thus, it was demonstrated that the conventional diotic binaural summation operated by current wireless systems can be transformed to reproduce the spatial cues required to localize the speaker, without any loss of intelligibility. The speaker localization experiment showed that a majority of the hearing-impaired listeners had similar performance with natural and artificial spatial hearing, contrary to the normal-hearing listeners. 
This suggests that certain subjects with hearing impairment preserve their localization abilities with approximated generic head-related transfer functions in the frontal horizontal plane.", "title": "" }, { "docid": "neg:1840040_12", "text": "This paper demonstrates that it is possible to construct the Stochastic flash ADC using standard digital cells. In order to minimize the analog circuit requirements which cost high, it is appropriate to begin the architecture with highly digital. The proposed Stochastic flash ADC uses a random comparator offset to set the trip points. Since the comparator are no longer sized for small offset, they can be shrunk down into digital cells. Using comparators that are implemented as digital cells produces a large variation of comparator offset. Typically, this is considered a disadvantage, but in our case, this large standard deviation of offset is used to set the input signal range. By designing an ADC that is made up entirely of digital cells, it is natural candidate for a synthesizable ADC. The analog comparator which is used in this ADC is constructed from standard digital NAND gates connected with SR latch to minimize the memory effects. A Wallace tree adder is used to sum the total number of comparator output, since the order of comparator output is random. Thus, all the components including the comparator and Wallace tree adder can be implemented using standard digital cells. As CMOS designs are scaled to smaller technology nodes, many benefits arise, as well as challenges. There are benefits in speed and power due to decreased capacitance and lower supply voltage, yet reduction in intrinsic device gain and lower supply voltage make it difficult to migrate previous analog designs to smaller scaled processes. Moreover, as scaling trends continue, the analog portion of a mixed-signal system tends to consume proportionally more power and area and have a higher design cost than the digital counterpart. This tends to increase the overall design cost of the mixed-signal design. Automatically synthesized digital circuits get all the benefits of scaling, but analog circuits get these benefits at a large cost. The most essential component of ADC is the comparator, which translates from the analog world to digital world. Since comparator defines the boundary between analog and digital realms, the flash ADC architecture will be considered, as it places the comparator as close to the analog input signal. Flash ADCs use a reference ladder to generate the comparator trip points that correspond to each digital code. Typically the references are either generated by a resistor ladder or some form of analog interpolation, but the effect is the same: a …", "title": "" }, { "docid": "neg:1840040_13", "text": "Traditionally, many clinicians tend to forego esthetic considerations when full-coverage restorations are indicated for pediatric patients with primary dentitions. However, the availability of new zirconia pediatric crowns and reliable techniques for cementation makes esthetic outcomes practical and consistent when restoring primary dentition. Two cases are described: a 3-year-old boy who presented with severe early childhood caries affecting both anterior and posterior teeth, and a 6-year-old boy who presented with extensive caries of his primary posterior dentition, including a molar requiring full coverage. The parents of both boys were concerned about esthetics, and the extent of decay indicated the need for full-coverage restorations. 
This led to the boys receiving treatment using a restorative procedure in which the carious teeth were prepared for and restored with esthetic tooth-colored zirconia crowns. In both cases, comfortable function and pleasing esthetics were achieved.", "title": "" }, { "docid": "neg:1840040_14", "text": "Tumour-associated viruses produce antigens that, on the face of it, are ideal targets for immunotherapy. Unfortunately, these viruses are experts at avoiding or subverting the host immune response. Cervical-cancer-associated human papillomavirus (HPV) has a battery of immune-evasion mechanisms at its disposal that could confound attempts at HPV-directed immunotherapy. Other virally associated human cancers might prove similarly refractive to immuno-intervention unless we learn how to circumvent their strategies for immune evasion.", "title": "" }, { "docid": "neg:1840040_15", "text": "EEG desynchronization is a reliable correlate of excited neural structures of activated cortical areas. EEG synchronization within the alpha band may be an electrophysiological correlate of deactivated cortical areas. Such areas are not processing sensory information or motor output and can be considered to be in an idling state. One example of such an idling cortical area is the enhancement of mu rhythms in the primary hand area during visual processing or during foot movement. In both circumstances, the neurons in the hand area are not needed for visual processing or preparation for foot movement. As a result of this, an enhanced hand area mu rhythm can be observed.", "title": "" }, { "docid": "neg:1840040_16", "text": "Using a common set of attributes to determine which methodology to use in a particular data warehousing project.", "title": "" }, { "docid": "neg:1840040_17", "text": "The introduction of convolutional layers greatly advanced the performance of neural networks on image tasks due to innately capturing a way of encoding and learning translation-invariant operations, matching one of the underlying symmetries of the image domain. In comparison, there are a number of problems in which there are a number of different inputs which are all ’of the same type’ — multiple particles, multiple agents, multiple stock prices, etc. The corresponding symmetry to this is permutation symmetry, in that the algorithm should not depend on the specific ordering of the input data. We discuss a permutation-invariant neural network layer in analogy to convolutional layers, and show the ability of this architecture to learn to predict the motion of a variable number of interacting hard discs in 2D. In the same way that convolutional layers can generalize to different image sizes, the permutation layer we describe generalizes to different numbers of objects.", "title": "" }, { "docid": "neg:1840040_18", "text": "Three-phase dc/dc converters have the superior characteristics including lower current rating of switches, the reduced output filter requirement, and effective utilization of transformers. To further reduce the voltage stress on switches, three-phase three-level (TPTL) dc/dc converters have been investigated recently; however, numerous active power switches result in a complicated configuration in the available topologies. Therefore, a novel TPTL dc/dc converter adopting a symmetrical duty cycle control is proposed in this paper. Compared with the available TPTL converters, the proposed converter has fewer switches and simpler configuration. 
The voltage stress on all switches can be reduced to the half of the input voltage. Meanwhile, the ripple frequency of output current can be increased significantly, resulting in a reduced filter requirement. Experimental results from a 540-660-V input and 48-V/20-A output are presented to verify the theoretical analysis and the performance of the proposed converter.", "title": "" }, { "docid": "neg:1840040_19", "text": "Speech is a common and effective way of communication between humans, and modern consumer devices such as smartphones and home hubs are equipped with deep learning based accurate automatic speech recognition to enable natural interaction between humans and machines. Recently, researchers have demonstrated powerful attacks against machine learning models that can fool them to produce incorrect results. However, nearly all previous research in adversarial attacks has focused on image recognition and object detection models. In this short paper, we present a first of its kind demonstration of adversarial attacks against speech classification model. Our algorithm performs targeted attacks with 87% success by adding small background noise without having to know the underlying model parameter and architecture. Our attack only changes the least significant bits of a subset of audio clip samples, and the noise does not change 89% the human listener’s perception of the audio clip as evaluated in our human study.", "title": "" } ]
1840041
How to Make a Digital Currency on a Blockchain Stable
[ { "docid": "pos:1840041_0", "text": "Blockchain is a distributed timestamp server technology introduced for realization of Bitcoin, a digital cash system. It has been attracting much attention especially in the areas of financial and legal applications. But such applications would fail if they are designed without knowledge of the fundamental differences in blockchain from existing technology. We show that blockchain is a probabilistic state machine in which participants can never commit on decisions, we also show that this probabilistic nature is necessarily deduced from the condition where the number of participants remains unknown. This work provides useful abstractions to think about blockchain, and raises discussion for promoting the better use of the technology.", "title": "" } ]
[ { "docid": "neg:1840041_0", "text": "The problem of efficiently finding the best match for a query in a given set with respect to the Euclidean distance or the cosine similarity has been extensively studied. However, the closely related problem of efficiently finding the best match with respect to the inner-product has never been explored in the general setting to the best of our knowledge. In this paper we consider this problem and contrast it with the previous problems considered. First, we propose a general branch-and-bound algorithm based on a (single) tree data structure. Subsequently, we present a dual-tree algorithm for the case where there are multiple queries. Our proposed branch-and-bound algorithms are based on novel inner-product bounds. Finally we present a new data structure, the cone tree, for increasing the efficiency of the dual-tree algorithm. We evaluate our proposed algorithms on a variety of data sets from various applications, and exhibit up to five orders of magnitude improvement in query time over the naive search technique in some cases.", "title": "" }, { "docid": "neg:1840041_1", "text": "This paper proposes a distributed discrete-time algorithm to solve an additive cost optimization problem over undirected deterministic or time-varying graphs. Different from most previous methods that require to exchange exact states between nodes, each node in our algorithm needs only the sign of the relative state between its neighbors, which is clearly one bit of information. Our analysis is based on optimization theory rather than Lyapunov theory or algebraic graph theory. The latter is commonly used in existing literature, especially in the continuous-time algorithm design, and is difficult to apply in our case. Besides, an optimization-theory-based analysis may make our results more extendible. In particular, our convergence proofs are based on the convergences of the subgradient method and the stochastic subgradient method. Moreover, the convergence rate of our algorithm can vary from $O(1/\\ln(k))$ to $O(1/\\sqrt{k})$, depending on the choice of the stepsize. A quantile regression problem is included to illustrate the performance of our algorithm using simulations.", "title": "" }, { "docid": "neg:1840041_2", "text": "BACKGROUND\nTreatments for alopecia are in high demand, but not all are safe and reliable. Dalteparin and protamine microparticles (D/P MPs) can effectively carry growth factors (GFs) in platelet-rich plasma (PRP).\n\n\nOBJECTIVE\nTo identify the effects of PRP-containing D/P MPs (PRP&D/P MPs) on hair growth.\n\n\nMETHODS & MATERIALS\nParticipants were 26 volunteers with thin hair who received five local treatments of 3 mL of PRP&D/P MPs (13 participants) or PRP and saline (control, 13 participants) at 2- to 3-week intervals and were evaluated for 12 weeks. Injected areas comprised frontal or parietal sites with lanugo-like hair. Experimental and control areas were photographed. Consenting participants underwent biopsies for histologic examination.\n\n\nRESULTS\nD/P MPs bind to various GFs contained in PRP. Significant differences were seen in hair cross-section but not in hair numbers in PRP and PRP&D/P MP injections. The addition of D/P MPs to PRP resulted in significant stimulation in hair cross-section. Microscopic findings showed thickened epithelium, proliferation of collagen fibers and fibroblasts, and increased vessels around follicles.\n\n\nCONCLUSION\nPRP&D/P MPs and PRP facilitated hair growth but D/P MPs provided additional hair growth. 
The authors have indicated no significant interest with commercial supporters.", "title": "" }, { "docid": "neg:1840041_3", "text": "To examine the pattern of injuries in cases of fatal shark attack in South Australian waters, the authors examined the files of their institution for all cases of shark attack in which full autopsies had been performed over the past 25 years, from 1974 to 1998. Of the seven deaths attributed to shark attack during this period, full autopsies were performed in only two cases. In the remaining five cases, bodies either had not been found or were incomplete. Case 1 was a 27-year-old male surfer who had been attacked by a shark. At autopsy, the main areas of injury involved the right thigh, which displayed characteristic teeth marks, extensive soft tissue damage, and incision of the femoral artery. There were also incised wounds of the right wrist. Bony injury was minimal, and no shark teeth were recovered. Case 2 was a 26-year-old male diver who had been attacked by a shark. At autopsy, the main areas of injury involved the left thigh and lower leg, which displayed characteristic teeth marks, extensive soft tissue damage, and incised wounds of the femoral artery and vein. There was also soft tissue trauma to the left wrist, with transection of the radial artery and vein. Bony injury was minimal, and no shark teeth were recovered. In both cases, death resulted from exsanguination following a similar pattern of soft tissue and vascular damage to a leg and arm. This type of injury is in keeping with predator attack from underneath or behind, with the most severe injuries involving one leg. Less severe injuries to the arms may have occurred during the ensuing struggle. Reconstruction of the damaged limb in case 2 by sewing together skin, soft tissue, and muscle bundles not only revealed that no soft tissue was missing but also gave a clearer picture of the pattern of teeth marks, direction of the attack, and species of predator.", "title": "" }, { "docid": "neg:1840041_4", "text": "Abnormal condition in a power system generally leads to a fall in system frequency, and it leads to system blackout in an extreme condition. This paper presents a technique to develop an auto load shedding and islanding scheme for a power system to prevent blackout and to stabilize the system under any abnormal condition. The technique proposes the sequence and conditions of the applications of different load shedding schemes and islanding strategies. It is developed based on the international current practices. It is applied to the Bangladesh Power System (BPS), and an auto load-shedding and islanding scheme is developed. The effectiveness of the developed scheme is investigated simulating different abnormal conditions in BPS.", "title": "" }, { "docid": "neg:1840041_5", "text": "In this paper we present our approach of solving the PAN 2016 Author Profiling Task. It involves classifying users’ gender and age using social media posts. We used SVM classifiers and neural networks on TF-IDF and verbosity features. Results showed that SVM classifiers are better for English datasets and neural networks perform better for Dutch and Spanish datasets.", "title": "" }, { "docid": "neg:1840041_6", "text": "Interpreting predictions from tree ensemble methods such as gradient boosting machines and random forests is important, yet feature attribution for trees is often heuristic and not individualized for each prediction. 
Here we show that popular feature attribution methods are inconsistent, meaning they can lower a feature’s assigned importance when the true impact of that feature actually increases. This is a fundamental problem that casts doubt on any comparison between features. To address it we turn to recent applications of game theory and develop fast exact tree solutions for SHAP (SHapley Additive exPlanation) values, which are the unique consistent and locally accurate attribution values. We then extend SHAP values to interaction effects and define SHAP interaction values. We propose a rich visualization of individualized feature attributions that improves over classic attribution summaries and partial dependence plots, and a unique “supervised” clustering (clustering based on feature attributions). We demonstrate better agreement with human intuition through a user study, exponential improvements in run time, improved clustering performance, and better identification of influential features. An implementation of our algorithm has also been merged into XGBoost and LightGBM, see http://github.com/slundberg/shap for details. ACM Reference Format: Scott M. Lundberg, Gabriel G. Erion, and Su-In Lee. 2018. Consistent Individualized Feature Attribution for Tree Ensembles. In Proceedings of ACM (KDD’18). ACM, New York, NY, USA, 9 pages. https://doi.org/none", "title": "" }, { "docid": "neg:1840041_7", "text": "In this paper, a novel segmentation and recognition approach to automatically extract street lighting poles from mobile LiDAR data is proposed. First, points on or around the ground are extracted and removed through a piecewise elevation histogram segmentation method. Then, a new graph-cut-based segmentation method is introduced to extract the street lighting poles from each cluster obtained through a Euclidean distance clustering algorithm. In addition to the spatial information, the street lighting pole's shape and the point's intensity information are also considered to formulate the energy function. Finally, a Gaussian-mixture-model-based method is introduced to recognize the street lighting poles from the candidate clusters. The proposed approach is tested on several point clouds collected by different mobile LiDAR systems. Experimental results show that the proposed method is robust to noises and achieves an overall performance of 90% in terms of true positive rate.", "title": "" }, { "docid": "neg:1840041_8", "text": "In this paper, a double L-slot microstrip patch antenna array using Coplanar waveguide feed for Wireless Local Area Network (WLAN) and Worldwide Interoperability for Microwave Access (WiMAX) frequency bands are presented. The proposed antenna is fabricated on Aluminum Nitride Ceramic substrate with dielectric constant 8.8 and thickness of 1.5mm. The key feature of this substrate is that it can withstand in high temperature. The return loss is about -31dB at the operating frequency of 3.6GHz with 50Ω input impedance. The basic parameters of the proposed antenna such as return loss, VSWR, and radiation pattern are simulated using Ansoft HFSS. Simulation results of antenna parameters of single patch and double patch antenna array are analyzed and presented.", "title": "" }, { "docid": "neg:1840041_9", "text": "Digital painters commonly use a tablet and stylus to drive software like Adobe Photoshop. 
A high quality stylus with 6 degrees of freedom (DOFs: 2D position, pressure, 2D tilt, and 1D rotation) coupled to a virtual brush simulation engine allows skilled users to produce expressive strokes in their own style. However, such devices are difficult for novices to control, and many people draw with less expensive (lower DOF) input devices. This paper presents a data-driven approach for synthesizing the 6D hand gesture data for users of low-quality input devices. Offline, we collect a library of strokes with 6D data created by trained artists. Online, given a query stroke as a series of 2D positions, we synthesize the 4D hand pose data at each sample based on samples from the library that locally match the query. This framework optionally can also modify the stroke trajectory to match characteristic shapes in the style of the library. Our algorithm outputs a 6D trajectory that can be fed into any virtual brush stroke engine to make expressive strokes for novices or users of limited hardware.", "title": "" }, { "docid": "neg:1840041_10", "text": "Electroencephalographic measurements are commonly used in medical and research areas. This review article presents an introduction into EEG measurement. Its purpose is to help with orientation in EEG field and with building basic knowledge for performing EEG recordings. The article is divided into two parts. In the first part, background of the subject, a brief historical overview, and some EEG related research areas are given. The second part explains EEG recording.", "title": "" }, { "docid": "neg:1840041_11", "text": "Deep learning has recently become very popular on account of its incredible success in many complex datadriven applications, including image classification and speech recognition. The database community has worked on data-driven applications for many years, and therefore should be playing a lead role in supporting this new wave. However, databases and deep learning are different in terms of both techniques and applications. In this paper, we discuss research problems at the intersection of the two fields. In particular, we discuss possible improvements for deep learning systems from a database perspective, and analyze database applications that may benefit from deep learning techniques.", "title": "" }, { "docid": "neg:1840041_12", "text": "A new approach to the online classification of streaming data is introduced in this paper. It is based on a self-developing (evolving) fuzzy-rule-based (FRB) classifier system of Takagi-Sugeno ( eTS) type. The proposed approach, called eClass (evolving class ifier), includes different architectures and online learning methods. The family of alternative architectures includes: 1) eClass0, with the classifier consequents representing class label and 2) the newly proposed method for regression over the features using a first-order eTS fuzzy classifier, eClass1. An important property of eClass is that it can start learning ldquofrom scratch.rdquo Not only do the fuzzy rules not need to be prespecified, but neither do the number of classes for eClass (the number may grow, with new class labels being added by the online learning process). In the event that an initial FRB exists, eClass can evolve/develop it further based on the newly arrived data. The proposed approach addresses the practical problems of the classification of streaming data (video, speech, sensory data generated from robotic, advanced industrial applications, financial and retail chain transactions, intruder detection, etc.). 
It has been successfully tested on a number of benchmark problems as well as on data from an intrusion detection data stream to produce a comparison with the established approaches. The results demonstrate that a flexible (with evolving structure) FRB classifier can be generated online from streaming data achieving high classification rates and using limited computational resources.", "title": "" }, { "docid": "neg:1840041_13", "text": "We consider a stochastic bandit problem with infinitely many arms. In this setting, the learner has no chance of trying all the arms even once and has to dedicate its limited number of samples only to a certain number of arms. All previous algorithms for this setting were designed for minimizing the cumulative regret of the learner. In this paper, we propose an algorithm aiming at minimizing the simple regret. As in the cumulative regret setting of infinitely many armed bandits, the rate of the simple regret will depend on a parameter β characterizing the distribution of the near-optimal arms. We prove that depending on β, our algorithm is minimax optimal either up to a multiplicative constant or up to a log(n) factor. We also provide extensions to several important cases: when β is unknown, in a natural setting where the near-optimal arms have a small variance, and in the case of unknown time horizon.", "title": "" }, { "docid": "neg:1840041_14", "text": "Reef-building corals occur as a range of colour morphs because of varying types and concentrations of pigments within the host tissues, but little is known about their physiological or ecological significance. Here, we examined whether specific host pigments act as an alternative mechanism for photoacclimation in the coral holobiont. We used the coral Montipora monasteriata (Forskål 1775) as a case study because it occurs in multiple colour morphs (tan, blue, brown, green and red) within varying light-habitat distributions. We demonstrated that two of the non-fluorescent host pigments are responsive to changes in external irradiance, with some host pigments up-regulating in response to elevated irradiance. This appeared to facilitate the retention of antennal chlorophyll by endosymbionts and hence, photosynthetic capacity. Specifically, net P(max) Chl a(-1) correlated strongly with the concentration of an orange-absorbing non-fluorescent pigment (CP-580). This had major implications for the energetics of bleached blue-pigmented (CP-580) colonies that maintained net P(max) cm(-2) by increasing P(max) Chl a(-1). The data suggested that blue morphs can bleach, decreasing their symbiont populations by an order of magnitude without compromising symbiont or coral health.", "title": "" }, { "docid": "neg:1840041_15", "text": "Ultra low quiescent, wide output current range low-dropout regulators (LDO) are in high demand in portable applications to extend battery lives. This paper presents a 500 nA quiescent, 0 to 100 mA load, 3.5–7 V input to 3 V output LDO in a digital 0.35 μm 2P3M CMOS technology. The challenges in designing with nano-ampere of quiescent current are discussed, namely the leakage, the parasitics, and the excessive DC gain. CMOS super source follower voltage buffer and input excessive gain reduction are then proposed. The LDO is internally compensated using Ahuja method with a minimum phase margin of 55° across all load conditions. The maximum transient voltage variation is less than 150 and 75 mV when used with 1 and 10 μF external capacitor. 
Compared with existing work, this LDO achieves the best transient flgure-of-merit with close to best dynamic current efficiency (maximum-to-quiescent current ratio).", "title": "" }, { "docid": "neg:1840041_16", "text": "OBJECTIVES\nThis article presents a new tool that helps systematic reviewers to extract and compare implementation data across primary trials. Currently, systematic review guidance does not provide guidelines for the identification and extraction of data related to the implementation of the underlying interventions.\n\n\nSTUDY DESIGN AND SETTING\nA team of systematic reviewers used a multistaged consensus development approach to develop this tool. First, a systematic literature search on the implementation and synthesis of clinical trial evidence was performed. The team then met in a series of subcommittees to develop an initial draft index. Drafts were presented at several research conferences and circulated to methodological experts in various health-related disciplines for feedback. The team systematically recorded, discussed, and incorporated all feedback into further revisions. A penultimate draft was discussed at the 2010 Cochrane-Campbell Collaboration Colloquium to finalize its content.\n\n\nRESULTS\nThe Oxford Implementation Index provides a checklist of implementation data to extract from primary trials. Checklist items are organized into four domains: intervention design, actual delivery by trial practitioners, uptake of the intervention by participants, and contextual factors. Systematic reviewers piloting the index at the Cochrane-Campbell Colloquium reported that the index was helpful for the identification of implementation data.\n\n\nCONCLUSION\nThe Oxford Implementation Index provides a framework to help reviewers assess implementation data across trials. Reviewers can use this tool to identify implementation data, extract relevant information, and compare features of implementation across primary trials in a systematic review. The index is a work-in-progress, and future efforts will focus on refining the index, improving usability, and integrating the index with other guidance on systematic reviewing.", "title": "" }, { "docid": "neg:1840041_17", "text": "The theory and construction of the HP-1430A feed-through sampling head are reviewed, and a model for the sampling head is developed from dimensional and electrical measurements in conjunction with electromagnetic, electronic, and network theory. The model was used to predict the sampling-head step response needed for the deconvolution of true input waveforms. The dependence of the sampling-head step response on the sampling diode bias is investigated. Calculations based on the model predict step response transition durations of 27.5 to 30.5 ps for diode reverse bias values of -1.76 to -1.63 V.", "title": "" }, { "docid": "neg:1840041_18", "text": "8.5 Printing 304 8.5.1 Overview 304 8.5.2 Inks and subtractive color calculations 304 8.5.2.1 Density 305 8.5.3 Continuous tone printing 306 8.5.4 Halftoning 307 8.5.4.1 Traditional halftoning 307 8.5.5 Digital halftoning 308 8.5.5.1 Cluster dot dither 310 8.5.5.2 Bayer dither and void and cluster dither 310 8.5.5.3 Error diffusion 311 8.5.5.4 Color digital halftoning 312 8.5.6 Print characterization 313 8.5.6.1 Transduction: the tone reproduction curve 313 8.6", "title": "" }, { "docid": "neg:1840041_19", "text": "A class of binary quasi-cyclic burst error-correcting codes based upon product codes is studied. 
An expression for the maximum burst error-correcting capability for each code in the class is given. In certain cases the codes reduce to Gilbert codes, which are cyclic. Often codes exist in the class which have the same block length and number of check bits as the Gilbert codes but correct longer bursts of errors than Gilbert codes. By shortening the codes, it is possible to design codes which achieve the Reiger bound.", "title": "" } ]
1840042
A new sentence similarity measure and sentence based extractive technique for automatic text summarization
[ { "docid": "pos:1840042_0", "text": "Measuring the semantic similarity between words is an important component in various tasks on the web such as relation extraction, community mining, document clustering, and automatic metadata extraction. Despite the usefulness of semantic similarity measures in these applications, accurately measuring semantic similarity between two words (or entities) remains a challenging task. We propose an empirical method to estimate semantic similarity using page counts and text snippets retrieved from a web search engine for two words. Specifically, we define various word co-occurrence measures using page counts and integrate those with lexical patterns extracted from text snippets. To identify the numerous semantic relations that exist between two given words, we propose a novel pattern extraction algorithm and a pattern clustering algorithm. The optimal combination of page counts-based co-occurrence measures and lexical pattern clusters is learned using support vector machines. The proposed method outperforms various baselines and previously proposed web-based semantic similarity measures on three benchmark data sets showing a high correlation with human ratings. Moreover, the proposed method significantly improves the accuracy in a community mining task.", "title": "" }, { "docid": "pos:1840042_1", "text": "Topic-focused multi-document summarization aims to produce a summary biased to a given topic or user profile. This paper presents a novel extractive approach based on manifold-ranking of sentences to this summarization task. The manifold-ranking process can naturally make full use of both the relationships among all the sentences in the documents and the relationships between the given topic and the sentences. The ranking score is obtained for each sentence in the manifold-ranking process to denote the biased information richness of the sentence. Then the greedy algorithm is employed to impose diversity penalty on each sentence. The summary is produced by choosing the sentences with both high biased information richness and high information novelty. Experiments on DUC2003 and DUC2005 are performed and the ROUGE evaluation results show that the proposed approach can significantly outperform existing approaches of the top performing systems in DUC tasks and baseline approaches.", "title": "" } ]
[ { "docid": "neg:1840042_0", "text": "A three-phase current source gate turn-off (GTO) thyristor rectifier is described with a high power factor, low line current distortion, and a simple main circuit. It adopts pulse-width modulation (PWM) control techniques obtained by analyzing the PWM patterns of three-phase current source rectifiers/inverters, and it uses a method of generating such patterns. In addition, by using an optimum set-up of the circuit constants, the GTO switching frequency is reduced to 500 Hz. This rectifier is suitable for large power conversion, because it can reduce GTO switching loss and its snubber loss.<<ETX>>", "title": "" }, { "docid": "neg:1840042_1", "text": "Find loads of the research methods in the social sciences book catalogues in this site as the choice of you visiting this page. You can also join to the website book library that will show you numerous books from any types. Literature, science, politics, and many more catalogues are presented to offer you the best book to find. The book that really makes you feels satisfied. Or that's the book that will save you from your job deadline.", "title": "" }, { "docid": "neg:1840042_2", "text": "This paper presents a novel integrated approach for efficient optimization based online trajectory planning of topologically distinctive mobile robot trajectories. Online trajectory optimization deforms an initial coarse path generated by a global planner by minimizing objectives such as path length, transition time or control effort. Kinodynamic motion properties of mobile robots and clearance from obstacles impose additional equality and inequality constraints on the trajectory optimization. Local planners account for efficiency by restricting the search space to locally optimal solutions only. However, the objective function is usually non-convex as the presence of obstacles generates multiple distinctive local optima. The proposed method maintains and simultaneously optimizes a subset of admissible candidate trajectories of distinctive topologies and thus seeking the overall best candidate among the set of alternative local solutions. Time-optimal trajectories for differential-drive and carlike robots are obtained efficiently by adopting the Timed-Elastic-Band approach for the underlying trajectory optimization problem. The investigation of various example scenarios and a comparative analysis with conventional local planners confirm the advantages of integrated exploration, maintenance and optimization of topologically distinctive trajectories. ∗Corresponding author Email address: christoph.roesmann@tu-dortmund.de (Christoph Rösmann) Preprint submitted to Robotics and Autonomous Systems November 12, 2016", "title": "" }, { "docid": "neg:1840042_3", "text": "Curcumin, the yellow color pigment of turmeric, is produced industrially from turmeric oleoresin. The mother liquor after isolation of curcumin from oleoresin contains approximately 40% oil. The oil was extracted from the mother liquor using hexane at 60 degrees C, and the hexane extract was separated into three fractions using silica gel column chromatography. These fractions were tested for antibacterial activity by pour plate method against Bacillus cereus, Bacillus coagulans, Bacillus subtilis, Staphylococcus aureus, Escherichia coli, and Pseudomonas aeruginosa. Fraction II eluted with 5% ethyl acetate in hexane was found to be most active fraction. The turmeric oil, fraction I, and fraction II were analyzed by GC and GC-MS. 
ar-Turmerone, turmerone, and curlone were found to be the major compounds present in these fractions along with other oxygenated compounds.", "title": "" }, { "docid": "neg:1840042_4", "text": "GitHub is the most widely used social, distributed version control system. It has around 10 million registered users and hosts over 16 million public repositories. Its user base is also very active as GitHub ranks in the top 100 Alexa most popular websites. In this study, we collect GitHub’s state in its entirety. Doing so, allows us to study new aspects of the ecosystem. Although GitHub is the home to millions of users and repositories, the analysis of users’ activity time-series reveals that only around 10% of them can be considered active. The collected dataset allows us to investigate the popularity of programming languages and existence of pattens in the relations between users, repositories, and programming languages. By, applying a k-means clustering method to the usersrepositories commits matrix, we find that two clear clusters of programming languages separate from the remaining. One cluster forms for “web programming” languages (Java Script, Ruby, PHP, CSS), and a second for “system oriented programming” languages (C, C++, Python). Further classification, allow us to build a phylogenetic tree of the use of programming languages in GitHub. Additionally, we study the main and the auxiliary programming languages of the top 1000 repositories in more detail. We provide a ranking of these auxiliary programming languages using various metrics, such as percentage of lines of code, and PageRank.", "title": "" }, { "docid": "neg:1840042_5", "text": "Triticum aestivum (Wheat grass juice) has high concentrations of chlorophyll, amino acids, minerals, vitamins, and enzymes. Fresh juice has been shown to possess anti-cancer activity, anti-ulcer activity, anti-inflammatory, antioxidant activity, anti-arthritic activity, and blood building activity in Thalassemia. It has been argued that wheat grass helps blood flow, digestion, and general detoxification of the body due to the presence of biologically active compounds and minerals in it and due to its antioxidant potential which is derived from its high content of bioflavonoids such as apigenin, quercitin, luteoline. Furthermore, indole compounds, amely choline, which known for antioxidants and also possess chelating property for iron overload disorders. The presence of 70% chlorophyll, which is almost chemically identical to haemoglobin. The only difference is that the central element in chlorophyll is magnesium and in hemoglobin it is iron. In wheat grass makes it more useful in various clinical conditions involving hemoglobin deficiency and other chronic disorders ultimately considered as green blood.", "title": "" }, { "docid": "neg:1840042_6", "text": "The term ‘vulnerability’ is used in many different ways by various scholarly communities. The resulting disagreement about the appropriate definition of vulnerability is a frequent cause for misunderstanding in interdisciplinary research on climate change and a challenge for attempts to develop formal models of vulnerability. Earlier attempts at reconciling the various conceptualizations of vulnerability were, at best, partly successful. This paper presents a generally applicable conceptual framework of vulnerability that combines a nomenclature of vulnerable situations and a terminology of vulnerability concepts based on the distinction of four fundamental groups of vulnerability factors. 
This conceptual framework is applied to characterize the vulnerability concepts employed by the main schools of vulnerability research and to review earlier attempts at classifying vulnerability concepts. None of these onedimensional classification schemes reflects the diversity of vulnerability concepts identified in this review. The wide range of policy responses available to address the risks from global climate change suggests that climate impact, vulnerability, and adaptation assessments will continue to apply a variety of vulnerability concepts. The framework presented here provides the much-needed conceptual clarity and facilitates bridging the various approaches to researching vulnerability to climate change. r 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840042_7", "text": "To clarify the effects of changing shift schedules from a full-day to a half-day before a night shift, 12 single nurses and 18 married nurses with children that engaged in night shift work in a Japanese hospital were investigated. Subjects worked 2 different shift patterns consisting of a night shift after a half-day shift (HF-N) and a night shift after a day shift (D-N). Physical activity levels were recorded with a physical activity volume meter to measure sleep/wake time more precisely without restricting subjects' activities. The duration of sleep before a night shift of married nurses was significantly shorter than that of single nurses for both shift schedules. Changing shift from the D-N to the HF-N increased the duration of sleep before a night shift for both groups, and made wake-up time earlier for single nurses only. Repeated ANCOVA of the series of physical activities showed significant differences with shift (p < 0.01) and marriage (p < 0.01) for variances, and age (p < 0.05) for a covariance. The paired t-test to compare the effects of changing shift patterns in each subject group and ANCOVA for examining the hourly activity differences between single and married nurses showed that the effects of a change in shift schedules seemed to have less effect on married nurses than single nurses. These differences might due to the differences of their family/home responsibilities.", "title": "" }, { "docid": "neg:1840042_8", "text": "A new computational imaging technique, termed Fourier ptychographic microscopy (FPM), uses a sequence of low-resolution images captured under varied illumination to iteratively converge upon a high-resolution complex sample estimate. Here, we propose a mathematical model of FPM that explicitly connects its operation to conventional ptychography, a common procedure applied to electron and X-ray diffractive imaging. Our mathematical framework demonstrates that under ideal illumination conditions, conventional ptychography and FPM both produce datasets that are mathematically linked by a linear transformation. We hope this finding encourages the future cross-pollination of ideas between two otherwise unconnected experimental imaging procedures. In addition, the coherence state of the illumination source used by each imaging platform is critical to successful operation, yet currently not well understood. We apply our mathematical framework to demonstrate that partial coherence uniquely alters both conventional ptychography's and FPM's captured data, but up to a certain threshold can still lead to accurate resolution-enhanced imaging through appropriate computational post-processing. 
We verify this theoretical finding through simulation and experiment.", "title": "" }, { "docid": "neg:1840042_9", "text": "This paper describes the case of a unilateral agraphic patient (GG) who makes letter substitutions only when writing letters and words with his dominant left hand. Accuracy is significantly greater when he is writing with his right hand and when he is asked to spell words orally. GG also makes case errors when writing letters, and will sometimes write words in mixed case. However, these allograph errors occur regardless of which hand he is using to write. In terms of cognitive models of peripheral dysgraphia (e.g., Ellis, 1988), it appears that he has an allograph level impairment that affects writing with both hands, and a separate problem in accessing graphic motor patterns that disrupts writing with the left hand only. In previous studies of left-handed patients with unilateral agraphia (Zesiger & Mayer, 1992; Zesiger, Pegna, & Rilliet, 1994), it has been suggested that allographic knowledge used for writing with both hands is stored exclusively in the left hemisphere, but that graphic motor patterns are represented separately in each hemisphere. The pattern of performance demonstrated by GG strongly supports such a conclusion.", "title": "" }, { "docid": "neg:1840042_10", "text": "Multi-view learning can provide self-supervision when different views are available of the same data. Distributional hypothesis provides another form of useful self-supervision from adjacent sentences which are plentiful in large unlabelled corpora. Motivated by the asymmetry in the two hemispheres of the human brain as well as the observation that different learning architectures tend to emphasise different aspects of sentence meaning, we present two multi-view frameworks for learning sentence representations in an unsupervised fashion. One framework uses a generative objective and the other a discriminative one. In both frameworks, the final representation is an ensemble of two views, in which, one view encodes the input sentence with a Recurrent Neural Network (RNN), and the other view encodes it with a simple linear model. We show that, after learning, the vectors produced by our multi-view frameworks provide improved representations over their single-view learned counterparts, and the combination of different views gives representational improvement over each view and demonstrates solid transferability on standard downstream tasks.", "title": "" }, { "docid": "neg:1840042_11", "text": "Android packing services provide significant benefits in code protection by hiding original executable code, which help app developers to protect their code against reverse engineering. However, adversaries take the advantage of packers to hide their malicious code. A number of unpacking approaches have been proposed to defend against malicious packed apps. Unfortunately, most of the unpacking approaches work only for a limited time or for a particular type of packers. The analysis for different packers often requires specific domain knowledge and a significant amount of manual effort. In this paper, we conducted analyses of known Android packers appeared in recent years and propose to design an automatic detection and classification framework. The framework is capable of identifying packed apps, extracting the execution behavioral pattern of packers, and categorizing packed apps into groups.
The variants of packer families share typical behavioral patterns reflecting their activities and packing techniques. The behavioral patterns obtained dynamically can be exploited to detect and classify unknown packers, which shed light on new directions for security researchers.", "title": "" }, { "docid": "neg:1840042_12", "text": "This paper describes our initial work in developing a real-time audio-visual Chinese speech synthesizer with a 3D expressive avatar. The avatar model is parameterized according to the MPEG-4 facial animation standard [1]. This standard offers a compact set of facial animation parameters (FAPs) and feature points (FPs) to enable realization of 20 Chinese visemes and 7 facial expressions (i.e. 27 target facial configurations). The Xface [2] open source toolkit enables us to define the influence zone for each FP and the deformation function that relates them. Hence we can easily animate a large number of coordinates in the 3D model by specifying values for a small set of FAPs and their FPs. FAP values for 27 target facial configurations were estimated from available corpora. We extended the dominance blending approach to effect animations for coarticulated visemes superposed with expression changes. We selected six sentiment-carrying text messages and synthesized expressive visual speech (for all expressions, in randomized order) with neutral audio speech. A perceptual experiment involving 11 subjects shows that they can identify the facial expression that matches the text message’s sentiment 85% of the time.", "title": "" }, { "docid": "neg:1840042_13", "text": "A software-defined radar is a versatile radar system, where most of the processing, like signal generation, filtering, up-and down conversion etc. is performed by a software. This paper presents a state of the art of software-defined radar technology. It describes the design concept of software-defined radars and the two possible implementations. A global assessment is presented, and the link with the Cognitive Radar is explained.", "title": "" }, { "docid": "neg:1840042_14", "text": "Over the last few years, the phenomenon of adversarial examples — maliciously constructed inputs that fool trained machine learning models — has captured the attention of the research community, especially when the adversary is restricted to small modifications of a correctly handled input. Less surprisingly, image classifiers also lack human-level performance on randomly corrupted images, such as images with additive Gaussian noise. In this paper we provide both empirical and theoretical evidence that these are two manifestations of the same underlying phenomenon, establishing close connections between the adversarial robustness and corruption robustness research programs. This suggests that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions. Based on our results we recommend that future adversarial defenses consider evaluating the robustness of their methods to distributional shift with benchmarks such as Imagenet-C.", "title": "" }, { "docid": "neg:1840042_15", "text": "Consensus molecular subtypes and the evolution of precision medicine in colorectal cancer Rodrigo Dienstmann, Louis Vermeulen, Justin Guinney, Scott Kopetz, Sabine Tejpar and Josep Tabernero Nature Reviews Cancer 17, 79–92 (2017) In this article a source of grant funding for one of the authors was omitted from the Acknowledgements section. 
The online version of the article has been corrected to include: “The work of R.D. was supported by the Grant for Oncology Innovation under the project ‘Next generation of clinical trials with matched targeted therapies in colorectal cancer’”. C O R R E C T I O N", "title": "" }, { "docid": "neg:1840042_16", "text": "Interactive Evolutionary Computation (IEC) creates the intriguing possibility that a large variety of useful content can be produced quickly and easily for practical computer graphics and gaming applications. To show that IEC can produce such content, this paper applies IEC to particle system effects, which are the de facto method in computer graphics for generating fire, smoke, explosions, electricity, water, and many other special effects. While particle systems are capable of producing a broad array of effects, they require substantial mathematical and programming knowledge to produce. Therefore, efficient particle system generation tools are required for content developers to produce special effects in a timely manner. This paper details the design, representation, and animation of particle systems via two IEC tools called NEAT Particles and NEAT Projectiles. Both tools evolve artificial neural networks (ANN) with the NeuroEvolution of Augmenting Topologies (NEAT) method to control the behavior of particles. NEAT Particles evolves general-purpose particle effects, whereas NEAT Projectiles specializes in evolving particle weapon effects for video games. The primary advantage of this NEAT-based IEC approach is to decouple the creation of new effects from mathematics and programming, enabling content developers without programming knowledge to produce complex effects. Furthermore, it allows content designers to produce a broader range of effects than typical development tools. Finally, it acts as a concept generator, allowing content creators to interactively and efficiently explore the space of possible effects. Both NEAT Particles and NEAT Projectiles demonstrate how IEC can evolve useful content for graphical media and games, and are together a step toward the larger goal of automated content generation.", "title": "" }, { "docid": "neg:1840042_17", "text": "In the new designs of military aircraft and unmanned aircraft there is a clear trend towards increasing demand of electrical power. This fact is mainly due to the replacement of mechanical, pneumatic and hydraulic equipments by partially or completely electrical systems. Generally, use of electrical power onboard is continuously increasing within the areas of communications, surveillance and general systems, such as: radar, cooling, landing gear or actuators systems. To cope with this growing demand for electric power, new levels of voltage (270 VDC), architectures and power electronics devices are being applied to the onboard electrical power distribution systems. The purpose of this paper is to present and describe the technological project HV270DC. In this project, one Electrical Power Distribution System (EPDS), applicable to the more electric aircrafts, has been developed. This system has been integrated by EADS in order to study the benefits and possible problems or risks that affect this kind of power distribution systems, in comparison with conventional distribution systems.", "title": "" }, { "docid": "neg:1840042_18", "text": "This paper presents an air-filled substrate integrated waveguide (AFSIW) filter post-process tuning technique. 
The emerging high-performance AFSIW technology is of high interest for the design of microwave and millimeter-wave substrate integrated systems based on low-cost multilayer printed circuit board (PCB) process. However, to comply with stringent specifications, especially for space, aeronautical and safety applications, a filter post-process tuning technique is desired. AFSIW single pole filter post-process tuning using a capacitive post is theoretically analyzed. It is demonstrated that a tuning of more than 3% of the resonant frequency is achieved at 21 GHz using a 0.3 mm radius post with a 40% insertion ratio. For experimental demonstration, a fourth-order AFSIW band pass filter operating in the 20.88 to 21.11 GHz band is designed and fabricated. Due to fabrication tolerances, it is shown that its performances are not in line with expected results. Using capacitive post tuning, characteristics are improved and agree with optimized results. This post-process tuning can be used for other types of substrate integrated devices.", "title": "" }, { "docid": "neg:1840042_19", "text": "We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. Previous work on machine comprehension (e.g., semantic modeling) has made great strides, but primarily focuses either on limited-domain datasets, or on solving a more restricted goal (e.g., open-domain relation extraction). In contrast, MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task. We present the scalable crowd-sourcing methods that allow us to cheaply construct a dataset of 500 stories and 2000 questions. By screening workers (with grammar tests) and stories (with grading), we have ensured that the data is the same quality as another set that we manually edited, but at one tenth the editing cost. By being open-domain, yet carefully restricted, we hope MCTest will serve to encourage research and provide a clear metric for advancement on the machine comprehension of text. 1 Reading Comprehension A major goal for NLP is for machines to be able to understand text as well as people. Several research disciplines are focused on this problem: for example, information extraction, relation extraction, semantic role labeling, and recognizing textual entailment. Yet these techniques are necessarily evaluated individually, rather than by how much they advance us towards the end goal. On the other hand, the goal of semantic parsing is the machine comprehension of text (MCT), yet its evaluation requires adherence to a specific knowledge representation, and it is currently unclear what the best representation is, for open-domain text. We believe that it is useful to directly tackle the top-level task of MCT. For this, we need a way to measure progress. One common method for evaluating someone’s understanding of text is by giving them a multiple-choice reading comprehension test. This has the advantage that it is objectively gradable (vs. 
essays) yet may test a range of abilities such as causal or counterfactual reasoning, inference among relations, or just basic understanding of the world in which the passage is set. Therefore, we propose a multiple-choice reading comprehension task as a way to evaluate progress on MCT. We have built a reading comprehension dataset containing 500 fictional stories, with 4 multiple choice questions per story. It was built using methods which can easily scale to at least 5000 stories, since the stories were created, and the curation was done, using crowd sourcing almost entirely, at a total of $4.00 per story. We plan to periodically update the dataset to ensure that methods are not overfitting to the existing data. The dataset is open-domain, yet restricted to concepts and words that a 7 year old is expected to understand. This task is still beyond the capability of today’s computers and algorithms.", "title": "" } ]
1840043
Psychopathic Personality: Bridging the Gap Between Scientific Evidence and Public Policy.
[ { "docid": "pos:1840043_0", "text": "Prior studies of childhood aggression have demonstrated that, as a group, boys are more aggressive than girls. We hypothesized that this finding reflects a lack of research on forms of aggression that are relevant to young females rather than an actual gender difference in levels of overall aggressiveness. In the present study, a form of aggression hypothesized to be typical of girls, relational aggression, was assessed with a peer nomination instrument for a sample of 491 third-through sixth-grade children. Overt aggression (i.e., physical and verbal aggression as assessed in past research) and social-psychological adjustment were also assessed. Results provide evidence for the validity and distinctiveness of relational aggression. Further, they indicated that, as predicted, girls were significantly more relationally aggressive than were boys. Results also indicated that relationally aggressive children may be at risk for serious adjustment difficulties (e.g., they were significantly more rejected and reported significantly higher levels of loneliness, depression, and isolation relative to their nonrelationally aggressive peers).", "title": "" } ]
[ { "docid": "neg:1840043_0", "text": "Many people believe that information that is stored in long-term memory is permanent, citing examples of \"retrieval techniques\" that are alleged to uncover previously forgotten information. Such techniques include hypnosis, psychoanalytic procedures, methods for eliciting spontaneous and other conscious recoveries, and—perhaps most important—the electrical stimulation of the brain reported by Wilder Penfield and his associates. In this article we first evaluate • the evidence and conclude that, contrary to apparent popular belief, the evidence in no way confirms the view that all memories are permanent and thus potentially recoverable. We then describe some failures that resulted from attempts to elicit retrieval of previously stored information and conjecture what circumstances might cause information stored in memory to be irrevocably destroyed. Few would deny the existence of a phenomenon called \"forgetting,\" which is evident in the common observation that information becomes less available as the interval increases between the time of the information's initial acquisition and the time of its attempted retrieval. Despite the prevalence of the phenomenon, the factors that underlie forgetting have proved to be rather elusive, and the literature abounds with hypothesized mechanisms to account for the observed data. In this article we shall focus our attention on what is perhaps the fundamental issue concerning forgetting; Does forgetting consist of an actual loss of stored information, or does it result from a loss of access to information, which, once stored, remains forever? It should be noted at the outset that this question may be impossible to resolve in an absolute sense. Consider the following thought experiment. A person (call him Geoffrey) observes some event, say a traffic accident. During the period of observation, a movie camera strapped to Geoffrey's head records the event as Geoffrey experiences it. Some time later, Geoffrey attempts to recall and Vol. 35, No. S, 409-420 describe the event with the aid of some retrieval technique (e.g., hypnosis or brain stimulation), which is alleged to allow recovery of any information stored in his brain. While Geoffrey describes the event, a second person (Elizabeth) watches the movie that has been made of the event. Suppose, now, that Elizabeth is unable to decide whether Geoffrey is describing his memory or the movie—in other words, memory and movie are indistinguishable. Such a finding would constitute rather impressive support for the position held by many people that the mind registers an accurate representation of reality and that this information is stored permanently. But suppose, on the other hand, that Geoffrey's report—even with the aid of the miraculous retrieval technique—is incomplete, sketchy, and inaccurate, and furthermore, suppose that the accuracy of his report deteriorates over time. Such a finding, though consistent with the view that forgetting consists of information loss, would still be inconclusive, because it could be argued that the retrieval technique—no matter what it was— was simply not good enough to disgorge the information, which remained buried somewhere in the recesses of Geoffrey's brain. Thus, the question of information loss versus This article was written while E. Loftus was a fellow at the Center for Advanced Study in the Behavioral Sciences, Stanford, California, and G. Loftus was a visiting scholar in the Department of Psychology at Stanford University. 
James Fries generously picked apart an earlier version of this article. Paul Baltes translated the writings of Johann Nicolas Tetens (177?). The following financial sources are gratefully acknowledged: (a) National Science Foundation (NSF) Grant BNS 76-2337 to G. Loftus; (b) 'NSF Grant ENS 7726856 to E. Loftus; and (c) NSF Grant BNS 76-22943 and an Andrew Mellon Foundation grant to the Center for Advanced Study in the Behavioral Sciences. Requests for reprints should be sent to Elizabeth Loftus, Department of Psychology, University of Washington, Seattle, Washington 98195. AMERICAN PSYCHOLOGIST • MAY 1980 * 409 Copyright 1980 by the American Psychological Association, Inc. 0003-066X/80/3505-0409$00.75 retrieval failure may be unanswerable in principle. Nonetheless it often becomes necessary to choose sides. In the scientific arena, for example, a theorist constructing a model of memory may— depending on the details of the model'—be forced to adopt one position or the other. In fact, several leading theorists have suggested that although loss from short-term memory does occur, once material is registered in long-term memory, the information is never lost from the system, although it may normally be inaccessible (Shiffrin & Atkinson, 1969; Tulving, 1974). The idea is not new, however. Two hundred years earlier, the German philosopher Johann Nicolas Tetens (1777) wrote: \"Each idea does not only leave a trace or a consequent of that trace somewhere in the body, but each of them can be stimulated—-even if it is not possible to demonstrate this in a given situation\" (p, 7S1). He was explicit about his belief that certain ideas may seem to be forgotten, but that actually they are only enveloped by other ideas and, in truth, are \"always with us\" (p, 733). Apart from theoretical interest, the position one takes on the permanence of memory traces has important practical consequences. It therefore makes sense to air the issue from time to time, which is what we shall do here, The purpose of this paper is threefold. We shall first report some data bearing on people's beliefs about the question of information loss versus retrieval failure. To anticipate our findings, our survey revealed that a substantial number of the individuals queried take the position that stored information is permanent'—-or in other words, that all forgetting results from retrieval failure. In support of their answers, people typically cited data from some variant of the thought experiment described above, that is, they described currently available retrieval techniques that are alleged to uncover previously forgotten information. Such techniques include hypnosis, psychoanalytic procedures (e.g., free association), and— most important—the electrical stimulation of the brain reported by Wilder Penfield and his associates (Penfield, 1969; Penfield & Perot, 1963; Penfield & Roberts, 1959). The results of our survey lead to the second purpose of this paper, which is to evaluate this evidence. Finally, we shall describe some interesting failures that have resulted from attempts to elicit retrieval of previously stored information. These failures lend support to the contrary view that some memories are apparently modifiable, and that consequently they are probably unrecoverable. Beliefs About Memory In an informal survey, 169 individuals from various parts of the U.S. were asked to give their views about how memory works. Of these, 75 had formal graduate training in psychology, while the remaining 94 did not. 
The nonpsychologists had varied occupations. For example, lawyers, secretaries, taxicab drivers, physicians, philosophers, fire investigators, and even an 11-year-old child participated. They were given this question: Which of these statements best reflects your view on how human memory works? 1. Everything we learn is permanently stored in the mind, although sometimes particular details are not accessible. With hypnosis, or other special techniques, these inaccessible details could eventually be recovered. 2. Some details that we learn may be permanently lost from memory. Such details would never be able to be recovered by hypnosis, or any other special technique, because these details are simply no longer there. Please elaborate briefly or give any reasons you may have for your view. We found that 84% of the psychologists chose Position 1, that is, they indicated a belief that all information in long-term memory is there, even though much of it cannot be retrieved; 14% chose Position 2, and 2% gave some other answer. A somewhat smaller percentage, 69%, of the nonpsychologists indicated a belief in Position 1; 23% chose Position 2, while 8% did not make a clear choice. What reasons did people give for their belief? The most common reason for choosing Position 1 was based on personal experience and involved the occasional recovery of an idea that the person had not thought about for quite some time. For example, one person wrote: \"I've experienced and heard too many descriptions of spontaneous recoveries of ostensibly quite trivial memories, which seem to have been triggered by just the right set of a person's experiences.\" A second reason for a belief in Position 1, commonly given by persons trained in psychology, was knowledge of the work of Wilder Penfield. One psychologist wrote: \"Even though Statement 1 is untestable, I think that evidence, weak though it is, such as Penfield's work, strongly suggests it may be correct.\" Occasionally respondents offered a comment about hypnosis, and more rarely about psychoanalysis and repression, sodium pentothal, or even reincarnation, to support their belief in the permanence of memory. Admittedly, the survey was informally conducted, the respondents were not selected randomly, and the question itself may have pressured people to take sides when their true belief may have been a position in between. Nevertheless, the results suggest a widespread belief in the permanence of memories and give us some idea of the reasons people offer in support of this belief.", "title": "" }, { "docid": "neg:1840043_1", "text": "We review more than 200 applications of neural networks in image processing and discuss the present and possible future role of neural networks, especially feed-forward neural networks, Kohonen feature maps and Hopfield neural networks. The various applications are categorised into a novel two-dimensional taxonomy for image processing algorithms. One dimension specifies the type of task performed by the algorithm: preprocessing, data reduction/feature extraction, segmentation, object recognition, image understanding and optimisation. The other dimension captures the abstraction level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level, object-set-level and scene characterisation.
Each of the six types of tasks poses specific constraints to a neural-based approach. These specific conditions are discussed in detail. A synthesis is made of unresolved problems related to the application of pattern recognition techniques in image processing and specifically to the application of neural networks. Finally, we present an outlook into the future application of neural networks and relate them to novel developments. © 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840043_2", "text": "This paper proposes state-of-charge (SOC) and remaining charge estimation algorithm of each cell in series-connected lithium-ion batteries. SOC and remaining charge information are indicators for diagnosing cell-to-cell variation; thus, the proposed algorithm can be applied to SOC- or charge-based balancing in cell balancing controller. Compared to voltage-based balancing, SOC and remaining charge information improve the performance of balancing circuit but increase computational complexity which is a stumbling block in implementation. In this work, a simple current sensor-less SOC estimation algorithm with estimated current equalizer is used to achieve aforementioned object. To check the characteristics and validate the feasibility of the proposed method, a constant current discharging/charging profile is applied to a series-connected battery pack (twelve 2.6Ah Li-ion batteries). The experimental results show its applicability to SOC- and remaining charge-based balancing controller with high estimation accuracy.", "title": "" }, { "docid": "neg:1840043_3", "text": "A Large number of digital text information is generated every day. Effectively searching, managing and exploring the text data has become a main task. In this paper, we first represent an introduction to text mining and a probabilistic topic model Latent Dirichlet allocation. Then two experiments are proposed Wikipedia articles and users’ tweets topic modelling. The former one builds up a document topic model, aiming to a topic perspective solution on searching, exploring and recommending articles. The latter one sets up a user topic model, providing a full research and analysis over Twitter users’ interest. The experiment process including data collecting, data pre-processing and model training is fully documented and commented. Further more, the conclusion and application of this paper could be a useful computation tool for social and business research.", "title": "" }, { "docid": "neg:1840043_4", "text": "In high speed ADC, comparator influences the overall performance of ADC directly. This paper describes a very high speed and high resolution preamplifier comparator. The comparator use a self biased differential amp to increase the output current sinking and sourcing capability. The threshold and width of the new comparator can be reduced to the millivolt (mV) range, the resolution and the dynamic characteristics are good. Based on UMC 0.18um CMOS process model, simulated results show the comparator can work under a 25dB gain, 55MHz speed and 210.
210.10 μW power.", "title": "" }, { "docid": "neg:1840043_5", "text": "The body-schema concept is revisited in the context of embodied cognition, further developing the theory formulated by Marc Jeannerod that the motor system is part of a simulation network related to action, whose function is not only to shape the motor system for preparing an action (either overt or covert) but also to provide the self with information on the feasibility and the meaning of potential actions. The proposed computational formulation is based on a dynamical system approach, which is linked to an extension of the equilibrium-point hypothesis, called Passive Motor Paradigm: this dynamical system generates goal-oriented, spatio-temporal, sensorimotor patterns, integrating a direct and inverse internal model in a multi-referential framework. The purpose of such a computational model is to operate at the same time as a general synergy formation machinery for planning whole-body actions in humanoid robots and/or for predicting coordinated sensory-motor patterns in human movements. In order to illustrate the computational approach, the integration of simultaneous, even partially conflicting tasks will be analyzed in some detail with regard to postural-focal dynamics, which can be defined as the fusion of a focal task, namely reaching a target with the whole-body, and a postural task, namely maintaining overall stability.", "title": "" }, { "docid": "neg:1840043_6", "text": "Nowadays, Pneumatic Artificial Muscle (PAM) has become one of the most widely-used fluid-power actuators which yields remarkable muscle-like properties such as high force to weight ratio, soft and flexible structure, minimal compressed-air consumption and low cost. To obtain optimum design and usage, it is necessary to understand the mechanical behaviors of the PAM. In this study, the proposed models are experimentally derived to describe the mechanical behaviors of the PAMs. The experimental results show a non-linear relationship between contraction as well as air pressure within the PAMs and a pulling force of the PAMs. Three different sizes of PAMs available in industry are studied for empirical modeling and simulation. The case studies are presented to verify close agreement of the simulated results with the experimental results when the PAMs perform under various loads. © 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840043_7", "text": "De-noising and extraction of the weak signature are crucial to fault prognostics, in which case features are often very weak and masked by noise. The wavelet transform has been widely used in signal de-noising due to its extraordinary time-frequency representation capability. In this paper, the performance of wavelet decomposition-based de-noising and wavelet filter-based de-noising methods are compared based on signals from mechanical defects. The comparison result reveals that the wavelet filter is more suitable and reliable to detect a weak signature of mechanical impulse-like defect signals, whereas the wavelet decomposition de-noising method can achieve satisfactory results on smooth signal detection. In order to select optimal parameters for the wavelet filter, a two-step optimization process is proposed. Minimal Shannon entropy is used to optimize the Morlet wavelet shape factor.
A periodicity detection method based on singular value decomposition (SVD) is used to choose the appropriate scale for the wavelet transform. The signal de-noising results from both simulated signals and experimental data are presented and both support the proposed method. © 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840043_8", "text": "In many specific laboratories the students use only PLC simulator software, because the hardware equipment is expensive. This paper presents a solution that allows students to study both the hardware and software parts in the laboratory work. The hardware part of the solution consists of an old plotter, an adapter board, a PLC and an HMI. The software part of this solution is represented by the projects of the students, in which they developed applications for programming the PLC and the HMI. This equipment can be made very easily and can be used in university labs by students, so that they can design and test their applications, from low to high complexity [1], [2].", "title": "" }, { "docid": "neg:1840043_9", "text": "A novel reverse-conducting insulated-gate bipolar transistor (RC-IGBT) featuring an oxide trench placed between the n-collector and the p-collector and a floating p-region (p-float) sandwiched between the n-drift and n-collector is proposed. First, the new structure introduces a high-resistance collector short resistor at low current density, which leads to the suppression of the snapback effect. Second, the collector short resistance can be adjusted by varying the p-float length without increasing the collector cell length. Third, the p-float layer also acts as the base of the n-collector/p-float/n-drift transistor, which can be activated and offers a low-resistance current path at high current densities, which contributes to the low on-state voltage of the integrated freewheeling diode and the fast turnoff. As simulations show, the proposed RC-IGBT shows snapback-free output characteristics and faster turnoff compared with the conventional RC-IGBT.", "title": "" }, { "docid": "neg:1840043_10", "text": "This chapter describes the different steps of designing, building, simulating, and testing an intelligent flight control module for an increasingly popular unmanned aerial vehicle (UAV), known as a quadrotor. It presents an in-depth view of the modeling of the kinematics, dynamics, and control of such an interesting UAV. A quadrotor offers a challenging control problem due to its highly unstable nature. An effective control methodology is therefore needed for such a unique airborne vehicle. The chapter starts with a brief overview on the quadrotor's background and its applications, in light of its advantages. Comparisons with other UAVs are made to emphasize the versatile capabilities of this special design. For a better understanding of the vehicle's behavior, the quadrotor's kinematics and dynamics are then detailed. This yields the equations of motion, which are used later as a guideline for developing the proposed intelligent flight control scheme. In this chapter, fuzzy logic is adopted for building the flight controller of the quadrotor.
It has been witnessed that fuzzy logic control offers several advantages over certain types of conventional control methods, specifically in dealing with highly nonlinear systems and modeling uncertainties. Two types of fuzzy inference engines are employed in the design of the flight controller, each of which is explained and evaluated. For testing the designed intelligent flight controller, a simulation environment was first developed. The simulations were made as realistic as possible by incorporating environmental disturbances such as wind gust and the ever-present sensor noise. The proposed controller was then tested on a real test-bed built specifically for this project. Both the simulator and the real quadrotor were later used for conducting different attitude stabilization experiments to evaluate the performance of the proposed control strategy. The controller's performance was also benchmarked against conventional control techniques such as input-output linearization, backstepping and sliding mode control strategies. Conclusions were then drawn based on the conducted experiments and their results.", "title": "" }, { "docid": "neg:1840043_11", "text": "Digitization forces industries to adapt to changing market conditions and consumer behavior. Exponential advances in technology, increased consumer power and sharpened competition imply that companies are facing the menace of commoditization. To sustainably succeed in the market, obsolete business models have to be adapted and new business models can be developed. Differentiation and unique selling propositions through innovation as well as holistic stakeholder engagement help companies to master the transformation. To enable companies and start-ups facing the implications of digital change, a tool was created and designed specifically for this demand: the Business Model Builder. This paper investigates the process of transforming the Business Model Builder into a software-supported digitized version. The digital twin allows companies to simulate the iterative adjustment of business models to constantly changing market conditions as well as customer needs on an ongoing basis. The user can modify individual variables, understand interdependencies and see the impact on the result of the business case, i.e. earnings before interest and taxes (EBIT) or economic value added (EVA). The simulation of a business models accordingly provides the opportunity to generate a dynamic view of the business model where any changes of input variables are considered in the result, the business case. Thus, functionality, feasibility and profitability of a business model can be reviewed, tested and validated in the digital simulation tool.", "title": "" }, { "docid": "neg:1840043_12", "text": "RATIONALE, AIMS AND OBJECTIVES\nTotal quality in coagulation testing is a necessary requisite to achieve clinically reliable results. Evidence was provided that poor standardization in the extra-analytical phases of the testing process has the greatest influence on test results, though little information is available so far on prevalence and type of pre-analytical variability in coagulation testing.\n\n\nMETHODS\nThe present study was designed to describe all pre-analytical problems on inpatients routine and stat samples recorded in our coagulation laboratory over a 2-year period and clustered according to their source (hospital departments).\n\n\nRESULTS\nOverall, pre-analytic problems were identified in 5.5% of the specimens. 
Although the highest frequency was observed for paediatric departments, in no case was the comparison of the prevalence among the different hospital departments statistically significant. The more frequent problems could be referred to samples not received in the laboratory following a doctor's order (49.3%), haemolysis (19.5%), clotting (14.2%) and inappropriate volume (13.7%). Specimens not received prevailed in the intensive care unit, surgical and clinical departments, whereas clotted and haemolysed specimens were those most frequently recorded from paediatric and emergency departments, respectively. The present investigation demonstrates a high prevalence of pre-analytical problems affecting samples for coagulation testing.\n\n\nCONCLUSIONS\nFull implementation of a total quality system, encompassing a systematic error tracking system, is a valuable tool to achieve meaningful information on the local pre-analytic processes most susceptible to errors, enabling considerations on specific responsibilities and providing the ideal basis for an efficient feedback within the hospital departments.", "title": "" }, { "docid": "neg:1840043_13", "text": "The use of visemes as atomic speech units in visual speech analysis and synthesis systems is well-established. Viseme labels are determined using a many-to-one phoneme-to-viseme mapping. However, due to the visual coarticulation effects, an accurate mapping from phonemes to visemes should define a many-to-many mapping scheme. In this research it was found that neither the use of standardized nor speaker-dependent many-to-one viseme labels could satisfy the quality requirements of concatenative visual speech synthesis. Therefore, a novel technique to define a many-to-many phoneme-to-viseme mapping scheme is introduced, which makes use of both treebased and k-means clustering approaches. We show that these many-to-many viseme labels more accurately describe the visual speech information as compared to both phoneme-based and many-toone viseme-based speech labels. In addition, we found that the use of these many-to-many visemes improves the precision of the segment selection phase in concatenative visual speech synthesis using limited speech databases. Furthermore, the resulting synthetic visual speech was both objectively and subjectively found to be of higher quality when the many-to-many visemes are used to describe the speech database as well as the synthesis targets.", "title": "" }, { "docid": "neg:1840043_14", "text": "Convolutional Neural Network (CNN) is a very powerful approach to extract discriminative local descriptors for effective image search. Recent work adopts fine-tuned strategies to further improve the discriminative power of the descriptors. Taking a different approach, in this paper, we propose a novel framework to achieve competitive retrieval performance. Firstly, we propose various masking schemes, namely SIFT-mask, SUM-mask, and MAX-mask, to select a representative subset of local convolutional features and remove a large number of redundant features. We demonstrate that this can effectively address the burstiness issue and improve retrieval accuracy. Secondly, we propose to employ recent embedding and aggregating methods to further enhance feature discriminability. 
Extensive experiments demonstrate that our proposed framework achieves state-of-the-art retrieval accuracy.", "title": "" }, { "docid": "neg:1840043_15", "text": "Ceramics are widely used biomaterials in prosthetic dentistry due to their attractive clinical properties. They are aesthetically pleasing with their color, shade and luster, and they are chemically stable. The main constituents of dental ceramic are Si-based inorganic materials, such as feldspar, quartz, and silica. Traditional feldspar-based ceramics are also referred to as “Porcelain”. The crucial difference between a regular ceramic and a dental ceramic is the proportion of feldspar, quartz, and silica contained in the ceramic. A dental ceramic is a multiphase system, i.e. it contains a dispersed crystalline phase surrounded by a continuous amorphous phase (a glassy phase). Modern dental ceramics contain a higher proportion of the crystalline phase that significantly improves the biomechanical properties of ceramics. Examples of these highly crystalline ceramics include lithium disilicate and zirconia.", "title": "" }, { "docid": "neg:1840043_16", "text": "MicroRNAs (miRNAs) have within the past decade emerged as key regulators of metabolic homoeostasis. Major tissues in intermediary metabolism important during development of the metabolic syndrome, such as β-cells, liver, skeletal and heart muscle as well as adipose tissue, have all been shown to be affected by miRNAs. In the pancreatic β-cell, a number of miRNAs are important in maintaining the balance between differentiation and proliferation (miR-200 and miR-29 families) and insulin exocytosis in the differentiated state is controlled by miR-7, miR-375 and miR-335. MiR-33a and MiR-33b play crucial roles in cholesterol and lipid metabolism, whereas miR-103 and miR-107 regulate hepatic insulin sensitivity. In muscle tissue, a defined number of miRNAs (miR-1, miR-133, miR-206) control myofibre type switch and induce myogenic differentiation programmes. Similarly, in adipose tissue, a defined number of miRNAs control white to brown adipocyte conversion or differentiation (miR-365, miR-133, miR-455). The discovery of circulating miRNAs in exosomes emphasizes their importance as both endocrine signalling molecules and potentially disease markers. Their dysregulation in metabolic diseases, such as obesity, type 2 diabetes and atherosclerosis stresses their potential as therapeutic targets. This review emphasizes current ideas and controversies within miRNA research in metabolism.", "title": "" }, { "docid": "neg:1840043_17", "text": "This paper reports a continuously tunable lumped bandpass filter implemented in a third-order coupled resonator configuration. The filter is fabricated on a borosilicate glass substrate using a surface micromachining technology that offers highly tunable passive components. Continuous electrostatic tuning is achieved using three tunable capacitor banks, each consisting of one continuously tunable capacitor and three switched capacitors with a pull-in voltage of less than 40 V. The center frequency of the filter is tuned from 1 GHz down to 600 MHz while maintaining a 3-dB bandwidth of 13%-14% and insertion loss of less than 4 dB. The maximum group delay is less than 10 ns across the entire tuning range. The temperature stability of the center frequency from -50°C to 50°C is better than 2%. The measured tuning speed of the filter is better than 80 s, and the is better than 20 dBm, which are in good agreement with simulations.
The filter occupies a small area of less than 1.5 cm × 1.1 cm. The implemented filter shows the highest performance amongst the fully integrated microelectromechanical systems filters operating in the sub-gigahertz range.", "title": "" }, { "docid": "neg:1840043_18", "text": "In modern days, a large number of automobile accidents are caused by driver fatigue. To address the problem, we propose a vision-based real-time driver fatigue detection system based on eye-tracking, which is an active safety system. Eye tracking is one of the key technologies for future driver assistance systems since human eyes contain much information about the driver's condition such as gaze, attention level, and fatigue level. The face and eyes of the driver are first localized and then marked in every frame obtained from the video source. The eyes are tracked in real time using a correlation function with an automatically generated online template. Additionally, the driver’s distraction and conversations with passengers during driving can lead to serious consequences. A real-time vision-based model for monitoring the driver’s unsafe states, including the fatigue state, is proposed. A time-based eye glance to mitigate driver distraction is proposed.", "title": "" } ]
1840044
Parallel Concatenated Trellis Coded Modulation
[ { "docid": "pos:1840044_0", "text": "A parallel concatenated coding scheme consists of two simple constituent systematic encoders linked by an interleaver. The input bits to the first encoder are scrambled by the interleaver before entering the second encoder. The codeword of the parallel concatenated code consists of the input bits to the first encoder followed by the parity check bits of both encoders. This construction can be generalized to any number of constituent codes. Parallel concatenated schemes employing two convolutional codes as constituent codes, in connection with an iterative decoding algorithm of complexity comparable to that of the constituent codes, have been recently shown to yield remarkable coding gains close to theoretical limits. They have been named, and are known as, “turbo codes.” We propose a method to evaluate an upper bound to the bit error probability of a parallel concatenated coding scheme averaged over all interleavers of a given length. The analytical bounding technique is then used to shed some light on some crucial questions which have been floating around in the communications community since the proposal of turbo codes.", "title": "" } ]
[ { "docid": "neg:1840044_0", "text": "Herbal toothpaste Salvadora with comprehensive effective materials for dental health ranging from antibacterial, detergent and whitening properties including benzyl isothiocyanate, alkaloids, and anions such as thiocyanate, sulfate, and nitrate with potential antibacterial feature against oral microbial flora, silica and chloride for oral disinfection and bleaching the tooth, fluoride to strengthen tooth enamel, and saponin with appropriate detergent, and resin which protects tooth enamel by placing on it and is aggregated in Salvadora has been formulated. The paste is also from other herbs extract including valerian and chamomile. Current toothpaste has antibacterial, anti-plaque, anti-tartar and whitening, and wood extract of the toothbrush strengthens the tooth and enamel, and prevents the cancellation of enamel.From the other side, resin present in toothbrush wood creates a proper covering on tooth enamel and protects it against decay and benzyl isothiocyanate and also alkaloids present in miswak wood gives Salvadora toothpaste considerable antibacterial and bactericidal effects. Anti-inflammatory effects of the toothpaste are for apigenin and alpha bisabolol available in chamomile extract and seskuiterpen components including valeric acid with sedating features give the paste sedating and calming effect to oral tissues.", "title": "" }, { "docid": "neg:1840044_1", "text": "Hydrokinetic turbines can provide a source of electricity for remote areas located near a river or stream. The objective of this paper is to describe the design, simulation, build, and testing of a novel hydrokinetic turbine. The main components of the system are a permanent magnet synchronous generator (PMSG), a machined H-Darrieus rotor, an embedded controls system, and a cataraft. The design and construction of this device was conducted at the Oregon Institute of Technology in Wilsonville, Oregon.", "title": "" }, { "docid": "neg:1840044_2", "text": "SOAR is a cognitive architecture named from state, operator and result, which is adopted to portray the drivers’ guidance compliance behavior on variable message sign VMS in this paper. VMS represents traffic conditions to drivers by three colors: red, yellow, and green. Based on the multiagent platform, SOAR is introduced to design the agent with the detailed description of the working memory, long-term memory, decision cycle, and learning mechanism. With the fixed decision cycle, agent transforms state through four kinds of operators, including choosing route directly, changing the driving goal, changing the temper of driver, and changing the road condition of prediction. The agent learns from the process of state transformation by chunking and reinforcement learning. Finally, computerized simulation program is used to study the guidance compliance behavior. Experiments are simulated many times under given simulation network and conditions. The result, including the comparison between guidance and no guidance, the state transition times, and average chunking times are analyzed to further study the laws of guidance compliance and learning mechanism.", "title": "" }, { "docid": "neg:1840044_3", "text": "Saliva in the mouth is a biofluid produced mainly by three pairs of major salivary glands--the submandibular, parotid and sublingual glands--along with secretions from many minor submucosal salivary glands. 
Salivary gland secretion is a nerve-mediated reflex and the volume of saliva secreted is dependent on the intensity and type of taste and on chemosensory, masticatory or tactile stimulation. Long periods of low (resting or unstimulated) flow are broken by short periods of high flow, which is stimulated by taste and mastication. The nerve-mediated salivary reflex is modulated by nerve signals from other centers in the central nervous system, which is most obvious as hyposalivation at times of anxiety. An example of other neurohormonal influences on the salivary reflex is the circadian rhythm, which affects salivary flow and ionic composition. Cholinergic parasympathetic and adrenergic sympathetic autonomic nerves evoke salivary secretion, signaling through muscarinic M3 and adrenoceptors on salivary acinar cells and leading to secretion of fluid and salivary proteins. Salivary gland acinar cells are chloride and sodium secreting, and the isotonic fluid produced is rendered hypotonic by salivary gland duct cells as it flows to the mouth. The major proteins present in saliva are secreted by salivary glands, creating viscoelasticity and enabling the coating of oral surfaces with saliva. Salivary films are essential for maintaining oral health and regulating the oral microbiome. Saliva in the mouth contains a range of validated and potential disease biomarkers derived from epithelial cells, neutrophils, the microbiome, gingival crevicular fluid and serum. For example, cortisol levels are used in the assessment of stress, matrix metalloproteinases-8 and -9 appear to be promising markers of caries and periodontal disease, and a panel of mRNA and proteins has been proposed as a marker of oral squamous cell carcinoma. Understanding the mechanisms by which components enter saliva is an important aspect of validating their use as biomarkers of health and disease.", "title": "" }, { "docid": "neg:1840044_4", "text": "In this work, we study the problem of scheduling parallelizable jobs online with an objective of minimizing average flow time. Each parallel job is modeled as a DAG where each node is a sequential task and each edge represents dependence between tasks. Previous work has focused on a model of parallelizability known as the arbitrary speed-up curves setting where a scalable algorithm is known. However, the DAG model is more widely used by practitioners, since many jobs generated from parallel programming languages and libraries can be represented in this model. However, little is known for this model in the online setting with multiple jobs. The DAG model and the speed-up curve models are incomparable and algorithmic results from one do not immediately imply results for the other. Previous work has left open the question of whether an online algorithm can be O(1)-competitive with O(1)-speed for average flow time in the DAG setting. In this work, we answer this question positively by giving a scalable algorithm which is (1 + ε)-speed O(1/ε)-competitive for any ε > 0. We further introduce the first greedy algorithm for scheduling parallelizable jobs — our algorithm is a generalization of the shortest jobs first algorithm. Greedy algorithms are among the most useful in practice due to their simplicity. We show that this algorithm is (2 + ε)-speed O(1/ε)-competitive for any ε > 0.
", "title": "" }, { "docid": "neg:1840044_5", "text": "In this paper, we discuss the development of a cost-effective, wireless, and wearable vibrotactile haptic device for stiffness perception during an interaction with virtual objects. Our experimental setup consists of a haptic device with five vibrotactile actuators and a virtual reality environment tailored in Unity 3D integrating the Oculus Rift Head Mounted Display (HMD) and the Leap Motion controller. The virtual environment is able to capture touch inputs from users. Interaction forces are then rendered at 500 Hz and fed back to the wearable setup stimulating fingertips with ERM vibrotactile actuators. Amplitude and frequency of vibrations are modulated proportionally to the interaction force to simulate the stiffness of a virtual object. A quantitative and qualitative study is done to compare the discrimination of stiffness on a virtual linear spring in three sensory modalities: visual-only feedback, tactile-only feedback, and their combination. A common psychophysics method called the Two Alternative Forced Choice (2AFC) approach is used for quantitative analysis using Just Noticeable Difference (JND) and Weber Fractions (WF). According to the psychometric experiment result, the average Weber fraction value of 0.39 for visual-only feedback was improved to 0.25 by adding the tactile feedback.", "title": "" }, { "docid": "neg:1840044_6", "text": "AUTOSAR supports the re-use of software and hardware components of automotive electronic systems. Therefore, amongst other things, AUTOSAR defines a software architecture that is used to decouple software components from hardware devices. This paper gives an overview of the different layers of that architecture. In addition, the uppermost layer that concerns the application-specific part of automotive electronic systems is presented.", "title": "" }, { "docid": "neg:1840044_7", "text": "Following the trend towards high-resolution CMOS image sensors, pixel sizes are continuously shrinking, towards and below 1.0 μm, and sizes are now reaching a technological limit to meet required SNR performance [1-2]. SNR at low-light conditions, which is a key performance metric, is determined by the sensitivity and crosstalk in pixels. To improve sensitivity, pixel technology has migrated from frontside illumination (FSI) to backside illumination (BSI) as pixel size shrinks down. In BSI technology, it is very difficult to further increase the sensitivity in a pixel of near-1.0 μm size because there are no structural obstacles for incident light from micro-lens to photodiode. Therefore, the only way to improve low-light SNR is to reduce crosstalk, which makes the non-diagonal elements of the color-correction matrix (CCM) close to zero and thus reduces color noise [3]. The best way to improve crosstalk is to introduce complete physical isolation between neighboring pixels, e.g., using deep-trench isolation (DTI). So far, a few attempts using DTI have been made to suppress silicon crosstalk. A backside DTI in a pixel as small as 1.12 μm, which is formed in the BSI process, is reported in [4], but it is just an intermediate step in the DTI-related technology because it cannot completely prevent silicon crosstalk, especially for long wavelengths of light. On the other hand, front-side DTIs for FSI pixels [5] and BSI pixels [6] are reported.
In [5], however, DTI is present not only along the periphery of each pixel, but also invades into the pixel so that it is inefficient in terms of gathering incident light and providing sufficient amount of photodiode area. In [6], the pixel size is as large as 2.0μm and it is hard to scale down with this technology for near 1.0μm pitch because DTI width imposes a critical limit on the sufficient amount of photodiode area for full-well capacity. Thus, a new technological advance is necessary to realize the ideal front DTI in a small size pixel near 1.0μm.", "title": "" }, { "docid": "neg:1840044_8", "text": "The market of converters connected to transmission lines continues to require insulated gate bipolar transistors (IGBTs) with higher blocking voltages to reduce the number of IGBTs connected in series in high-voltage converters. To cope with these demands, semiconductor manufactures have developed several technologies. Nowadays, IGBTs up to 6.5-kV blocking voltage and IEGTs up to 4.5-kV blocking voltage are on the market. However, these IGBTs and injection-enhanced gate transistors (IEGTs) still have very high switching losses compared to low-voltage devices, leading to a realistic switching frequency of up to 1 kHz. To reduce switching losses in high-power applications, the auxiliary resonant commutated pole inverter (ARCPI) is a possible alternative. In this paper, switching losses and on-state voltages of NPT-IGBT (3.3 kV-1200 A), FS-IGBT (6.5 kV-600 A), SPT-IGBT (2.5 kV-1200 A, 3.3 kV-1200 A and 6.5 kV-600 A) and IEGT (3.3 kV-1200 A) are measured under hard-switching and zero-voltage switching (ZVS) conditions. The aim of this selection is to evaluate the impact of ZVS on various devices of the same voltage ranges. In addition, the difference in ZVS effects among the devices with various blocking voltage levels is evaluated.", "title": "" }, { "docid": "neg:1840044_9", "text": "The large availability of biomedical data brings opportunities and challenges to health care. Representation of medical concepts has been well studied in many applications, such as medical informatics, cohort selection, risk prediction, and health care quality measurement. In this paper, we propose an efficient multichannel convolutional neural network (CNN) model based on multi-granularity embeddings of medical concepts named MG-CNN, to examine the effect of individual patient characteristics including demographic factors and medical comorbidities on total hospital costs and length of stay (LOS) by using the Hospital Quality Monitoring System (HQMS) data. The proposed embedding method leverages prior medical hierarchical ontology and improves the quality of embedding for rare medical concepts. The embedded vectors are further visualized by the t-Distributed Stochastic Neighbor Embedding (t-SNE) technique to demonstrate the effectiveness of grouping related medical concepts. Experimental results demonstrate that our MG-CNN model outperforms traditional regression methods based on the one-hot representation of medical concepts, especially in the outcome prediction tasks for patients with low-frequency medical events. In summary, MG-CNN model is capable of mining potential knowledge from the clinical data and will be broadly applicable in medical research and inform clinical decisions.", "title": "" }, { "docid": "neg:1840044_10", "text": "Muscles are required to perform or absorb mechanical work under different conditions. 
However the ability of a muscle to do this depends on the interaction between its contractile components and its elastic components. In the present study we have used ultrasound to examine the length changes of the gastrocnemius medialis muscle fascicle along with those of the elastic Achilles tendon during locomotion under different incline conditions. Six male participants walked (at 5 km h(-1)) on a treadmill at grades of -10%, 0% and 10% and ran (at 10 km h(-1)) at grades of 0% and 10%, whilst simultaneous ultrasound, electromyography and kinematics were recorded. In both walking and running, force was developed isometrically; however, increases in incline increased the muscle fascicle length at which force was developed. Force was developed at shorter muscle lengths for running when compared to walking. Substantial levels of Achilles tendon strain were recorded in both walking and running conditions, which allowed the muscle fascicles to act at speeds more favourable for power production. In all conditions, positive work was performed by the muscle. The measurements suggest that there is very little change in the function of the muscle fascicles at different slopes or speeds, despite changes in the required external work. This may be a consequence of the role of this biarticular muscle or of the load sharing between the other muscles of the triceps surae.", "title": "" }, { "docid": "neg:1840044_11", "text": "From fishtail to princess braids, these intricately woven structures define an important and popular class of hairstyle, frequently used for digital characters in computer graphics. In addition to the challenges created by the infinite range of styles, existing modeling and capture techniques are particularly constrained by the geometric and topological complexities. We propose a data-driven method to automatically reconstruct braided hairstyles from input data obtained from a single consumer RGB-D camera. Our approach covers the large variation of repetitive braid structures using a family of compact procedural braid models. From these models, we produce a database of braid patches and use a robust random sampling approach for data fitting. We then recover the input braid structures using a multi-label optimization algorithm and synthesize the intertwining hair strands of the braids. We demonstrate that a minimal capture equipment is sufficient to effectively capture a wide range of complex braids with distinct shapes and structures.", "title": "" }, { "docid": "neg:1840044_12", "text": "Lane keeping is an important feature for self-driving cars. This paper presents an end-to-end learning approach to obtain the proper steering angle to maintain the car in the lane. The convolutional neural network (CNN) model takes raw image frames as input and outputs the steering angles accordingly. The model is trained and evaluated using the comma.ai dataset, which contains the front view image frames and the steering angle data captured when driving on the road. Unlike the traditional approach that manually decomposes the autonomous driving problem into technical components such as lane detection, path planning and steering control, the end-to-end model can directly steer the vehicle from the front view camera data after training. It learns how to keep in lane from human driving data. Further discussion of this end-to-end approach and its limitation are also provided.", "title": "" }, { "docid": "neg:1840044_13", "text": "Proposes a new method of personal recognition based on footprints. 
In this method, an input pair of raw footprints is normalized, both in direction and in position, for robust image-matching between the input pair of footprints and the pair of registered footprints. In addition to the Euclidean distance between them, the geometric information of the input footprint is used prior to the normalization, i.e., directional and positional information. In the experiment, the pressure distribution of the footprint was measured with a pressure-sensing mat. Ten volunteers contributed footprints for testing the proposed method. The recognition rate was 30.45% without any normalization (i.e., raw image), and 85.00% with the authors' method.", "title": "" }, { "docid": "neg:1840044_14", "text": "Data noise is present in many machine learning problem domains; some of these are well studied but others have received less attention. In this paper we propose an algorithm for constructing a kernel Fisher discriminant (KFD) from training examples with noisy labels. The approach allows us to associate with each example a probability of the label being flipped. We utilise an expectation maximization (EM) algorithm for updating the probabilities. The E-step uses class conditional probabilities estimated as a by-product of the KFD algorithm. The M-step updates the flip probabilities and determines the parameters of the discriminant. We demonstrate the feasibility of the approach on two real-world data-sets.", "title": "" }, { "docid": "neg:1840044_15", "text": "In this paper, an ac-linked hybrid electrical energy system comprising photovoltaic (PV) and fuel cell (FC) sources with an electrolyzer for standalone applications is proposed. PV is the primary power source of the system, and an FC-electrolyzer combination is used as a backup and as a long-term storage system. A fuzzy logic controller is developed for maximum power point tracking for the PV system. A simple power management strategy is designed for the proposed system to manage power flows among the different energy sources. A simulation model for the hybrid energy system has been developed using MATLAB/Simulink.", "title": "" }, { "docid": "neg:1840044_16", "text": "Submission instructions: You should submit your answers via GradeScope and your code via the Snap submission site. Submitting answers: Prepare answers to your homework in a single PDF file and submit it via http://gradescope.com. Make sure that the answer to each question is on a separate page. This means you should submit a 14-page PDF (1 page for the cover sheet, 4 pages for the answers to question 1, 3 pages for answers to question 2, and 6 pages for question 3). On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. Put all the code for a single question into a single file and upload it. Questions We strongly encourage you to use Snap.py for Python. However, you can use any other graph analysis tool or package you want (SNAP for C++, NetworkX for Python, JUNG for Java, etc.). A question that occupied sociologists and economists as early as the 1900's is how innovations (e.g. ideas, products, technologies, behaviors) diffuse (spread) within a society. One of the prominent researchers in the field is Professor Mark Granovetter, who, among other contributions, introduced threshold models in sociology along with Thomas Schelling.
In Granovetter's model, there is a population of individuals (mob) and for simplicity two behaviours (riot or not riot). • Threshold model: each individual i has a threshold t_i that determines her behavior in the following way. If there are at least t_i individuals that are rioting, then she will join the riot; otherwise she stays inactive. Here, it is implicitly assumed that each individual has full knowledge of the behavior of all other individuals in the group. Nodes with a small threshold are called innovators (early adopters) and nodes with a large threshold are called laggards (late adopters). Granovetter's threshold model has been successful in explaining classical empirical adoption curves by relating them to thresholds in", "title": "" }, { "docid": "neg:1840044_17", "text": "Multiple studies have illustrated the potential for dramatic societal, environmental and economic benefits from significant penetration of autonomous driving. However, all the current approaches to autonomous driving require the automotive manufacturers to shoulder the primary responsibility and liability associated with replacing human perception and decision making with automation, potentially slowing the penetration of autonomous vehicles, and consequently slowing the realization of the societal benefits of autonomous vehicles. We propose here a new approach to autonomous driving that will re-balance the responsibility and liabilities associated with autonomous driving between traditional automotive manufacturers, private infrastructure players, and third-party players. Our proposed distributed intelligence architecture leverages the significant advancements in connectivity and edge computing in the recent decades to partition the driving functions between the vehicle, edge computers on the road side, and specialized third-party computers that reside in the vehicle. Infrastructure becomes a critical enabler for autonomy. With this Infrastructure Enabled Autonomy (IEA) concept, the traditional automotive manufacturers will only need to shoulder responsibility and liability comparable to what they already do today, and the infrastructure and third-party players will share the added responsibility and liabilities associated with autonomous functionalities. We propose a Bayesian Network Model based framework for assessing the risk benefits of such a distributed intelligence architecture. An additional benefit of the proposed architecture is that it enables “autonomy as a service” while still allowing for private ownership of automobiles.", "title": "" }, { "docid": "neg:1840044_18", "text": "With the advent of battery-powered portable devices and the mandatory adoption of power factor correction (PFC), the non-inverting buck-boost converter is attracting considerable attention. Conventional two-switch or four-switch non-inverting buck-boost converters choose their operation modes by measuring input and output voltage magnitudes. This can cause higher output voltage transients when input and output are close to each other. For the mode selection, the comparison of input and output voltage magnitudes is not enough due to the voltage drops caused by the parasitic components. In addition, the difference in the minimum and maximum effective duty cycle between controller output and switching device yields a discontinuity at the instant of mode change. Moreover, the different properties of output voltage versus a given duty cycle of buck and boost operating modes contribute to the output voltage transients.
In this paper, the effect of the discontinuity due to the effective duty cycle derived from device switching time at the mode change is analyzed. A technique to compensate the output voltage transient due to this discontinuity is proposed. In order to attain additional mitigation of output transients and linear input/output voltage characteristic in buck and boost modes, the linearization of DC-gain of large signal model in boost operation is analyzed as well. Analytical, simulation, and experimental results are presented to validate the proposed theory.", "title": "" }, { "docid": "neg:1840044_19", "text": "Context-aware intelligent systems employ implicit inputs, and make decisions based on complex rules and machine learning models that are rarely clear to users. Such lack of system intelligibility can lead to loss of user trust, satisfaction and acceptance of these systems. However, automatically providing explanations about a system's decision process can help mitigate this problem. In this paper we present results from a controlled study with over 200 participants in which the effectiveness of different types of explanations was examined. Participants were shown examples of a system's operation along with various automatically generated explanations, and then tested on their understanding of the system. We show, for example, that explanations describing why the system behaved a certain way resulted in better understanding and stronger feelings of trust. Explanations describing why the system did not behave a certain way, resulted in lower understanding yet adequate performance. We discuss implications for the use of our findings in real-world context-aware applications.", "title": "" } ]
1840045
A literature survey on Facial Expression Recognition using Global Features
[ { "docid": "pos:1840045_0", "text": "It is well known that how to extract dynamical features is a key issue for video based face analysis. In this paper, we present a novel approach of facial action units (AU) and expression recognition based on coded dynamical features. In order to capture the dynamical characteristics of facial events, we design the dynamical haar-like features to represent the temporal variations of facial events. Inspired by the binary pattern coding, we further encode the dynamic haar-like features into binary pattern features, which are useful to construct weak classifiers for boosting learning. Finally the Adaboost is performed to learn a set of discriminating coded dynamic features for facial active units and expression recognition. Experiments on the CMU expression database and our own facial AU database show its encouraging performance.", "title": "" } ]
[ { "docid": "neg:1840045_0", "text": "The hybrid runtime (HRT) model offers a plausible path towards high performance and efficiency. By integrating the OS kernel, parallel runtime, and application, an HRT allows the runtime developer to leverage the full privileged feature set of the hardware and specialize OS services to the runtime's needs. However, conforming to the HRT model currently requires a complete port of the runtime and application to the kernel level, for example to our Nautilus kernel framework, and this requires knowledge of kernel internals. In response, we developed Multiverse, a system that bridges the gap between a built-from-scratch HRT and a legacy runtime system. Multiverse allows existing, unmodified applications and runtimes to be brought into the HRT model without any porting effort whatsoever. Developers simply recompile their package with our compiler toolchain, and Multiverse automatically splits the execution of the application between the domains of a legacy OS and an HRT environment. To the user, the package appears to run as usual on Linux, but the bulk of it now runs as a kernel. The developer can then incrementally extend the runtime and application to take advantage of the HRT model. We describe the design and implementation of Multiverse, and illustrate its capabilities using the Racket runtime system.", "title": "" }, { "docid": "neg:1840045_1", "text": "While a large number of consumers in the US and Europe frequently shop on the Internet, research on what drives consumers to shop online has typically been fragmented. This paper therefore proposes a framework to increase researchers’ understanding of consumers’ attitudes toward online shopping and their intention to shop on the Internet. The framework uses the constructs of the Technology Acceptance Model (TAM) as a basis, extended by exogenous factors and applies it to the online shopping context. The review shows that attitudes toward online shopping and intention to shop online are not only affected by ease of use, usefulness, and enjoyment, but also by exogenous factors like consumer traits, situational factors, product characteristics, previous online shopping experiences, and trust in online shopping.", "title": "" }, { "docid": "neg:1840045_2", "text": "BACKGROUND\nGenital warts may mimic a variety of conditions, thus complicating their diagnosis and treatment. The recognition of early flat lesions presents a diagnostic challenge.\n\n\nOBJECTIVE\nWe sought to describe the dermatoscopic features of genital warts, unveiling the possibility of their diagnosis by dermatoscopy.\n\n\nMETHODS\nDermatoscopic patterns of 61 genital warts from 48 consecutively enrolled male patients were identified with their frequencies being used as main outcome measures.\n\n\nRESULTS\nThe lesions were examined dermatoscopically and further classified according to their dermatoscopic pattern. The most frequent finding was an unspecific pattern, which was found in 15/61 (24.6%) lesions; a fingerlike pattern was observed in 7 (11.5%), a mosaic pattern in 6 (9.8%), and a knoblike pattern in 3 (4.9%) cases. In almost half of the lesions, pattern combinations were seen, of which a fingerlike/knoblike pattern was the most common, observed in 11/61 (18.0%) cases. Among the vascular features, glomerular, hairpin/dotted, and glomerular/dotted vessels were the most frequent finding seen in 22 (36.0%), 15 (24.6%), and 10 (16.4%) of the 61 cases, respectively. In 10 (16.4%) lesions no vessels were detected. 
Hairpin vessels were more often seen in fingerlike (χ² = 39.31, P = .000) and glomerular/dotted vessels in knoblike/mosaic (χ² = 9.97, P = .008) pattern zones; vessels were frequently missing in unspecified (χ² = 8.54, P = .014) areas.\n\n\nLIMITATIONS\nOnly male patients were examined.\n\n\nCONCLUSIONS\nThere is a correlation between dermatoscopic patterns and vascular features reflecting the life stages of genital warts; dermatoscopy may be useful in the diagnosis of early-stage lesions.", "title": "" }, { "docid": "neg:1840045_3", "text": "The European FF POIROT project (IST-2001-38248) aims at developing applications for tackling financial fraud, using formal ontological repositories as well as multilingual terminological resources. In this article, we want to focus on the development cycle towards an application recognizing several types of e-mail fraud, such as phishing, Nigerian advance fee fraud and lottery scam. The development cycle covers four tracks of development - language engineering, terminology engineering, knowledge engineering and system engineering. These development tracks are preceded by a problem determination phase and followed by a deployment phase. Each development track is supported by a methodology. All methodologies and phases in the development cycle will be discussed in detail.", "title": "" }, { "docid": "neg:1840045_4", "text": "To examine how inclusive our schools are after 25 years of educational reform, students with disabilities and their parents were asked to identify current barriers and provide suggestions for removing those barriers. Based on a series of focus group meetings, 15 students with mobility limitations (9-15 years) and 12 parents identified four categories of barriers at their schools: (a) the physical environment (e.g., narrow doorways, ramps); (b) intentional attitudinal barriers (e.g., isolation, bullying); (c) unintentional attitudinal barriers (e.g., lack of knowledge, understanding, or awareness); and (d) physical limitations (e.g., difficulty with manual dexterity). Recommendations for promoting accessibility and full participation are provided and discussed in relation to inclusive education efforts.", "title": "" }, { "docid": "neg:1840045_5", "text": "Deep networks have been successfully applied to visual tracking by learning a generic representation offline from numerous training images. However, the offline training is time-consuming and the learned generic representation may be less discriminative for tracking specific objects. In this paper we show that, even without learning, simple convolutional networks can be powerful enough to develop a robust representation for visual tracking. In the first frame, we randomly extract a set of normalized patches from the target region as filters, which define a set of feature maps in the subsequent frames. These maps measure similarities between each filter and the useful local intensity patterns across the target, thereby encoding its local structural information. Furthermore, all the maps form together a global representation, which maintains the relative geometric positions of the local intensity patterns, and hence the inner geometric layout of the target is also well preserved. A simple and effective online strategy is adopted to update the representation, allowing it to robustly adapt to target appearance variations.
Our convolution networks have surprisingly lightweight structure, yet perform favorably against several state-of-the-art methods on a large benchmark dataset with 50 challenging videos.", "title": "" }, { "docid": "neg:1840045_6", "text": "BACKGROUND\nCore stability training has grown in popularity over 25 years, initially for back pain prevention or therapy. Subsequently, it developed as a mode of exercise training for health, fitness and sport. The scientific basis for traditional core stability exercise has recently been questioned and challenged, especially in relation to dynamic athletic performance. Reviews have called for clarity on what constitutes anatomy and function of the core, especially in healthy and uninjured people. Clinical research suggests that traditional core stability training is inappropriate for development of fitness for heath and sports performance. However, commonly used methods of measuring core stability in research do not reflect functional nature of core stability in uninjured, healthy and athletic populations. Recent reviews have proposed a more dynamic, whole body approach to training core stabilization, and research has begun to measure and report efficacy of these modes training. The purpose of this study was to assess extent to which these developments have informed people currently working and participating in sport.\n\n\nMETHODS\nAn online survey questionnaire was developed around common themes on core stability training as defined in the current scientific literature and circulated to a sample population of people working and participating in sport. Survey results were assessed against key elements of the current scientific debate.\n\n\nRESULTS\nPerceptions on anatomy and function of the core were gathered from a representative cohort of athletes, coaches, sports science and sports medicine practitioners (n = 241), along with their views on effectiveness of various current and traditional exercise training modes. Most popular method of testing and measuring core function was subjective assessment through observation (43%), while a quarter (22%) believed there was no effective method of measurement. Perceptions of people in sport reflect the scientific debate, and practitioners have adopted a more functional approach to core stability training. There was strong support for loaded, compound exercises performed upright, compared to moderate support for traditional core stability exercises. Half of the participants (50%) in the survey, however, still support a traditional isolation core stability training.\n\n\nCONCLUSION\nPerceptions in applied practice on core stability training for dynamic athletic performance are aligned to a large extent to the scientific literature.", "title": "" }, { "docid": "neg:1840045_7", "text": "I focus on the role of case studies in developing causal explanations. I distinguish between the theoretical purposes of case studies and the case selection strategies or research designs used to advance those objectives. I construct a typology of case studies based on their purposes: idiographic (inductive and theory-guided), hypothesis-generating, hypothesis-testing, and plausibility probe case studies. I then examine different case study research designs, including comparable cases, most and least likely cases, deviant cases, and process tracing, with attention to their different purposes and logics of inference. 
I address the issue of selection bias and the “single logic” debate, and I emphasize the utility of multi-method research.", "title": "" }, { "docid": "neg:1840045_8", "text": "Derived from the field of art curation, digital provenance is an unforgeable record of a digital object’s chain of successive custody and sequence of operations performed on it. Digital provenance forms an immutable directed acyclic graph (DAG) structure. Recent works in digital provenance have focused on provenance generation, storage and management frameworks in different fields. In this paper, we address two important aspects of digital provenance that have not been investigated thoroughly in existing works: 1) capturing the DAG structure of provenance and 2) supporting dynamic information sharing. We propose a scheme that uses signature-based mutual agreements between successive users to clearly delineate the transition of responsibility of the document as it is passed along the chain of users. In addition to preserving the properties of confidentiality, immutability and availability for a digital provenance chain, it supports the representation of DAG structures of provenance. Our scheme supports dynamic information sharing scenarios where the sequence of users who have custody of the document is not predetermined. Security analysis and empirical results indicate that our scheme improves the security of the existing Onion and PKLC provenance schemes with comparable performance. Keywords—Provenance, cryptography, signatures, integrity, confidentiality, availability", "title": "" }, { "docid": "neg:1840045_9", "text": "Knowledge bases of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge bases are typically incomplete, it is useful to be able to perform link prediction, i.e., predict whether a relationship not in the knowledge base is likely to be true. This paper combines insights from several previous link prediction models into a new embedding model STransE that represents each entity as a lowdimensional vector, and each relation by two matrices and a translation vector. STransE is a simple combination of the SE and TransE models, but it obtains better link prediction performance on two benchmark datasets than previous embedding models. Thus, STransE can serve as a new baseline for the more complex models in the link prediction task.", "title": "" }, { "docid": "neg:1840045_10", "text": "We present Query-Regression Network (QRN), a variant of Recurrent Neural Network (RNN) that is suitable for end-to-end machine comprehension. While previous work [18, 22] largely relied on external memory and global softmax attention mechanism, QRN is a single recurrent unit with internal memory and local sigmoid attention. Unlike most RNN-based models, QRN is able to effectively handle long-term dependencies and is highly parallelizable. In our experiments we show that QRN obtains the state-of-the-art result in end-to-end bAbI QA tasks [21].", "title": "" }, { "docid": "neg:1840045_11", "text": "The Visagraph IITM Eye Movement Recording System is an instrument that assesses reading eye movement efficiency and related parameters objectively. It also incorporates automated data analysis. In the standard protocol, the patient reads selections only at the level of their current school grade, or at the level that has been determined by a standardized reading test. 
In either case, deficient reading eye movements may be the consequence of a language-based reading disability, an oculomotor-based reading inefficiency, or both. We propose an addition to the standard protocol: the patient’s eye movements are recorded a second time with text that is significantly below the grade level of the initial reading. The goal is to determine which factor is primarily contributing to the patient’s reading problem, oculomotor or language. This concept is discussed in the context of two representative cases.", "title": "" }, { "docid": "neg:1840045_12", "text": "The ability to measure the level of customer satisfaction with online shopping is essential in gauging the success and failure of e-commerce. To do so, Internet businesses must be able to determine and understand the values of their existing and potential customers. Hence, it is important for IS researchers to develop and validate a diverse array of metrics to comprehensively capture the attitudes and feelings of online customers. What factors make online shopping appealing to customers? What customer values take priority over others? This study’s purpose is to answer these questions, examining the role of several technology, shopping, and product factors on online customer satisfaction. This is done using a conjoint analysis of consumer preferences based on data collected from 188 young consumers. Results indicate that the three most important attributes to consumers for online satisfaction are privacy (technology factor), merchandising (product factor), and convenience (shopping factor). These are followed by trust, delivery, usability, product customization, product quality, and security. Implications of these findings are discussed and suggestions for future research are provided.", "title": "" }, { "docid": "neg:1840045_13", "text": "Extreme sensitivity of soil organic carbon (SOC) to climate and land use change warrants further research in different terrestrial ecosystems. The aim of this study was to investigate the link between aggregate and SOC dynamics in a chronosequence of three different land uses of a south Chilean Andisol: a second growth Nothofagus obliqua forest (SGFOR), a grassland (GRASS) and a Pinus radiataplantation (PINUS). Total carbon content of the 0–10 cm soil layer was higher for GRASS (6.7 kg C m −2) than for PINUS (4.3 kg C m−2), while TC content of SGFOR (5.8 kg C m−2) was not significantly different from either one. High extractable oxalate and pyrophosphate Al concentrations (varying from 20.3–24.4 g kg −1, and 3.9– 11.1 g kg−1, respectively) were found in all sites. In this study, SOC and aggregate dynamics were studied using size and density fractionation experiments of the SOC, δ13C and total carbon analysis of the different SOC fractions, and C mineralization experiments. The results showed that electrostatic sorption between and among amorphous Al components and clay minerals is mainly responsible for the formation of metal-humus-clay complexes and the stabilization of soil aggregates. The process of ligand exchange between SOC and Al would be of minor importance resulting in the absence of aggregate hierarchy in this soil type. Whole soil C mineralization rate constants were highest for SGFOR and PINUS, followed by GRASS (respectively 0.495, 0.266 and 0.196 g CO 2-C m−2 d−1 for the top soil layer). 
In contrast, incubation experiments of isolated macro organic matter fractions gave opposite results, showing that the recalcitrance of the SOC decreased in another order: PINUS>SGFOR>GRASS. We deduced that electrostatic sorption processes and physical protection of SOC in soil aggregates were the main processes determining SOC stabilization. As a result, high aggregate carbon concentraCorrespondence to: D. Huygens (dries.huygens@ugent.be) tions, varying from 148 till 48 g kg −1, were encountered for all land use sites. Al availability and electrostatic charges are dependent on pH, resulting in an important influence of soil pH on aggregate stability. Recalcitrance of the SOC did not appear to largely affect SOC stabilization. Statistical correlations between extractable amorphous Al contents, aggregate stability and C mineralization rate constants were encountered, supporting this hypothesis. Land use changes affected SOC dynamics and aggregate stability by modifying soil pH (and thus electrostatic charges and available Al content), root SOC input and management practices (such as ploughing and accompanying drying of the soil).", "title": "" }, { "docid": "neg:1840045_14", "text": "Browsing is part of the information seeking process, used when information needs are ill-defined or unspecific. Browsing and searching are often interleaved during information seeking to accommodate changing awareness of information needs. Digital Libraries often support full-text search, but are not so helpful in supporting browsing. Described here is a novel browsing system created for the Greenstone software used by the New Zealand Digital Library that supports users in a more natural approach to the information seeking process.", "title": "" }, { "docid": "neg:1840045_15", "text": "We introduce an interactive tool which enables a user to quickly assemble an architectural model directly over a 3D point cloud acquired from large-scale scanning of an urban scene. The user loosely defines and manipulates simple building blocks, which we call SmartBoxes, over the point samples. These boxes quickly snap to their proper locations to conform to common architectural structures. The key idea is that the building blocks are smart in the sense that their locations and sizes are automatically adjusted on-the-fly to fit well to the point data, while at the same time respecting contextual relations with nearby similar blocks. SmartBoxes are assembled through a discrete optimization to balance between two snapping forces defined respectively by a data-fitting term and a contextual term, which together assist the user in reconstructing the architectural model from a sparse and noisy point cloud. We show that a combination of the user's interactive guidance and high-level knowledge about the semantics of the underlying model, together with the snapping forces, allows the reconstruction of structures which are partially or even completely missing from the input.", "title": "" }, { "docid": "neg:1840045_16", "text": "Texture classification is one of the problems which has been paid much attention on by computer scientists since late 90s. If texture classification is done correctly and accurately, it can be used in many cases such as Pattern recognition, object tracking, and shape recognition. So far, there have been so many methods offered to solve this problem. Near all these methods have tried to extract and define features to separate different labels of textures really well. 
This article has offered an approach which has an overall process on the images of textures based on Local binary pattern and Gray Level Co-occurrence matrix and then by edge detection, and finally, extracting the statistical features from the images would classify them. Although, this approach is a general one and is could be used in different applications, the method has been tested on the stone texture and the results have been compared with some of the previous approaches to prove the quality of proposed approach. Keywords-Texture Classification, Gray level Co occurrence, Local Binary Pattern, Statistical Features", "title": "" }, { "docid": "neg:1840045_17", "text": "Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.", "title": "" }, { "docid": "neg:1840045_18", "text": "This article explores the roots of white support for capital punishment in the United States. Our analysis addresses individual-level and contextual factors, paying particular attention to how racial attitudes and racial composition influence white support for capital punishment. Our findings suggest that white support hinges on a range of attitudes wider than prior research has indicated, including social and governmental trust and individualist and authoritarian values. Extending individual-level analyses, we also find that white responses to capital punishment are sensitive to local context. Perhaps most important, our results clarify the impact of race in two ways. First, racial prejudice emerges here as a comparatively strong predictor of white support for the death penalty. Second, black residential proximity functions to polarize white opinion along lines of racial attitude. As the black percentage of county residents rises, so too does the impact of racial prejudice on white support for capital punishment.", "title": "" }, { "docid": "neg:1840045_19", "text": "Programming by Examples (PBE) involves synthesizing intended programs in an underlying domain-specific language from examplebased specifications. PBE systems are already revolutionizing the application domain of data wrangling and are set to significantly impact several other domains including code refactoring. There are three key components in a PBE system. (i) A search algorithm that can efficiently search for programs that are consistent with the examples provided by the user. We leverage a divide-and-conquerbased deductive search paradigm that inductively reduces the problem of synthesizing a program expression of a certain kind that satisfies a given specification into sub-problems that refer to sub-expressions or sub-specifications. 
(ii) Program ranking techniques to pick an intended program from among the many that satisfy the examples provided by the user. We leverage features of the program structure as well as of the outputs generated by the program on test inputs. (iii) User interaction models to facilitate usability and debuggability. We leverage active-learning techniques based on clustering inputs and synthesizing multiple programs. Each of these PBE components leverages both symbolic reasoning and heuristics. We make the case for synthesizing these heuristics from training data using appropriate machine learning methods. This can not only lead to better heuristics, but can also enable easier development, maintenance, and even personalization of a PBE system.", "title": "" } ]
1840046
Training with Exploration Improves a Greedy Stack LSTM Parser
[ { "docid": "pos:1840046_0", "text": "In the imitation learning paradigm algorithms learn from expert demonstrations in order to become able to accomplish a particular task. Daumé III et al. (2009) framed structured prediction in this paradigm and developed the search-based structured prediction algorithm (Searn) which has been applied successfully to various natural language processing tasks with state-of-the-art performance. Recently, Ross et al. (2011) proposed the dataset aggregation algorithm (DAgger) and compared it with Searn in sequential prediction tasks. In this paper, we compare these two algorithms in the context of a more complex structured prediction task, namely biomedical event extraction. We demonstrate that DAgger has more stable performance and faster learning than Searn, and that these advantages are more pronounced in the parameter-free versions of the algorithms.", "title": "" } ]
[ { "docid": "neg:1840046_0", "text": "Inspired by the progress of deep neural network (DNN) in single-media retrieval, the researchers have applied the DNN to cross-media retrieval. These methods are mainly two-stage learning: the first stage is to generate the separate representation for each media type, and the existing methods only model the intra-media information but ignore the inter-media correlation with the rich complementary context to the intra-media information. The second stage is to get the shared representation by learning the cross-media correlation, and the existing methods learn the shared representation through a shallow network structure, which cannot fully capture the complex cross-media correlation. For addressing the above problems, we propose the cross-media multiple deep network (CMDN) to exploit the complex cross-media correlation by hierarchical learning. In the first stage, CMDN jointly models the intra-media and intermedia information for getting the complementary separate representation of each media type. In the second stage, CMDN hierarchically combines the inter-media and intra-media representations to further learn the rich cross-media correlation by a deeper two-level network strategy, and finally get the shared representation by a stacked network style. Experiment results show that CMDN achieves better performance comparing with several state-of-the-art methods on 3 extensively used cross-media datasets.", "title": "" }, { "docid": "neg:1840046_1", "text": "It's being very important to listen to social media streams whether it's Twitter, Facebook, Messenger, LinkedIn, email or even company own application. As many customers may be using this streams to reach out to company because they need help. The company have setup social marketing team to monitor this stream. But due to huge volumes of users it's very difficult to analyses each and every social message and take a relevant action to solve users grievances, which lead to many unsatisfied customers or may even lose a customer. This papers proposes a system architecture which will try to overcome the above shortcoming by analyzing messages of each ejabberd users to check whether it's actionable or not. If it's actionable then an automated Chatbot will initiates conversation with that user and help the user to resolve the issue by providing a human way interactions using LUIS and cognitive services. To provide a highly robust, scalable and extensible architecture, this system is implemented on AWS public cloud.", "title": "" }, { "docid": "neg:1840046_2", "text": "Background: Software fault prediction is the process of developing models that can be used by the software practitioners in the early phases of software development life cycle for detecting faulty constructs such as modules or classes. There are various machine learning techniques used in the past for predicting faults. Method: In this study we perform a systematic review studies from January 1991 to October 2013 in the literature that use the machine learning techniques for software fault prediction. We assess the performance capability of the machine learning techniques in existing research for software fault prediction. We also compare the performance of the machine learning techniques with the", "title": "" }, { "docid": "neg:1840046_3", "text": "Many cluster management systems (CMSs) have been proposed to share a single cluster with multiple distributed computing systems. 
However, none of the existing approaches can handle distributed machine learning (ML) workloads given the following criteria: high resource utilization, fair resource allocation and low sharing overhead. To solve this problem, we propose a new CMS named Dorm, incorporating a dynamically-partitioned cluster management mechanism and an utilization-fairness optimizer. Specifically, Dorm uses the container-based virtualization technique to partition a cluster, runs one application per partition, and can dynamically resize each partition at application runtime for resource efficiency and fairness. Each application directly launches its tasks on the assigned partition without petitioning for resources frequently, so Dorm imposes flat sharing overhead. Extensive performance evaluations showed that Dorm could simultaneously increase the resource utilization by a factor of up to 2.32, reduce the fairness loss by a factor of up to 1.52, and speed up popular distributed ML applications by a factor of up to 2.72, compared to existing approaches. Dorm's sharing overhead is less than 5% in most cases.", "title": "" }, { "docid": "neg:1840046_4", "text": "Psoriasis vulgaris is a common and often chronic inflammatory skin disease. The incidence of psoriasis in Western industrialized countries ranges from 1.5% to 2%. Patients afflicted with severe psoriasis vulgaris may experience a significant reduction in quality of life. Despite the large variety of treatment options available, surveys have shown that patients still do not received optimal treatments. To optimize the treatment of psoriasis in Germany, the Deutsche Dermatologi sche Gesellschaft (DDG) and the Berufsverband Deutscher Dermatologen (BVDD) have initiated a project to develop evidence-based guidelines for the management of psoriasis. They were first published in 2006 and updated in 2011. The Guidelines focus on induction therapy in cases of mild, moderate and severe plaque-type psoriasis in adults including systemic therapy, UV therapy and topical therapies. The therapeutic recommendations were developed based on the results of a systematic literature search and were finalized during a consensus meeting using structured consensus methods (nominal group process).", "title": "" }, { "docid": "neg:1840046_5", "text": "Digital information is accumulating at an astounding rate, straining our ability to store and archive it. DNA is among the most dense and stable information media known. The development of new technologies in both DNA synthesis and sequencing make DNA an increasingly feasible digital storage medium. We developed a strategy to encode arbitrary digital information in DNA, wrote a 5.27-megabit book using DNA microchips, and read the book by using next-generation DNA sequencing.", "title": "" }, { "docid": "neg:1840046_6", "text": "The task of matching patterns in graph-structured data has applications in such diverse areas as computer vision, biology, electronics, computer aided design, social networks, and intelligence analysis. Consequently, work on graph-based pattern matching spans a wide range of research communities. Due to variations in graph characteristics and application requirements, graph matching is not a single problem, but a set of related problems. This paper presents a survey of existing work on graph matching, describing variations among problems, general and specific solution approaches, evaluation techniques, and directions for further research. 
An emphasis is given to techniques that apply to general graphs with semantic characteristics.", "title": "" }, { "docid": "neg:1840046_7", "text": "In recent years, classification of colon biopsy images has become an active research area. Traditionally, colon cancer is diagnosed using microscopic analysis. However, the process is subjective and leads to considerable inter/intra observer variation. Therefore, reliable computer-aided colon cancer detection techniques are in high demand. In this paper, we propose a colon biopsy image classification system, called CBIC, which benefits from discriminatory capabilities of information rich hybrid feature spaces, and performance enhancement based on ensemble classification methodology. Normal and malignant colon biopsy images differ with each other in terms of the color distribution of different biological constituents. The colors of different constituents are sharp in normal images, whereas the colors diffuse with each other in malignant images. In order to exploit this variation, two feature types, namely color components based statistical moments (CCSM) and Haralick features have been proposed, which are color components based variants of their traditional counterparts. Moreover, in normal colon biopsy images, epithelial cells possess sharp and well-defined edges. Histogram of oriented gradients (HOG) based features have been employed to exploit this information. Different combinations of hybrid features have been constructed from HOG, CCSM, and Haralick features. The minimum Redundancy Maximum Relevance (mRMR) feature selection method has been employed to select meaningful features from individual and hybrid feature sets. Finally, an ensemble classifier based on majority voting has been proposed, which classifies colon biopsy images using the selected features. Linear, RBF, and sigmoid SVM have been employed as base classifiers. The proposed system has been tested on 174 colon biopsy images, and improved performance (=98.85%) has been observed compared to previously reported studies. Additionally, the use of mRMR method has been justified by comparing the performance of CBIC on original and reduced feature sets.", "title": "" }, { "docid": "neg:1840046_8", "text": "This paper studies the minimum achievable source coding rate as a function of blocklength <i>n</i> and probability ϵ that the distortion exceeds a given level <i>d</i> . Tight general achievability and converse bounds are derived that hold at arbitrary fixed blocklength. For stationary memoryless sources with separable distortion, the minimum rate achievable is shown to be closely approximated by <i>R</i>(<i>d</i>) + √<i>V</i>(<i>d</i>)/(<i>n</i>) <i>Q</i><sup>-1</sup>(ϵ), where <i>R</i>(<i>d</i>) is the rate-distortion function, <i>V</i>(<i>d</i>) is the rate dispersion, a characteristic of the source which measures its stochastic variability, and <i>Q</i><sup>-1</sup>(·) is the inverse of the standard Gaussian complementary cumulative distribution function.", "title": "" }, { "docid": "neg:1840046_9", "text": "This paper describes our proposed solution for SemEval 2017 Task 1: Semantic Textual Similarity (Daniel Cer and Specia, 2017). The task aims at measuring the degree of equivalence between sentences given in English. Performance is evaluated by computing Pearson Correlation scores between the predicted scores and human judgements. Our proposed system consists of two subsystems and one regression model for predicting STS scores. 
The two subsystems are designed to learn Paraphrase and Event Embeddings that can take the consideration of paraphrasing characteristics and sentence structures into our system. The regression model associates these embeddings to make the final predictions. The experimental result shows that our system acquires 0.8 of Pearson Correlation Scores in this task.", "title": "" }, { "docid": "neg:1840046_10", "text": "Single cells were recorded in the visual cortex of monkeys trained to attend to stimuli at one location in the visual field and ignore stimuli at another. When both locations were within the receptive field of a cell in prestriate area V4 or the inferior temporal cortex, the response to the unattended stimulus was dramatically reduced. Cells in the striate cortex were unaffected by attention. The filtering of irrelevant information from the receptive fields of extrastriate neurons may underlie the ability to identify and remember the properties of a particular object out of the many that may be represented on the retina.", "title": "" }, { "docid": "neg:1840046_11", "text": "In recent years, despite several risk management models proposed by different researchers, software projects still have a high degree of failures. Improper risk assessment during software development was the major reason behind these unsuccessful projects as risk analysis was done on overall projects. This work attempts in identifying key risk factors and risk types for each of the development phases of SDLC, which would help in identifying the risks at a much early stage of development.", "title": "" }, { "docid": "neg:1840046_12", "text": "A circularly polarized magnetoelectric dipole antenna with high efficiency based on printed ridge gap waveguide is presented. The antenna gain is improved by using a wideband lens in front of the antennas. The lens consists of three layers dual-polarized mu-near zero (MNZ) inclusions. Each layer consists of a <inline-formula> <tex-math notation=\"LaTeX\">$3\\times4$ </tex-math></inline-formula> MNZ unit cell. The measured results indicate that the magnitude of <inline-formula> <tex-math notation=\"LaTeX\">$S_{11}$ </tex-math></inline-formula> is below −10 dB in the frequency range of 29.5–37 GHz. The resulting 3-dB axial ratio is over a frequency range of 32.5–35 GHz. The measured realized gain of the antenna is more than 10 dBi over a frequency band of 31–35 GHz achieving a radiation efficiency of 94% at 34 GHz.", "title": "" }, { "docid": "neg:1840046_13", "text": "In an Intelligent Environment, he user and the environment work together in a unique manner; the user expresses what he wishes to do, and the environment recognizes his intentions and helps out however it can. If well-implemented, such an environment allows the user to interact with it in the manner that is most natural for him personally. He should need virtually no time to learn to use it and should be more productive once he has. But to implement a useful and natural Intelligent Environment, he designers are faced with a daunting task: they must design a software system that senses what its users do, understands their intentions, and then responds appropriately. In this paper we argue that, in order to function reasonably in any of these ways, an Intelligent Environment must make use of declarative representations of what the user might do. 
We present our evidence in the context of the Intelligent Classroom, a facility that aids a speaker in this way and uses its understanding to produce a video of his presentation.", "title": "" }, { "docid": "neg:1840046_14", "text": "The current practice used in the design of physical interactive products (such as handheld devices), often suffers from a divide between exploration of form and exploration of interactivity. This can be attributed, in part, to the fact that working prototypes are typically expensive, take a long time to manufacture, and require specialized skills and tools not commonly available in design studios.We have designed a prototyping tool that, we believe, can significantly reduce this divide. The tool allows designers to rapidly create functioning, interactive, physical prototypes early in the design process using a collection of wireless input components (buttons, sliders, etc.) and a sketch of form. The input components communicate with Macromedia Director to enable interactivity.We believe that this tool can improve the design practice by: a) Improving the designer's ability to explore both the form and interactivity of the product early in the design process, b) Improving the designer's ability to detect problems that emerge from the combination of the form and the interactivity, c) Improving users' ability to communicate their ideas, needs, frustrations and desires, and d) Improving the client's understanding of the proposed design, resulting in greater involvement and support for the design.", "title": "" }, { "docid": "neg:1840046_15", "text": "Traditional data on influenza vaccination has several limitations: high cost, limited coverage of underrepresented groups, and low sensitivity to emerging public health issues. Social media, such as Twitter, provide an alternative way to understand a population’s vaccination-related opinions and behaviors. In this study, we build and employ several natural language classifiers to examine and analyze behavioral patterns regarding influenza vaccination in Twitter across three dimensions: temporality (by week and month), geography (by US region), and demography (by gender). Our best results are highly correlated official government data, with a correlation over 0.90, providing validation of our approach. We then suggest a number of directions for future work.", "title": "" }, { "docid": "neg:1840046_16", "text": "Dowser is a ‘guided’ fuzzer that combines taint tracking, program analysis and symbolic execution to find buffer overflow and underflow vulnerabilities buried deep in a program’s logic. The key idea is that analysis of a program lets us pinpoint the right areas in the program code to probe and the appropriate inputs to do so. Intuitively, for typical buffer overflows, we need consider only the code that accesses an array in a loop, rather than all possible instructions in the program. After finding all such candidate sets of instructions, we rank them according to an estimation of how likely they are to contain interesting vulnerabilities. We then subject the most promising sets to further testing. Specifically, we first use taint analysis to determine which input bytes influence the array index and then execute the program symbolically, making only this set of inputs symbolic. By constantly steering the symbolic execution along branch outcomes most likely to lead to overflows, we were able to detect deep bugs in real programs (like the nginx webserver, the inspircd IRC server, and the ffmpeg videoplayer). 
Two of the bugs we found were previously undocumented buffer overflows in ffmpeg and the poppler PDF rendering library.", "title": "" }, { "docid": "neg:1840046_17", "text": "The use of dialogue systems in vehicles raises the problem of making sure that the dialogue does not distract the driver from the primary task of driving. Earlier studies have indicated that humans are very apt at adapting the dialogue to the traffic situation and the cognitive load of the driver. The goal of this paper is to investigate strategies for interrupting and resuming in, as well as changing topic domain of, spoken human-human in-vehicle dialogue. The results show a large variety of strategies being used, and indicate that the choice of resumption and domain-switching strategy depends partly on the topic domain being resumed, and partly on the role of the speaker (driver or passenger). These results will be used as a basis for the development of dialogue strategies for interruption, resumption and domain-switching in the DICO in-vehicle dialogue system.", "title": "" }, { "docid": "neg:1840046_18", "text": "Before the big data era, scene recognition was often approached with two-step inference using localized intermediate representations (objects, topics, and so on). One of such approaches is the semantic manifold (SM), in which patches and images are modeled as points in a semantic probability simplex. Patch models are learned resorting to weak supervision via image labels, which leads to the problem of scene categories co-occurring in this semantic space. Fortunately, each category has its own co-occurrence patterns that are consistent across the images in that category. Thus, discovering and modeling these patterns are critical to improve the recognition performance in this representation. Since the emergence of large data sets, such as ImageNet and Places, these approaches have been relegated in favor of the much more powerful convolutional neural networks (CNNs), which can automatically learn multi-layered representations from the data. In this paper, we address many limitations of the original SM approach and related works. We propose discriminative patch representations using neural networks and further propose a hybrid architecture in which the semantic manifold is built on top of multiscale CNNs. Both representations can be computed significantly faster than the Gaussian mixture models of the original SM. To combine multiple scales, spatial relations, and multiple features, we formulate rich context models using Markov random fields. To solve the optimization problem, we analyze global and local approaches, where a top–down hierarchical algorithm has the best performance. Experimental results show that exploiting different types of contextual relations jointly consistently improves the recognition accuracy.", "title": "" }, { "docid": "neg:1840046_19", "text": "The facial system plays an important role in human-robot interaction. EveR-4 H33 is a head system for an android face controlled by thirty-three motors. It consists of three layers: a mechanical layer, an inner cover layer and an outer cover layer. Motors are attached under the skin and some motors are correlated with each other. Some expressions cannot be shown by moving just one motor. In addition, moving just one motor can cause damage to other motors or the skin. To solve these problems, a facial muscle control method that controls motors in a correlated manner is required. 
We designed a facial muscle control method and applied it to EveR-4 H33. We developed the actress robot EveR-4A by applying the EveR-4 H33 to the 24-degree-of-freedom upper body and mannequin legs. EveR-4A shows various facial expressions with lip synchronization using our facial muscle control method.", "title": "" } ]
1840047
PGX.D: a fast distributed graph processing engine
[ { "docid": "pos:1840047_0", "text": "Given a query graph q and a data graph g, the subgraph isomorphism search finds all occurrences of q in g and is considered one of the most fundamental query types for many real applications. While this problem belongs to NP-hard, many algorithms have been proposed to solve it in a reasonable time for real datasets. However, a recent study has shown, through an extensive benchmark with various real datasets, that all existing algorithms have serious problems in their matching order selection. Furthermore, all algorithms blindly permutate all possible mappings for query vertices, often leading to useless computations. In this paper, we present an efficient and robust subgraph search solution, called TurboISO, which is turbo-charged with two novel concepts, candidate region exploration and the combine and permute strategy (in short, Comb/Perm). The candidate region exploration identifies on-the-fly candidate subgraphs (i.e, candidate regions), which contain embeddings, and computes a robust matching order for each candidate region explored. The Comb/Perm strategy exploits the novel concept of the neighborhood equivalence class (NEC). Each query vertex in the same NEC has identically matching data vertices. During subgraph isomorphism search, Comb/Perm generates only combinations for each NEC instead of permutating all possible enumerations. Thus, if a chosen combination is determined to not contribute to a complete solution, all possible permutations for that combination will be safely pruned. Extensive experiments with many real datasets show that TurboISO consistently and significantly outperforms all competitors by up to several orders of magnitude.", "title": "" }, { "docid": "pos:1840047_1", "text": "With the proliferation of large, irregular, and sparse relational datasets, new storage and analysis platforms have arisen to fill gaps in performance and capability left by conventional approaches built on traditional database technologies and query languages. Many of these platforms apply graph structures and analysis techniques to enable users to ingest, update, query, and compute on the topological structure of the network represented as sets of edges relating sets of vertices. To store and process Facebook-scale datasets, software and algorithms must be able to support data sources with billions of edges, update rates of millions of updates per second, and complex analysis kernels. These platforms must provide intuitive interfaces that enable graph experts and novice programmers to write implementations of common graph algorithms. In this paper, we conduct a qualitative study and a performance comparison of 12 open source graph databases using four fundamental graph algorithms on networks containing up to 256 million edges.", "title": "" } ]
[ { "docid": "neg:1840047_0", "text": "Money laundering is a global problem that affects all countries to various degrees. Although, many countries take benefits from money laundering, by accepting the money from laundering but keeping the crime abroad, at the long run, “money laundering attracts crime”. Criminals come to know a country, create networks and eventually also locate their criminal activities there. Most financial institutions have been implementing antimoney laundering solutions (AML) to fight investment fraud. The key pillar of a strong Anti-Money Laundering system for any financial institution depends mainly on a well-designed and effective monitoring system. The main purpose of the Anti-Money Laundering transactions monitoring system is to identify potential suspicious behaviors embedded in legitimate transactions. This paper presents a monitor framework that uses various techniques to enhance the monitoring capabilities. This framework is depending on rule base monitoring, behavior detection monitoring, cluster monitoring and link analysis based monitoring. The monitor detection processes are based on a money laundering deterministic finite automaton that has been obtained from their corresponding regular expressions. Index Terms – Anti Money Laundering system, Money laundering monitoring and detecting, Cycle detection monitoring, Suspected Link monitoring.", "title": "" }, { "docid": "neg:1840047_1", "text": "Learning generic and robust feature representations with data from multiple domains for the same problem is of great value, especially for the problems that have multiple datasets but none of them are large enough to provide abundant data variations. In this work, we present a pipeline for learning deep feature representations from multiple domains with Convolutional Neural Networks (CNNs). When training a CNN with data from all the domains, some neurons learn representations shared across several domains, while some others are effective only for a specific one. Based on this important observation, we propose a Domain Guided Dropout algorithm to improve the feature learning procedure. Experiments show the effectiveness of our pipeline and the proposed algorithm. Our methods on the person re-identification problem outperform stateof-the-art methods on multiple datasets by large margins.", "title": "" }, { "docid": "neg:1840047_2", "text": "The past decade has witnessed a rapid proliferation of video cameras in all walks of life and has resulted in a tremendous explosion of video content. Several applications such as content-based video annotation and retrieval, highlight extraction and video summarization require recognition of the activities occurring in the video. The analysis of human activities in videos is an area with increasingly important consequences from security and surveillance to entertainment and personal archiving. Several challenges at various levels of processing-robustness against errors in low-level processing, view and rate-invariant representations at midlevel processing and semantic representation of human activities at higher level processing-make this problem hard to solve. In this review paper, we present a comprehensive survey of efforts in the past couple of decades to address the problems of representation, recognition, and learning of human activities from video and related applications. 
We discuss the problem at two major levels of complexity: 1) \"actions\" and 2) \"activities.\" \"Actions\" are characterized by simple motion patterns typically executed by a single human. \"Activities\" are more complex and involve coordinated actions among a small number of humans. We will discuss several approaches and classify them according to their ability to handle varying degrees of complexity as interpreted above. We begin with a discussion of approaches to model the simplest of action classes known as atomic or primitive actions that do not require sophisticated dynamical modeling. Then, methods to model actions with more complex dynamics are discussed. The discussion then leads naturally to methods for higher level representation of complex activities.", "title": "" }, { "docid": "neg:1840047_3", "text": "OBJECTIVES\nTo evaluate whether 7-mm-long implants could be an alternative to longer implants placed in vertically augmented posterior mandibles.\n\n\nMATERIALS AND METHODS\nSixty patients with posterior mandibular edentulism with 7-8 mm bone height above the mandibular canal were randomized to either vertical augmentation with anorganic bovine bone blocks and delayed 5-month placement of ≥10 mm implants or to receive 7-mm-long implants. Four months after implant placement, provisional prostheses were delivered, replaced after 4 months, by definitive prostheses. The outcome measures were prosthesis and implant failures, any complications and peri-implant marginal bone levels. All patients were followed to 1 year after loading.\n\n\nRESULTS\nOne patient dropped out from the short implant group. In two augmented mandibles, there was not sufficient bone to place 10-mm-long implants possibly because the blocks had broken apart during insertion. One prosthesis could not be placed when planned in the 7 mm group vs. three prostheses in the augmented group, because of early failure of one implant in each patient. Four complications (wound dehiscence) occurred during graft healing in the augmented group vs. none in the 7 mm group. No complications occurred after implant placement. These differences were not statistically significant. One year after loading, patients of both groups lost an average of 1 mm of peri-implant bone. There no statistically significant differences in bone loss between groups.\n\n\nCONCLUSIONS\nWhen residual bone height over the mandibular canal is between 7 and 8 mm, 7 mm short implants might be a preferable choice than vertical augmentation, reducing the chair time, expenses and morbidity. These 1-year preliminary results need to be confirmed by follow-up of at least 5 years.", "title": "" }, { "docid": "neg:1840047_4", "text": "This work intends to build a Game Mechanics Ontology based on the mechanics category presented in BoardGameGeek.com vis à vis the formal concepts from the MDA framework. The 51 concepts presented in BoardGameGeek (BGG) as game mechanics are analyzed and arranged in a systemic way in order to build a domain sub-ontology in which the root concept is the mechanics as defined in MDA. The relations between the terms were built from its available descriptions as well as from the authors’ previous experiences. Our purpose is to show that a set of terms commonly accepted by players can lead us to better understand how players perceive the games components that are closer to the designer. The ontology proposed in this paper is not exhaustive. 
The intent of this work is to supply a tool to game designers, scholars, and others that see game artifacts as study objects or are interested in creating games. However, although it can be used as a starting point for games construction or study, the proposed Game Mechanics Ontology should be seen as the seed of a domain ontology encompassing game mechanics in general.", "title": "" }, { "docid": "neg:1840047_5", "text": "Sentiment analysis in Twitter is a field that has recently attracted research interest. Twitter is one of the most popular microblog platforms on which users can publish their thoughts and opinions. Sentiment analysis in Twitter tackles the problem of analyzing the tweets in terms of the opinion they express. This survey provides an overview of the topic by investigating and briefly describing the algorithms that have been proposed for sentiment analysis in Twitter. The presented studies are categorized according to the approach they follow. In addition, we discuss fields related to sentiment analysis in Twitter including Twitter opinion retrieval, tracking sentiments over time, irony detection, emotion detection, and tweet sentiment quantification, tasks that have recently attracted increasing attention. Resources that have been used in the Twitter sentiment analysis literature are also briefly presented. The main contributions of this survey include the presentation of the proposed approaches for sentiment analysis in Twitter, their categorization according to the technique they use, and the discussion of recent research trends of the topic and its related fields.", "title": "" }, { "docid": "neg:1840047_6", "text": "A prediction market is a place where individuals can wager on the outcomes of future events. Those who forecast the outcome correctly win money, and if they forecast incorrectly, they lose money. People value money, so they are incentivized to forecast such outcomes as accurately as they can. Thus, the price of a prediction market can serve as an excellent indicator of how likely an event is to occur [1, 2]. Augur is a decentralized platform for prediction markets. Our goal here is to provide a blueprint of a decentralized prediction market using Bitcoin’s input/output-style transactions. Many theoretical details of this project, such as its game-theoretic underpinning, are touched on lightly or not at all. This work builds on (and is intended to be read as a companion to) the theoretical foundation established in [3].", "title": "" }, { "docid": "neg:1840047_7", "text": "We present a Convolutional Neural Network (CNN) regression based framework for 2-D/3-D medical image registration, which directly estimates the transformation parameters from image features extracted from the DRR and the X-ray images using learned hierarchical regressors. Our framework consists of learning and application stages. In the learning stage, CNN regressors are trained using supervised machine learning to reveal the correlation between the transformation parameters and the image features. In the application stage, CNN regressors are applied on extracted image features in a hierarchical manner to estimate the transformation parameters. 
Our experiment results demonstrate that the proposed method can achieve real-time 2-D/3-D registration with very high (i.e., sub-milliliter) accuracy.", "title": "" }, { "docid": "neg:1840047_8", "text": "Machine translation systems achieve near human-level performance on some languages, yet their effectiveness strongly relies on the availability of large amounts of parallel sentences, which hinders their applicability to the majority of language pairs. This work investigates how to learn to translate when having access to only large monolingual corpora in each language. We propose two model variants, a neural and a phrase-based model. Both versions leverage a careful initialization of the parameters, the denoising effect of language models and automatic generation of parallel data by iterative back-translation. These models are significantly better than methods from the literature, while being simpler and having fewer hyper-parameters. On the widely used WMT’14 English-French and WMT’16 German-English benchmarks, our models respectively obtain 28.1 and 25.2 BLEU points without using a single parallel sentence, outperforming the state of the art by more than 11 BLEU points. On low-resource languages like English-Urdu and English-Romanian, our methods achieve even better results than semisupervised and supervised approaches leveraging the paucity of available bitexts. Our code for NMT and PBSMT is publicly available.1", "title": "" }, { "docid": "neg:1840047_9", "text": "BACKGROUND\nLong-term continuous systolic blood pressure (SBP) and heart rate (HR) monitors are of tremendous value to medical (cardiovascular, circulatory and cerebrovascular management), wellness (emotional and stress tracking) and fitness (performance monitoring) applications, but face several major impediments, such as poor wearability, lack of widely accepted robust SBP models and insufficient proofing of the generalization ability of calibrated models.\n\n\nMETHODS\nThis paper proposes a wearable cuff-less electrocardiography (ECG) and photoplethysmogram (PPG)-based SBP and HR monitoring system and many efforts are made focusing on above challenges. Firstly, both ECG/PPG sensors are integrated into a single-arm band to provide a super wearability. A highly convenient but challenging single-lead configuration is proposed for weak single-arm-ECG acquisition, instead of placing the electrodes on the chest, or two wrists. Secondly, to identify heartbeats and estimate HR from the motion artifacts-sensitive weak arm-ECG, a machine learning-enabled framework is applied. Then ECG-PPG heartbeat pairs are determined for pulse transit time (PTT) measurement. Thirdly, a PTT&HR-SBP model is applied for SBP estimation, which is also compared with many PTT-SBP models to demonstrate the necessity to introduce HR information in model establishment. Fourthly, the fitted SBP models are further evaluated on the unseen data to illustrate the generalization ability. A customized hardware prototype was established and a dataset collected from ten volunteers was acquired to evaluate the proof-of-concept system.\n\n\nRESULTS\nThe semi-customized prototype successfully acquired from the left upper arm the PPG signal, and the weak ECG signal, the amplitude of which is only around 10% of that of the chest-ECG. The HR estimation has a mean absolute error (MAE) and a root mean square error (RMSE) of only 0.21 and 1.20 beats per min, respectively. Through the comparative analysis, the PTT&HR-SBP models significantly outperform the PTT-SBP models. 
The testing performance is 1.63 ± 4.44, 3.68, 4.71 mmHg in terms of mean error ± standard deviation, MAE and RMSE, respectively, indicating a good generalization ability on the unseen fresh data.\n\n\nCONCLUSIONS\nThe proposed proof-of-concept system is highly wearable, and its robustness is thoroughly evaluated on different modeling strategies and also the unseen data, which are expected to contribute to long-term pervasive hypertension, heart health and fitness management.", "title": "" }, { "docid": "neg:1840047_10", "text": "The Bidirectional Reflectance Distribution Function (BRDF) describes the appearance of a material by its interaction with light at a surface point. A variety of analytical models have been proposed to represent BRDFs. However, analysis of these models has been scarce due to the lack of high-resolution measured data. In this work we evaluate several well-known analytical models in terms of their ability to fit measured BRDFs. We use an existing high-resolution data set of a hundred isotropic materials and compute the best approximation for each analytical model. Furthermore, we have built a new setup for efficient acquisition of anisotropic BRDFs, which allows us to acquire anisotropic materials at high resolution. We have measured four samples of anisotropic materials (brushed aluminum, velvet, and two satins). Based on the numerical errors, function plots, and rendered images we provide insights into the performance of the various models. We conclude that for most isotropic materials physically-based analytic reflectance models can represent their appearance quite well. We illustrate the important difference between the two common ways of defining the specular lobe: around the mirror direction and with respect to the half-vector. Our evaluation shows that the latter gives a more accurate shape for the reflection lobe. Our analysis of anisotropic materials indicates current parametric reflectance models cannot represent their appearances faithfully in many cases. We show that using a sampled microfacet distribution computed from measurements improves the fit and qualitatively reproduces the measurements.", "title": "" }, { "docid": "neg:1840047_11", "text": "Structured sparse coding and the related structured dictionary learning problems are novel research areas in machine learning. In this paper we present a new application of structured dictionary learning for collaborative filtering based recommender systems. Our extensive numerical experiments demonstrate that the presented method outperforms its state-of-the-art competitors and has several advantages over approaches that do not put structured constraints on the dictionary elements.", "title": "" }, { "docid": "neg:1840047_12", "text": "L-Systems have traditionally been used as a popular method for the modelling of spacefilling curves, biological systems and morphogenesis. In this paper, we adapt string rewriting grammars based on L-Systems into a system for music composition. Representation of pitch, duration and timbre are encoded as grammar symbols, upon which a series of re-writing rules are applied. Parametric extensions to the grammar allow the specification of continuous data for the purposes of modulation and control. Such continuous data is also under control of the grammar. 
Using non-deterministic grammars with context sensitivity allows the simulation of Nth-order Markov models with a more economical representation than transition matrices and greater flexibility than previous composition models based on finite state automata or Petri nets. Using symbols in the grammar to represent relationships between notes, (rather than absolute notes) in combination with a hierarchical grammar representation, permits the emergence of complex music compositions from a relatively simple grammars.", "title": "" }, { "docid": "neg:1840047_13", "text": "Recommender systems play a crucial role in mitigating the problem of information overload by suggesting users' personalized items or services. The vast majority of traditional recommender systems consider the recommendation procedure as a static process and make recommendations following a fixed strategy. In this paper, we propose a novel recommender system with the capability of continuously improving its strategies during the interactions with users. We model the sequential interactions between users and a recommender system as a Markov Decision Process (MDP) and leverage Reinforcement Learning (RL) to automatically learn the optimal strategies via recommending trial-and-error items and receiving reinforcements of these items from users' feedback. Users' feedback can be positive and negative and both types of feedback have great potentials to boost recommendations. However, the number of negative feedback is much larger than that of positive one; thus incorporating them simultaneously is challenging since positive feedback could be buried by negative one. In this paper, we develop a novel approach to incorporate them into the proposed deep recommender system (DEERS) framework. The experimental results based on real-world e-commerce data demonstrate the effectiveness of the proposed framework. Further experiments have been conducted to understand the importance of both positive and negative feedback in recommendations.", "title": "" }, { "docid": "neg:1840047_14", "text": "In this paper we describe a method of procedurally generating maps using Markov chains. This method learns statistical patterns from human-authored maps, which are assumed to be of high quality. Our method then uses those learned patterns to generate new maps. We present a collection of strategies both for training the Markov chains, and for generating maps from such Markov chains. We then validate our approach using the game Super Mario Bros., by evaluating the quality of the produced maps based on different configurations for training and generation.", "title": "" }, { "docid": "neg:1840047_15", "text": "OBJECTIVES\nUrinalysis is one of the most commonly performed tests in the clinical laboratory. However, manual microscopic sediment examination is labor-intensive, time-consuming, and lacks standardization in high-volume laboratories. In this study, the concordance of analyses between manual microscopic examination and two different automatic urine sediment analyzers has been evaluated.\n\n\nDESIGN AND METHODS\n209 urine samples were analyzed by the Iris iQ200 ELITE (İris Diagnostics, USA), Dirui FUS-200 (DIRUI Industrial Co., China) automatic urine sediment analyzers and by manual microscopic examination. 
The degree of concordance (Kappa coefficient) and the rates within the same grading were evaluated.\n\n\nRESULTS\nFor erythrocytes, leukocytes, epithelial cells, bacteria, crystals and yeasts, the degree of concordance between the two instruments was better than the degree of concordance between the manual microscopic method and the individual devices. There was no concordance between all methods for casts.\n\n\nCONCLUSION\nThe results from the automated analyzers for erythrocytes, leukocytes and epithelial cells were similar to the result of microscopic examination. However, in order to avoid any error or uncertainty, some images (particularly: dysmorphic cells, bacteria, yeasts, casts and crystals) have to be analyzed by manual microscopic examination by trained staff. Therefore, the software programs which are used in automatic urine sediment analysers need further development to recognize urinary shaped elements more accurately. Automated systems are important in terms of time saving and standardization.", "title": "" }, { "docid": "neg:1840047_16", "text": "Arterial plasma glucose values throughout a 24-h period average approximately 90 mg/dl, with a maximal concentration usually not exceeding 165 mg/dl such as after meal ingestion1 and remaining above 55 mg/dl such as after exercise2 or a moderate fast (60 h).3 This relative stability contrasts with the situation for other substrates such as glycerol, lactate, free fatty acids, and ketone bodies whose fluctuations are much wider (Table 2.1).4 This narrow range defining normoglycemia is maintained through an intricate regulatory and counterregulatory neuro-hormonal system: A decrement in plasma glucose as little as 20 mg/dl (from 90 to 70 mg/dl) will suppress the release of insulin and will decrease glucose uptake in certain areas in the brain (e.g., hypothalamus where glucose sensors are located); this will activate the sympathetic nervous system and trigger the release of counterregulatory hormones (glucagon, catecholamines, cortisol, and growth hormone).5 All these changes will increase glucose release into plasma and decrease its removal so as to restore normoglycemia. On the other hand, a 10 mg/dl increment in plasma glucose will stimulate insulin release and suppress glucagon secretion to prevent further increments and restore normoglycemia. Glucose in plasma either comes from dietary sources or is either the result of the breakdown of glycogen in liver (glycogenolysis) or the formation of glucose in liver and kidney from other carbons compounds (precursors) such as lactate, pyruvate, amino acids, and glycerol (gluconeogenesis). In humans, glucose removed from plasma may have different fates in different tissues and under different conditions (e.g., postabsorptive vs. postprandial), but the pathways for its disposal are relatively limited. It (1) may be immediately stored as glycogen or (2) may undergo glycolysis, which can be non-oxidative producing pyruvate (which can be reduced to lactate or transaminated to form alanine) or oxidative through conversion to acetyl CoA which is further oxidized through the tricarboxylic acid cycle to form carbon dioxide and water. Non-oxidative glycolysis carbons undergo gluconeogenesis and the newly formed glucose is either stored as glycogen or released back into plasma (Fig. 
2.1).", "title": "" }, { "docid": "neg:1840047_17", "text": "Large scale visual understanding is challenging, as it requires a model to handle the widely-spread and imbalanced distribution of 〈subject, relation, object〉 triples. In real-world scenarios with large numbers of objects and relations, some are seen very commonly while others are barely seen. We develop a new relationship detection model that embeds objects and relations into two vector spaces where both discriminative capability and semantic affinity are preserved. We learn a visual and a semantic module that map features from the two modalities into a shared space, where matched pairs of features have to discriminate against those unmatched, but also maintain close distances to semantically similar ones. Benefiting from that, our model can achieve superior performance even when the visual entity categories scale up to more than 80, 000, with extremely skewed class distribution. We demonstrate the efficacy of our model on a large and imbalanced benchmark based of Visual Genome that comprises 53, 000+ objects and 29, 000+ relations, a scale at which no previous work has been evaluated at. We show superiority of our model over competitive baselines on the original Visual Genome dataset with 80, 000+ categories. We also show state-of-the-art performance on the VRD dataset and the scene graph dataset which is a subset of Visual Genome with 200 categories.", "title": "" }, { "docid": "neg:1840047_18", "text": "The Great East Japan Earthquake and Tsunami drastically changed Japanese society, and the requirements for ICT was completely redefined. After the disaster, it was impossible for disaster victims to utilize their communication devices, such as cellular phones, tablet computers, or laptop computers, to notify their families and friends of their safety and confirm the safety of their loved ones since the communication infrastructures were physically damaged or lacked the energy necessary to operate. Due to this drastic event, we have come to realize the importance of device-to-device communications. With the recent increase in popularity of D2D communications, many research works are focusing their attention on a centralized network operated by network operators and neglect the importance of decentralized infrastructureless multihop communication, which is essential for disaster relief applications. In this article, we propose the concept of multihop D2D communication network systems that are applicable to many different wireless technologies, and clarify requirements along with introducing open issues in such systems. The first generation prototype of relay by smartphone can deliver messages using only users' mobile devices, allowing us to send out emergency messages from disconnected areas as well as information sharing among people gathered in evacuation centers. The success of field experiments demonstrates steady advancement toward realizing user-driven networking powered by communication devices independent of operator networks.", "title": "" }, { "docid": "neg:1840047_19", "text": "J. K. Strosnider P. Nandi S. Kumaran S. Ghosh A. Arsanjani The current approach to the design, maintenance, and governance of service-oriented architecture (SOA) solutions has focused primarily on flow-driven assembly and orchestration of reusable service components. 
The practical application of this approach in creating industry solutions has been limited, because flow-driven assembly and orchestration models are too rigid and static to accommodate complex, real-world business processes. Furthermore, the approach assumes a rich, easily configured library of reusable service components when in fact the development, maintenance, and governance of these libraries is difficult. An alternative approach pioneered by the IBM Research Division, model-driven business transformation (MDBT), uses a model-driven software synthesis technology to automatically generate production-quality business service components from high-level business process models. In this paper, we present the business entity life cycle analysis (BELA) technique for MDBT-based SOA solution realization and its integration into serviceoriented modeling and architecture (SOMA), the end-to-end method from IBM for SOA application and solution development. BELA shifts the process-modeling paradigm from one that is centered on activities to one that is centered on entities. BELA teams process subject-matter experts with IT and data architects to identify and specify business entities and decompose business processes. Supporting synthesis tools then automatically generate the interacting business entity service components and their associated data stores and service interface definitions. We use a large-scale project as an example demonstrating the benefits of this innovation, which include an estimated 40 percent project cost reduction and an estimated 20 percent reduction in cycle time when compared with conventional SOA approaches.", "title": "" } ]
1840048
Selection of K in K-means clustering
[ { "docid": "pos:1840048_0", "text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.", "title": "" }, { "docid": "pos:1840048_1", "text": "In this paper, we aim to compare empirically four initialization methods for the K-Means algorithm: random, Forgy, MacQueen and Kaufman. Although this algorithm is known for its robustness, it is widely reported in literature that its performance depends upon two key points: initial clustering and instance order. We conduct a series of experiments to draw up (in terms of mean, maximum, minimum and standard deviation) the probability distribution of the square-error values of the nal clusters returned by the K-Means algorithm independently on any initial clustering and on any instance order when each of the four initialization methods is used. The results of our experiments illustrate that the random and the Kauf-man initialization methods outperform the rest of the compared methods as they make the K-Means more eeective and more independent on initial clustering and on instance order. In addition, we compare the convergence speed of the K-Means algorithm when using each of the four initialization methods. Our results suggest that the Kaufman initialization method induces to the K-Means algorithm a more desirable behaviour with respect to the convergence speed than the random initial-ization method.", "title": "" }, { "docid": "pos:1840048_2", "text": "We investigate here the behavior of the standard k-means clustering algorithm and several alternatives to it: the k-harmonic means algorithm due to Zhang and colleagues, fuzzy k-means, Gaussian expectation-maximization, and two new variants of k-harmonic means. Our aim is to find which aspects of these algorithms contribute to finding good clusterings, as opposed to converging to a low-quality local optimum. We describe each algorithm in a unified framework that introduces separate cluster membership and data weight functions. We then show that the algorithms do behave very differently from each other on simple low-dimensional synthetic datasets and image segmentation tasks, and that the k-harmonic means method is superior. Having a soft membership function is essential for finding high-quality clusterings, but having a non-constant data weight function is useful also.", "title": "" } ]
[ { "docid": "neg:1840048_0", "text": "With increasing quality requirements for multimedia communications, audio codecs must maintain both high quality and low delay. Typically, audio codecs offer either low delay or high quality, but rarely both. We propose a codec that simultaneously addresses both these requirements, with a delay of only 8.7 ms at 44.1 kHz. It uses gain-shape algebraic vector quantization in the frequency domain with time-domain pitch prediction. We demonstrate that the proposed codec operating at 48 kb/s and 64 kb/s out-performs both G.722.1C and MP3 and has quality comparable to AAC-LD, despite having less than one fourth of the algorithmic delay of these codecs.", "title": "" }, { "docid": "neg:1840048_1", "text": "Understanding inter-character relationships is fundamental for understanding character intentions and goals in a narrative. This paper addresses unsupervised modeling of relationships between characters. We model relationships as dynamic phenomenon, represented as evolving sequences of latent states empirically learned from data. Unlike most previous work our approach is completely unsupervised. This enables data-driven inference of inter-character relationship types beyond simple sentiment polarities, by incorporating lexical and semantic representations, and leveraging large quantities of raw text. We present three models based on rich sets of linguistic features that capture various cues about relationships. We compare these models with existing techniques and also demonstrate that relationship categories learned by our model are semantically coherent.", "title": "" }, { "docid": "neg:1840048_2", "text": "Worldwide the pros and cons of games and social behaviour are discussed. In Western countries the discussion is focussing on violent game and media content; in Japan on intensive game usage and the impact on the intellectual development of children. A lot is already discussed on the harmful and negative effects of entertainment technology on human behaviour, therefore we decided to focus primarily on the positive effects. Based on an online document search we could find and select 393 online available publications according the following categories: meta review (N=34), meta analysis (N=13), literature review (N=38), literature survey (N=36), empirical study (N=91), survey study (N=44), design study (N=91), any other document (N=46). In this paper a first preliminary overview over positive effects of entertainment technology on human behaviour is presented and discussed. The drawn recommendations can support developers and designers in entertainment industry.", "title": "" }, { "docid": "neg:1840048_3", "text": "In this paper, we propose an end-to-end capsule network for pixel level localization of actors and actions present in a video. The localization is performed based on a natural language query through which an actor and action are specified. We propose to encode both the video as well as textual input in the form of capsules, which provide more effective representation in comparison with standard convolution based features. We introduce a novel capsule based attention mechanism for fusion of video and text capsules for text selected video segmentation. The attention mechanism is performed via joint EM routing over video and text capsules for text selected actor and action localization. The existing works on actor-action localization are mainly focused on localization in a single frame instead of the full video. 
Different from existing works, we propose to perform the localization on all frames of the video. To validate the potential of the proposed network for actor and action localization on all the frames of a video, we extend an existing actor-action dataset (A2D) with annotations for all the frames. The experimental evaluation demonstrates the effectiveness of the proposed capsule network for text selective actor and action localization in videos, and it also improves upon the performance of the existing state-of-the-art works on single frame-based localization. Figure 1: Overview of the proposed approach. For a given video, we want to localize the actor and action which are described by an input textual query. Capsules are extracted from both the video and the textual query, and a joint EM routing algorithm creates high level capsules, which are further used for localization of selected actors and actions.", "title": "" }, { "docid": "neg:1840048_4", "text": "Battery storage (BS) systems are static energy conversion units that convert chemical energy directly into electrical energy. They exist in our cars, laptops, electronic appliances, micro electricity generation systems and in many other mobile to stationary power supply systems. The economic advantages, partial sustainability and portability of these units make them promising substitutes for backup power systems in hybrid vehicles and hybrid electricity generation systems. The dynamic behaviour of these systems can be analysed by using mathematical modeling and simulation software programs. Although many mathematical models have been presented in the literature and proved to be successful, dynamic simulation of these systems is still very exhaustive and time consuming, as they do not behave according to specific mathematical models or functions. The charging and discharging functions of a battery are a combination of exponential and non-linear characteristics. The aim of this research paper is to present a suitable, convenient dynamic battery model that can be used to model a general BS system. The proposed model is a new modified dynamic Lead-Acid battery model that considers the effects of temperature and of cyclic charging and discharging. Simulink has been used to study the characteristics of the system, and the proposed system has proved to be very successful, as the simulation results have been very good. Keywords: Simulink Matlab, Battery Model, Simulation, BS Lead-Acid, Dynamic modeling, Temperature effect, Hybrid Vehicles.", "title": "" }, { "docid": "neg:1840048_5", "text": "Multi-dimensional arrays, or tensors, are increasingly found in fields such as signal processing and recommender systems. Real-world tensors can be enormous in size and often very sparse. There is a need for efficient, high-performance tools capable of processing the massive sparse tensors of today and the future. This paper introduces SPLATT, a C library with shared-memory parallelism for three-mode tensors. SPLATT contains algorithmic improvements over competing state-of-the-art tools for sparse tensor factorization. SPLATT has a fast, parallel method of multiplying a matricized tensor by a Khatri-Rao product, which is a key kernel in tensor factorization methods. SPLATT uses a novel data structure that exploits the sparsity patterns of tensors. This data structure has a small memory footprint similar to competing methods and allows for the computational improvements featured in our work.
We also present a method of finding cache-friendly reorderings and utilizing them with a novel form of cache tiling. To our knowledge, this is the first work to investigate reordering and cache tiling in this context. SPLATT averages almost 30x speedup compared to our baseline when using 16 threads and reaches over 80x speedup on NELL-2.", "title": "" }, { "docid": "neg:1840048_6", "text": "The wireless industry nowadays is facing two major challenges: 1) how to support vertical industry applications so as to expand the wireless industry market and 2) how to further enhance device capability and user experience. In this paper, we propose a technology framework to address these challenges. The proposed technology framework is based on end-to-end vertical and horizontal slicing, where vertical slicing enables vertical industries and services and horizontal slicing improves system capacity and user experience. The technology development on vertical slicing has already started in late 4G and early 5G and is mostly focused on slicing the core network. We envision this trend to continue with the development of vertical slicing in the radio access network and the air interface. Moving beyond vertical slicing, we propose to horizontally slice the computation and communication resources to form virtual computation platforms for solving the network capacity scaling problem and enhancing device capability and user experience. In this paper, we explain the concept of vertical and horizontal slicing and illustrate the slicing techniques in the air interface, the radio access network, the core network and the computation platform. This paper aims to initiate the discussion on the long-range technology roadmap and spur development on the solutions for E2E network slicing in 5G and beyond.", "title": "" }, { "docid": "neg:1840048_7", "text": "Short text is usually brief and carries insufficient information, which makes text classification difficult. However, we can try to introduce information from an existing knowledge base to strengthen the performance of short text classification. Wikipedia [2,13,15] is now the largest human-edited knowledge base of high quality. Short text classification would benefit if we could make full use of Wikipedia information. This paper presents a new concept-based [22] short text representation method built on Wikipedia: the Wikipedia concepts mentioned in a short text are identified, and the short text is then expanded with correlated concepts into the feature vector representation.", "title": "" }, { "docid": "neg:1840048_8", "text": "Radar is an attractive technology for long-term monitoring of human movement as it operates remotely, can be placed behind walls and is able to monitor a large area depending on its operating parameters. A radar signal reflected off a moving person carries rich information on his or her activity pattern in the form of a set of Doppler frequency signatures produced by the specific combination of limbs and torso movements. To enable classification and efficient storage and transmission of movement data, unique parameters have to be extracted from the Doppler signatures. Two of the most important human movement parameters for activity identification and classification are the velocity profile and the fundamental cadence frequency of the movement pattern.
However, the complicated pattern of limbs and torso movement, worsened by multipath propagation in indoor environments, poses a challenge for the extraction of these human movement parameters. In this paper, three new approaches for the estimation of the human walking velocity profile in indoor environments are proposed and discussed. The first two methods are based on spectrogram estimates whereas the third method is based on phase difference computation. In addition, a method to estimate the fundamental cadence frequency of the gait is suggested and discussed. The accuracy of the methods is evaluated and compared in an indoor experiment using a flexible and low-cost software-defined radar platform. The results obtained indicate that the velocity estimation methods are able to estimate the velocity profile of the person’s translational motion with an error of less than 10%. The results also showed that the fundamental cadence is estimated with an error of 7%.", "title": "" }, { "docid": "neg:1840048_9", "text": "This paper presents two novel approaches to increase performance bounds of image steganography under the criterion of minimizing distortion. First, in order to efficiently use the images’ capacities, we propose using parallel images in the embedding stage. The result is then used to prove sub-optimality of the message distribution technique used by all cost-based algorithms including HUGO, S-UNIWARD, and HILL. Second, a new distribution approach is presented to further improve the security of these algorithms. Experiments show that this distribution method avoids embedding in smooth regions and thus achieves a better performance, measured by state-of-the-art steganalysis, when compared with the currently used distribution.", "title": "" }, { "docid": "neg:1840048_10", "text": "Nonabelian group-based public key cryptography is a relatively new and exciting research field. With rapidly increasing computing power and the prospect of future quantum computers [52], the security of the public key cryptosystems in use today will be questioned. Research in new cryptographic methods is therefore imperative. Research on nonabelian group-based cryptosystems will become one of the contemporary research priorities. Many innovative ideas for them have been presented over the past two decades, and many corresponding problems remain to be resolved. The purpose of this paper is to present a survey of the nonabelian group-based public key cryptosystems with the corresponding problems of security. We hope that readers can grasp the trend that is examined in this study.", "title": "" }, { "docid": "neg:1840048_11", "text": "BACKGROUND\nNivolumab combined with ipilimumab resulted in longer progression-free survival and a higher objective response rate than ipilimumab alone in a phase 3 trial involving patients with advanced melanoma. We now report 3-year overall survival outcomes in this trial.\n\n\nMETHODS\nWe randomly assigned, in a 1:1:1 ratio, patients with previously untreated advanced melanoma to receive nivolumab at a dose of 1 mg per kilogram of body weight plus ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses, followed by nivolumab at a dose of 3 mg per kilogram every 2 weeks; nivolumab at a dose of 3 mg per kilogram every 2 weeks plus placebo; or ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses plus placebo, until progression, the occurrence of unacceptable toxic effects, or withdrawal of consent.
Randomization was stratified according to programmed death ligand 1 (PD-L1) status, BRAF mutation status, and metastasis stage. The two primary end points were progression-free survival and overall survival in the nivolumab-plus-ipilimumab group and in the nivolumab group versus the ipilimumab group.\n\n\nRESULTS\nAt a minimum follow-up of 36 months, the median overall survival had not been reached in the nivolumab-plus-ipilimumab group and was 37.6 months in the nivolumab group, as compared with 19.9 months in the ipilimumab group (hazard ratio for death with nivolumab plus ipilimumab vs. ipilimumab, 0.55 [P<0.001]; hazard ratio for death with nivolumab vs. ipilimumab, 0.65 [P<0.001]). The overall survival rate at 3 years was 58% in the nivolumab-plus-ipilimumab group and 52% in the nivolumab group, as compared with 34% in the ipilimumab group. The safety profile was unchanged from the initial report. Treatment-related adverse events of grade 3 or 4 occurred in 59% of the patients in the nivolumab-plus-ipilimumab group, in 21% of those in the nivolumab group, and in 28% of those in the ipilimumab group.\n\n\nCONCLUSIONS\nAmong patients with advanced melanoma, significantly longer overall survival occurred with combination therapy with nivolumab plus ipilimumab or with nivolumab alone than with ipilimumab alone. (Funded by Bristol-Myers Squibb and others; CheckMate 067 ClinicalTrials.gov number, NCT01844505 .).", "title": "" }, { "docid": "neg:1840048_12", "text": "It is indispensable to understand and analyze industry structure and company relations from documents, such as news articles, in order to make management decisions concerning supply chains, selection of business partners, etc. Analysis of company relations from news articles requires both a macro-viewpoint, e.g., overviewing competitor groups, and a micro-viewpoint, e.g., grasping the descriptions of the relationship between a specific pair of companies collaborating. Research has typically focused on only the macro-viewpoint, classifying each company pair into a specific relation type. In this paper, to support company relation analysis from both macro-and micro-viewpoints, we propose a method that extracts collaborative/competitive company pairs from individual sentences in Web news articles by applying a Markov logic network and gather extracted relations from each company pair. By this method, we are able not only to perform clustering of company pairs into competitor groups based on the dominant relations of each pair (macro-viewpoint) but also to know how each company pair is described in individual sentences (micro-viewpoint). We empirically confirmed that the proposed method is feasible through analysis of 4,661 Web news articles on the semiconductor and related industries.", "title": "" }, { "docid": "neg:1840048_13", "text": "The chapter introduces the book explaining its purposes and significance, framing it within the current literature related to Location-Based Mobile Games. It further clarifies the methodology of the study on the ground of this work and summarizes the content of each chapter.", "title": "" }, { "docid": "neg:1840048_14", "text": "Water balance of the terrestrial isopod, Armadillidium vulgare, was investigated during conglobation (rolling-up behavior). Water loss and metabolic rates were measured at 18 +/- 1 degrees C in dry air using flow-through respirometry. Water-loss rates decreased 34.8% when specimens were in their conglobated form, while CO2 release decreased by 37.1%. 
Water loss was also measured gravimetrically at humidities ranging from 6 to 75 %RH. Conglobation was associated with a decrease in water-loss rates up to 53 %RH, but no significant differences were observed at higher humidities. Our findings suggest that conglobation behavior may help to conserve water, in addition to its demonstrated role in protection from predation.", "title": "" }, { "docid": "neg:1840048_15", "text": "Recent advances in wireless networking technologies and the growing success of mobile computing devices, such as laptop computers, third generation mobile phones, personal digital assistants, watches and the like, are enabling new classes of applications that present challenging problems to designers. Mobile devices face temporary loss of network connectivity when they move; they are likely to have scarce resources, such as low battery power, slow CPU speed and little memory; they are required to react to frequent and unannounced changes in the environment, such as high variability of network bandwidth, and in the remote resources availability, and so on. To support designers building mobile applications, research in the field of middleware systems has proliferated. Middleware aims at facilitating communication and coordination of distributed components, concealing difficulties raised by mobility from application engineers as much as possible. In this survey, we examine characteristics of mobile distributed systems and distinguish them from their fixed counterpart. We introduce a framework and a categorization of the various middleware systems designed to support mobility, and we present a detailed and comparative review of the major results reached in this field. An analysis of current trends inside the mobile middleware community and a discussion of further directions of research conclude the survey.", "title": "" }, { "docid": "neg:1840048_16", "text": "Many advancements have been taking place in unmanned aerial vehicle (UAV) technology lately. This is leading towards the design and development of UAVs with various sizes that possess increased on-board processing, memory, storage, and communication capabilities. Consequently, UAVs are increasingly being used in a vast amount of commercial, military, civilian, agricultural, and environmental applications. However, to take full advantages of their services, these UAVs must be able to communicate efficiently with each other using UAV-to-UAV (U2U) communication and with existing networking infrastructures using UAV-to-Infrastructure (U2I) communication. In this paper, we identify the functions, services and requirements of UAV-based communication systems. We also present networking architectures, underlying frameworks, and data traffic requirements in these systems as well as outline the various protocols and technologies that can be used at different UAV communication links and networking layers. In addition, the paper discusses middleware layer services that can be provided in order to provide seamless communication and support heterogeneous network interfaces. Furthermore, we discuss a new important area of research, which involves the use of UAVs in collecting data from wireless sensor networks (WSNs). We discuss and evaluate several approaches that can be used to collect data from different types of WSNs including topologies such as linear sensor networks (LSNs), geometric and clustered WSNs. 
We outline the benefits of using UAVs for this function, which include significantly decreasing sensor node energy consumption, lower interference, and offers considerably increased flexibility in controlling the density of the deployed nodes since the need for the multihop approach for sensor-tosink communication is either eliminated or significantly reduced. Consequently, UAVs can provide good connectivity to WSN clusters.", "title": "" }, { "docid": "neg:1840048_17", "text": "Passwords are still the predominant mode of authentication in contemporary information systems, despite a long list of problems associated with their insecurity. Their primary advantage is the ease of use and the price of implementation, compared to other systems of authentication (e.g. two-factor, biometry, …). In this paper we present an analysis of passwords used by students of one of universities and their resilience against brute force and dictionary attacks. The passwords were obtained from a university's computing center in plaintext format for a very long period - first passwords were created before 1980. The results show that early passwords are extremely easy to crack: the percentage of cracked passwords is above 95 % for those created before 2006. Surprisingly, more than 40 % of passwords created in 2014 were easily broken within a few hours. The results show that users - in our case students, despite positive trends, still choose easy to break passwords. This work contributes to loud warnings that a shift from traditional password schemes to more elaborate systems is needed.", "title": "" }, { "docid": "neg:1840048_18", "text": "BACKGROUND\nThe neonatal and pediatric antimicrobial point prevalence survey (PPS) of the Antibiotic Resistance and Prescribing in European Children project (http://www.arpecproject.eu/) aims to standardize a method for surveillance of antimicrobial use in children and neonates admitted to the hospital within Europe. This article describes the audit criteria used and reports overall country-specific proportions of antimicrobial use. An analytical review presents methodologies on antimicrobial use.\n\n\nMETHODS\nA 1-day PPS on antimicrobial use in hospitalized children was organized in September 2011, using a previously validated and standardized method. The survey included all inpatient pediatric and neonatal beds and identified all children receiving an antimicrobial treatment on the day of survey. Mandatory data were age, gender, (birth) weight, underlying diagnosis, antimicrobial agent, dose and indication for treatment. Data were entered through a web-based system for data-entry and reporting, based on the WebPPS program developed for the European Surveillance of Antimicrobial Consumption project.\n\n\nRESULTS\nThere were 2760 and 1565 pediatric versus 1154 and 589 neonatal inpatients reported among 50 European (n = 14 countries) and 23 non-European hospitals (n = 9 countries), respectively. Overall, antibiotic pediatric and neonatal use was significantly higher in non-European (43.8%; 95% confidence interval [CI]: 41.3-46.3% and 39.4%; 95% CI: 35.5-43.4%) compared with that in European hospitals (35.4; 95% CI: 33.6-37.2% and 21.8%; 95% CI: 19.4-24.2%). 
Proportions of antibiotic use were highest in hematology/oncology wards (61.3%; 95% CI: 56.2-66.4%) and pediatric intensive care units (55.8%; 95% CI: 50.3-61.3%).\n\n\nCONCLUSIONS\nAn Antibiotic Resistance and Prescribing in European Children standardized web-based method for a 1-day PPS was successfully developed and conducted in 73 hospitals worldwide. It offers a simple, feasible and sustainable way of data collection that can be used globally.", "title": "" } ]
1840049
Automating image segmentation verification and validation by learning test oracles
[ { "docid": "pos:1840049_0", "text": "Measures of overlap of labelled regions of images, such as the Dice and Tanimoto coefficients, have been extensively used to evaluate image registration and segmentation algorithms. Modern studies can include multiple labels defined on multiple images yet most evaluation schemes report one overlap per labelled region, simply averaged over multiple images. In this paper, common overlap measures are generalized to measure the total overlap of ensembles of labels defined on multiple test images and account for fractional labels using fuzzy set theory. This framework allows a single \"figure-of-merit\" to be reported which summarises the results of a complex experiment by image pair, by label or overall. A complementary measure of error, the overlap distance, is defined which captures the spatial extent of the nonoverlapping part and is related to the Hausdorff distance computed on grey level images. The generalized overlap measures are validated on synthetic images for which the overlap can be computed analytically and used as similarity measures in nonrigid registration of three-dimensional magnetic resonance imaging (MRI) brain images. Finally, a pragmatic segmentation ground truth is constructed by registering a magnetic resonance atlas brain to 20 individual scans, and used with the overlap measures to evaluate publicly available brain segmentation algorithms", "title": "" }, { "docid": "pos:1840049_1", "text": "Metamorphic testing has been shown to be a simple yet effective technique in addressing the quality assurance of applications that do not have test oracles, i.e., for which it is difficult or impossible to know what the correct output should be for arbitrary input. In metamorphic testing, existing test case input is modified to produce new test cases in such a manner that, when given the new input, the application should produce an output that can easily be computed based on the original output. That is, if input x produces output f(x), then we create input x' such that we can predict f(x') based on f(x); if the application does not produce the expected output, then a defect must exist, and either f(x), or f(x') (or both) is wrong.\n In practice, however, metamorphic testing can be a manually intensive technique for all but the simplest cases. The transformation of input data can be laborious for large data sets, or practically impossible for input that is not in human-readable format. Similarly, comparing the outputs can be error-prone for large result sets, especially when slight variations in the results are not actually indicative of errors (i.e., are false positives), for instance when there is non-determinism in the application and multiple outputs can be considered correct.\n In this paper, we present an approach called Automated Metamorphic System Testing. This involves the automation of metamorphic testing at the system level by checking that the metamorphic properties of the entire application hold after its execution. The tester is able to easily set up and conduct metamorphic tests with little manual intervention, and testing can continue in the field with minimal impact on the user. Additionally, we present an approach called Heuristic Metamorphic Testing which seeks to reduce false positives and address some cases of non-determinism. 
We also describe an implementation framework called Amsterdam, and present the results of empirical studies in which we demonstrate the effectiveness of the technique on real-world programs without test oracles.", "title": "" }, { "docid": "pos:1840049_2", "text": "A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs", "title": "" } ]
[ { "docid": "neg:1840049_0", "text": "Based on an example of translational motion, this report shows how to model and initialize the Kalman Filter. Basic rules about physical motion are introduced to point out, that the well-known laws of physical motion are a mere approximation. Hence, motion of non-constant velocity or acceleration is modelled by additional use of white noise. Special attention is drawn to the matrix initialization for use in the Kalman Filter, as, in general, papers and books do not give any hint on this; thus inducing the impression that initializing is not important and may be arbitrary. For unknown matrices many users of the Kalman Filter choose the unity matrix. Sometimes it works, sometimes it does not. In order to close this gap, initialization is shown on the example of human interactive motion. In contrast to measuring instruments with documented measurement errors in manuals, the errors generated by vision-based sensoring must be estimated carefully. Of course, the described methods may be adapted to other circumstances.", "title": "" }, { "docid": "neg:1840049_1", "text": "Fundus images provide an opportunity for early detection of diabetes. Generally, retina fundus images of diabetic patients exhibit exudates, which are lesions indicative of Diabetic Retinopathy (DR). Therefore, computational tools can be considered to be used in assisting ophthalmologists and medical doctor for the early screening of the disease. Hence in this paper, we proposed visualisation of exudates in fundus images using radar chart and Color Auto Correlogram (CAC) technique. The proposed technique requires that the Optic Disc (OD) from the fundus image be removed. Next, image normalisation was performed to standardise the colors in the fundus images. The exudates from the modified image are then extracted using Artificial Neural Network (ANN) and visualised using radar chart and CAC technique. The proposed technique was tested on 149 images of the publicly available MESSIDOR database. Experimental results suggest that the method has potential to be used for early indication of DR, by visualising the overlap between CAC features of the fundus images.", "title": "" }, { "docid": "neg:1840049_2", "text": "ScanSAR interferometry is an attractive option for efficient topographic mapping of large areas and for monitoring of large-scale motions. Only ScanSAR interferometry made it possible to map almost the entire landmass of the earth in the 11-day Shuttle Radar Topography Mission. Also the operational satellites RADARSAT and ENVISAT offer ScanSAR imaging modes and thus allow for repeat-pass ScanSAR interferometry. This paper gives a complete description of ScanSAR and burst-mode interferometric signal properties and compares different processing algorithms. The problems addressed are azimuth scanning pattern synchronization, spectral shift filtering in the presence of high squint, Doppler centroid estimation, different phase-preserving ScanSAR processing algorithms, ScanSAR interferogram formation, coregistration, and beam alignment. Interferograms and digital elevation models from RADARSAT ScanSAR Narrow modes are presented. 
The novel “pack-and-go” algorithm for efficient burst-mode range processing and a new time-variant fast interpolator for interferometric coregistration are introduced.", "title": "" }, { "docid": "neg:1840049_3", "text": "Based on self-determination theory, this study proposes and tests a motivational model of intraindividual changes in teacher burnout (emotional exhaustion, depersonalization, and reduced personal accomplishment). Participants were 806 French-Canadian teachers in public elementary and high schools. Results show that changes in teachers’ perceptions of classroom overload and students’ disruptive behavior are negatively related to changes in autonomous motivation, which in turn negatively predict changes in emotional exhaustion. Results also indicate that changes in teachers’ perceptions of students’ disruptive behaviors and school principal’s leadership behaviors are related to changes in self-efficacy, which in turn negatively predict changes in the three burnout components.", "title": "" }, { "docid": "neg:1840049_4", "text": "Graphs are a powerful and universal data structure useful in various subfields of science and engineering. In this paper, we propose a new algorithm for subgraph isomorphism detection from a set of a priori known model graphs to an input graph that is given online. The new approach is based on a compact representation of the model graphs that is computed offline. Subgraphs that appear multiple times within the same or within different model graphs are represented only once, thus reducing the computational effort to detect them in an input graph. In the extreme case where all model graphs are highly similar, the run-time of the new algorithm becomes independent of the number of model graphs. Both a theoretical complexity analysis and practical experiments characterizing the performance of the new approach will be given. Index Terms: Graph matching, graph isomorphism, subgraph isomorphism, preprocessing.", "title": "" }, { "docid": "neg:1840049_5", "text": "Hiding a secret is needed in many situations. One might need to hide a password, an encryption key, a secret recipe, etc. Information can be secured with encryption, but the need to secure the secret key used for such encryption is important too. Imagine you encrypt your important files with one secret key; if that key is lost, then all the important files will be inaccessible. Thus, secure and efficient key management mechanisms are required. One of them is the secret sharing scheme (SSS), which lets you split your secret into several parts and distribute them among selected parties. The secret can be recovered once these parties collaborate in some way. This paper will study these schemes and explain the need for them and their security. Across the years, various schemes have been presented. This paper will survey some of them, varying from trivial schemes to threshold-based ones. Explanations of these schemes' constructions are presented. The paper will also look at some applications of SSS.", "title": "" }, { "docid": "neg:1840049_6", "text": "A major challenge of semantic parsing is the vocabulary mismatch problem between natural language and target ontology. In this paper, we propose a sentence rewriting based semantic parsing method, which can effectively resolve the mismatch problem by rewriting a sentence into a new form which has the same structure as its target logical form.
Specifically, we propose two sentence-rewriting methods for two common types of mismatch: a dictionary-based method for 1N mismatch and a template-based method for N-1 mismatch. We evaluate our sentence rewriting based semantic parser on the benchmark semantic parsing dataset – WEBQUESTIONS. Experimental results show that our system outperforms the base system with a 3.4% gain in F1, and generates logical forms more accurately and parses sentences more robustly.", "title": "" }, { "docid": "neg:1840049_7", "text": "Traditional Network-on-Chips (NoCs) employ simple arbitration strategies, such as round-robin or oldest-first, to decide which packets should be prioritized in the network. This is counter-intuitive since different packets can have very different effects on system performance due to, e.g., different level of memory-level parallelism (MLP) of applications. Certain packets may be performance-critical because they cause the processor to stall, whereas others may be delayed for a number of cycles with no effect on application-level performance as their latencies are hidden by other outstanding packets'latencies. In this paper, we define slack as a key measure that characterizes the relative importance of a packet. Specifically, the slack of a packet is the number of cycles the packet can be delayed in the network with no effect on execution time. This paper proposes new router prioritization policies that exploit the available slack of interfering packets in order to accelerate performance-critical packets and thus improve overall system performance. When two packets interfere with each other in a router, the packet with the lower slack value is prioritized. We describe mechanisms to estimate slack, prevent starvation, and combine slack-based prioritization with other recently proposed application-aware prioritization mechanisms.\n We evaluate slack-based prioritization policies on a 64-core CMP with an 8x8 mesh NoC using a suite of 35 diverse applications. For a representative set of case studies, our proposed policy increases average system throughput by 21.0% over the commonlyused round-robin policy. Averaged over 56 randomly-generated multiprogrammed workload mixes, the proposed policy improves system throughput by 10.3%, while also reducing application-level unfairness by 30.8%.", "title": "" }, { "docid": "neg:1840049_8", "text": "Most convolutional neural networks (CNNs) lack midlevel layers that model semantic parts of objects. This limits CNN-based methods from reaching their full potential in detecting and utilizing small semantic parts in recognition. Introducing such mid-level layers can facilitate the extraction of part-specific features which can be utilized for better recognition performance. This is particularly important in the domain of fine-grained recognition. In this paper, we propose a new CNN architecture that integrates semantic part detection and abstraction (SPDACNN) for fine-grained classification. The proposed network has two sub-networks: one for detection and one for recognition. The detection sub-network has a novel top-down proposal method to generate small semantic part candidates for detection. The classification sub-network introduces novel part layers that extract features from parts detected by the detection sub-network, and combine them for recognition. 
As a result, the proposed architecture provides an end-to-end network that performs detection, localization of multiple semantic parts, and whole object recognition within one framework that shares the computation of convolutional filters. Our method outperforms state-of-theart methods with a large margin for small parts detection (e.g. our precision of 93.40% vs the best previous precision of 74.00% for detecting the head on CUB-2011). It also compares favorably to the existing state-of-the-art on finegrained classification, e.g. it achieves 85.14% accuracy on CUB-2011.", "title": "" }, { "docid": "neg:1840049_9", "text": "The proliferation of malware has presented a serious threat to the security of computer systems. Traditional signature-based anti-virus systems fail to detect polymorphic/metamorphic and new, previously unseen malicious executables. Data mining methods such as Naive Bayes and Decision Tree have been studied on small collections of executables. In this paper, resting on the analysis of Windows APIs called by PE files, we develop the Intelligent Malware Detection System (IMDS) using Objective-Oriented Association (OOA) mining based classification. IMDS is an integrated system consisting of three major modules: PE parser, OOA rule generator, and rule based classifier. An OOA_Fast_FP-Growth algorithm is adapted to efficiently generate OOA rules for classification. A comprehensive experimental study on a large collection of PE files obtained from the anti-virus laboratory of KingSoft Corporation is performed to compare various malware detection approaches. Promising experimental results demonstrate that the accuracy and efficiency of our IMDS system outperform popular anti-virus software such as Norton AntiVirus and McAfee VirusScan, as well as previous data mining based detection systems which employed Naive Bayes, Support Vector Machine (SVM) and Decision Tree techniques. Our system has already been incorporated into the scanning tool of KingSoft’s Anti-Virus software.", "title": "" }, { "docid": "neg:1840049_10", "text": "In this study, an efficient addressing scheme for radix-4 FFT processor is presented. The proposed method uses extra registers to buffer and reorder the data inputs of the butterfly unit. It avoids the modulo-r addition in the address generation; hence, the critical path is significantly shorter than the conventional radix-4 FFT implementations. A significant property of the proposed method is that the critical path of the address generator is independent from the FFT transform length N, making it extremely efficient for large FFT transforms. For performance evaluation, the new FFT architecture has been implemented by FPGA (Altera Stratix) hardware and also synthesized by CMOS 0.18µm technology. The results confirm the speed and area advantages for large FFTs. Although only radix-4 FFT address generation is presented in the paper, it can be used for higher radix FFT.", "title": "" }, { "docid": "neg:1840049_11", "text": "The status of current model-driven engineering technologies has matured over the last years whereas the infrastructure supporting model management is still in its infancy. Infrastructural means include version control systems, which are successfully used for the management of textual artifacts like source code. Unfortunately, they are only limited suitable for models. Consequently, dedicated solutions emerge. 
These approaches are currently hard to compare, because no common quality measure has been established yet and no structured test cases are available. In this paper, we analyze the challenges coming along with merging different versions of one model and derive a first categorization of typical changes and the therefrom resulting conflicts. On this basis we create a set of test cases on which we apply state-of-the-art versioning systems and report our experiences.", "title": "" }, { "docid": "neg:1840049_12", "text": "This paper presents the findings of two studies that replicate previous work by Fred Davis on the subject of perceived usefulness, ease of use, and usage of information technology. The two studies focus on evaluating the psychometric properties of the ease of use and usefulness scales, while examining the relationship between ease of use, usefulness, and system usage. Study 1 provides a strong assessment of the convergent validity of the two scales by examining heterogeneous user groups dealing with heterogeneous implementations of messaging technology. In addition, because one might expect users to share similar perspectives about voice and electronic mail, the study also represents a strong test of discriminant validity. In this study a total of 118 respondents from 10 different organizations were surveyed for their attitudes toward two messaging technologies: voice and electronic mail. Study 2 complements the approach taken in Study 1 by focusing on the ability to demonstrate discriminant validity. Three popular software applications (WordPerfect, Lotus 1-2-3, and Harvard Graphics) were examined based on the expectation that they would all be rated highly on both scales. In this study a total of 73 users rated the three packages in terms of ease of use and usefulness. The results of the studies demonstrate reliable and valid scales for measurement of perceived ease of use and usefulness. In addition, the paper tests the relationships between ease of use, usefulness, and usage using structural equation modelling. The results of this model are consistent with previous research for Study 1, suggesting that usefulness is an important determinant of system use. For Study 2 the results are somewhat mixed, but indicate the importance of both ease of use and usefulness. Differences in conditions of usage are explored to explain these findings.", "title": "" }, { "docid": "neg:1840049_13", "text": "Selected elements of dynamical system (DS) theory approach to nonlinear time series analysis are introduced. Key role in this concept plays a method of time delay. The method enables us reconstruct phase space trajectory of DS without knowledge of its governing equations. Our variant is tested and compared with wellknown TISEAN package for Lorenz and Hénon systems. Introduction There are number of methods of nonlinear time series analysis (e.g. nonlinear prediction or noise reduction) that work in a phase space (PS) of dynamical systems. We assume that a given time series of some variable is generated by a dynamical system. A specific state of the system can be represented by a point in the phase space and time evolution of the system creates a trajectory in the phase space. From this point of view we consider our time series to be a projection of trajectory of DS to one (or more – when we have more simultaneously measured variables) coordinates of phase space. This view was enabled due to formulation of embedding theorem [1], [2] at the beginning of the 1980s. 
It says that it is possible to reconstruct the phase space from the time series. One of the most frequently used methods of phase space reconstruction is the method of time delay. The main task while using this method is to determine values of time delay τ and embedding dimension m. We tested individual steps of this method on simulated data generated by Lorenz and Hénon systems. We compared results computed by our own programs with outputs of program package TISEAN created by R. Hegger, H. Kantz, and T. Schreiber [3]. Method of time delay The most frequently used method of PS reconstruction is the method of time delay. If we have a time series of a scalar variable x(t_i), i = 1, ..., N, we construct a vector in phase space in time t_i as following: X(t_i) = [x(t_i), x(t_i + τ), x(t_i + 2τ), ..., x(t_i + (m – 1)τ)], where i goes from 1 to N – (m – 1)τ, τ is time delay, m is a dimension of reconstructed space (embedding dimension) and M = N – (m – 1)τ is number of points (states) in the phase space. According to embedding theorem, when this is done in a proper way, dynamics reconstructed using this formula is equivalent to the dynamics on an attractor in the origin phase space in the sense that characteristic invariants of the system are conserved. The time delay method and related aspects are described in literature, e.g. [4]. We estimated the two parameters—time delay and embedding dimension—using algorithms below. Choosing a time delay To determine a suitable time delay we used average mutual information (AMI), a certain generalization of autocorrelation function. Average mutual information between sets of measurements A and B is defined [5]:", "title": "" }, { "docid": "neg:1840049_14", "text": "Neural Architecture Search (NAS) aims at finding one “single” architecture that achieves the best accuracy for a given task such as image recognition. In this paper, we study the instance-level variation, and demonstrate that instance-awareness is an important yet currently missing component of NAS. Based on this observation, we propose InstaNAS for searching toward instance-level architectures; the controller is trained to search and form a “distribution of architectures” instead of a single final architecture. Then during the inference phase, the controller selects an architecture from the distribution, tailored for each unseen image to achieve both high accuracy and short latency. The experimental results show that InstaNAS reduces the inference latency without compromising classification accuracy. On average, InstaNAS achieves 48.9% latency reduction on CIFAR-10 and 40.2% latency reduction on CIFAR-100 with respect to MobileNetV2 architecture.", "title": "" }, { "docid": "neg:1840049_15", "text": "We introduce the Adaptive Skills, Adaptive Partitions (ASAP) framework that (1) learns skills (i.e., temporally extended actions or options) as well as (2) where to apply them. We believe that both (1) and (2) are necessary for a truly general skill learning framework, which is a key building block needed to scale up to lifelong learning agents. The ASAP framework can also solve related new tasks simply by adapting where it applies its existing learned skills. We prove that ASAP converges to a local optimum under natural conditions.
Finally, our experimental results, which include a RoboCup domain, demonstrate the ability of ASAP to learn where to reuse skills as well as solve multiple tasks with considerably less experience than solving each task from scratch.", "title": "" }, { "docid": "neg:1840049_16", "text": "Using sensors to measure parameters of interest in rotating environments and communicating the measurements in real-time over wireless links, requires a reliable power source. In this paper, we have investigated the possibility to generate electric power locally by evaluating six different energy-harvesting technologies. The applicability of the technology is evaluated by several parameters that are important to the functionality in an industrial environment. All technologies are individually presented and evaluated, a concluding table is also summarizing the technologies strengths and weaknesses. To support the technology evaluation on a more theoretical level, simulations has been performed to strengthen our claims. Among the evaluated and simulated technologies, we found that the variable reluctance-based harvesting technology is the strongest candidate for further technology development for the considered use-case.", "title": "" }, { "docid": "neg:1840049_17", "text": "In industrial fabric productions, automated real time systems are needed to find out the minor defects. It will save the cost by not transporting defected products and also would help in making compmay image of quality fabrics by sending out only undefected products. A real time fabric defect detection system (FDDS), implementd on an embedded DSP platform is presented here. Textural features of fabric image are extracted based on gray level co-occurrence matrix (GLCM). A sliding window technique is used for defect detection where window moves over the whole image computing a textural energy from the GLCM of the fabric image. The energy values are compared to a reference and the deviations beyond a threshold are reported as defects and also visually represented by a window. The implementation is carried out on a TI TMS320DM642 platform and programmed using code composer studio software. The real time output of this implementation was shown on a monitor. KeywordsFabric Defects, Texture, Grey Level Co-occurrence Matrix, DSP Kit, Energy Computation, Sliding Window, FDDS", "title": "" }, { "docid": "neg:1840049_18", "text": "Much of Bluetooth’s data remains confidential in practice due to the difficulty of eavesdropping it. We present mechanisms for doing so, therefore eliminating the data confidentiality properties of the protocol. As an additional security measure, devices often operate in “undiscoverable mode” in order to hide their identity and provide access control. We show how the full MAC address of such master devices can be obtained, therefore bypassing the access control of this feature. Our work results in the first open-source Bluetooth sniffer.", "title": "" }, { "docid": "neg:1840049_19", "text": "Swarms of embedded devices provide new challenges for privacy and security. We propose Permissioned Blockchains as an effective way to secure and manage these systems of systems. A long view of blockchain technology yields several requirements absent in extant blockchain implementations. Our approach to Permissioned Blockchains meets the fundamental requirements for longevity, agility, and incremental adoption. 
Distributed Identity Management is an inherent feature of our Permissioned Blockchain and provides for resilient user and device identity and attribute management.", "title": "" } ]
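The delay-coordinate reconstruction and average mutual information described in the nonlinear time-series passage above (docid neg:1840049_13) reduce to a few lines of NumPy. The sketch below is a minimal illustration, not the TISEAN package that passage compares against; the noisy sine series, the histogram bin count, and the delay search range are invented for the example.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Build delay vectors X(t_i) = [x(t_i), x(t_i + tau), ..., x(t_i + (m - 1) tau)]."""
    n_states = len(x) - (m - 1) * tau            # M = N - (m - 1) * tau states
    return np.column_stack([x[j * tau: j * tau + n_states] for j in range(m)])

def average_mutual_information(x, tau, bins=32):
    """Histogram estimate of the AMI between x(t) and x(t + tau), in nats."""
    joint, _, _ = np.histogram2d(x[:-tau], x[tau:], bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy usage on a noisy sine wave standing in for the Lorenz/Henon data.
rng = np.random.default_rng(0)
t = np.linspace(0, 50, 5000)
x = np.sin(t) + 0.05 * rng.normal(size=t.size)
ami = [average_mutual_information(x, tau) for tau in range(1, 50)]
tau_star = int(np.argmin(ami)) + 1               # simple guess; the usual rule is the first local minimum
X = delay_embed(x, m=3, tau=tau_star)
print(tau_star, X.shape)
```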
1840050
Fast and robust face recognition via coding residual map learning based adaptive masking
[ { "docid": "pos:1840050_0", "text": "In linear representation based face recognition (FR), it is expected that a discriminative dictionary can be learned from the training samples so that the query sample can be better represented for classification. On the other hand, dimensionality reduction is also an important issue for FR. It can not only reduce significantly the storage space of face images, but also enhance the discrimination of face feature. Existing methods mostly perform dimensionality reduction and dictionary learning separately, which may not fully exploit the discriminative information in the training samples. In this paper, we propose to learn jointly the projection matrix for dimensionality reduction and the discriminative dictionary for face representation. The joint learning makes the learned projection and dictionary better fit with each other so that a more effective face classification can be obtained. The proposed algorithm is evaluated on benchmark face databases in comparison with existing linear representation based methods, and the results show that the joint learning improves the FR rate, particularly when the number of training samples per class is small.", "title": "" }, { "docid": "pos:1840050_1", "text": "Recently the sparse representation (or coding) based classification (SRC) has been successfully used in face recognition. In SRC, the testing image is represented as a sparse linear combination of the training samples, and the representation fidelity is measured by the l2-norm or l1-norm of coding residual. Such a sparse coding model actually assumes that the coding residual follows Gaussian or Laplacian distribution, which may not be accurate enough to describe the coding errors in practice. In this paper, we propose a new scheme, namely the robust sparse coding (RSC), by modeling the sparse coding as a sparsity-constrained robust regression problem. The RSC seeks for the MLE (maximum likelihood estimation) solution of the sparse coding problem, and it is much more robust to outliers (e.g., occlusions, corruptions, etc.) than SRC. An efficient iteratively reweighted sparse coding algorithm is proposed to solve the RSC model. Extensive experiments on representative face databases demonstrate that the RSC scheme is much more effective than state-of-the-art methods in dealing with face occlusion, corruption, lighting and expression changes, etc.", "title": "" }, { "docid": "pos:1840050_2", "text": "Many problems in information processing involve some form of dimensionality reduction. In this thesis, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. 
Theoretical analysis shows that PCA, LPP, and Linear Discriminant Analysis (LDA) can be obtained from different graph models. Central to this is a graph structure that is inferred on the data points. LPP finds a projection that respects this graph structure. We have applied our algorithms to several real-world applications, e.g. face analysis and document representation.", "title": "" } ]
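The positive passages for this query build on sparse-representation classification (SRC): code the query sample over a dictionary of training samples, then pick the class whose atoms give the smallest reconstruction residual. The sketch below shows that generic decision rule using scikit-learn's Lasso for the l1 step; it is not the joint dictionary/projection learning of pos:1840050_0 or the robust re-weighting of pos:1840050_1, and the data shapes, regularization weight, and toy features are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, labels, y, alpha=0.01):
    """Sparse-representation classification: code y over the training dictionary D,
    then assign the class whose columns give the smallest reconstruction residual."""
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(D, y)                               # y ~ D @ x with sparse x (l1 step)
    x = lasso.coef_
    residuals = {c: np.linalg.norm(y - D @ np.where(labels == c, x, 0.0))
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get), residuals

# Toy usage with random vectors standing in for face features.
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 40))
D /= np.linalg.norm(D, axis=0)                    # unit-norm training columns
labels = np.repeat(np.arange(4), 10)              # 4 subjects, 10 samples each
y = D[:, 3] + 0.05 * rng.normal(size=64)          # noisy copy of a subject-0 sample
predicted, _ = src_classify(D, labels, y)
print(predicted)                                  # expected: 0
```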
[ { "docid": "neg:1840050_0", "text": "The sequencing by hybridization (SBH) of determining the order in which nucleotides should occur on a DNA string is still under discussion for enhancements on computational intelligence although the next generation of DNA sequencing has come into existence. In the last decade, many works related to graph theory-based DNA sequencing have been carried out in the literature. This paper proposes a method for SBH by integrating hypergraph with genetic algorithm (HGGA) for designing a novel analytic technique to obtain DNA sequence from its spectrum. The paper represents elements of the spectrum and its relation as hypergraph and applies the unimodular property to ensure the compatibility of relations between l-mers. The hypergraph representation and unimodular property are bound with the genetic algorithm that has been customized with a novel selection and crossover operator reducing the computational complexity with accelerated convergence. Subsequently, upon determining the primary strand, an anti-homomorphism is invoked to find the reverse complement of the sequence. The proposed algorithm is implemented in the GenBank BioServer datasets, and the results are found to prove the efficiency of the algorithm. The HGGA is a non-classical algorithm with significant advantages and computationally attractive complexity reductions ranging to $$O(n^{2} )$$ O ( n 2 ) with improved accuracy that makes it prominent for applications other than DNA sequencing like image processing, task scheduling and big data processing.", "title": "" }, { "docid": "neg:1840050_1", "text": "Low-field extremity magnetic resonance imaging (lfMRI) is currently commercially available and has been used clinically to evaluate rheumatoid arthritis (RA). However, one disadvantage of this new modality is that the field of view (FOV) is too small to assess hand and wrist joints simultaneously. Thus, we have developed a new lfMRI system, compacTscan, with a FOV that is large enough to simultaneously assess the entire wrist to proximal interphalangeal joint area. In this work, we examined its clinical value compared to conventional 1.5 tesla (T) MRI. The comparison involved evaluating three RA patients by both 0.3 T compacTscan and 1.5 T MRI on the same day. Bone erosion, bone edema, and synovitis were estimated by our new compact MRI scoring system (cMRIS) and the kappa coefficient was calculated on a joint-by-joint basis. We evaluated a total of 69 regions. Bone erosion was detected in 49 regions by compacTscan and in 48 regions by 1.5 T MRI, while the total erosion score was 77 for compacTscan and 76.5 for 1.5 T MRI. These findings point to excellent agreement between the two techniques (kappa = 0.833). Bone edema was detected in 14 regions by compacTscan and in 19 by 1.5 T MRI, and the total edema score was 36.25 by compacTscan and 47.5 by 1.5 T MRI. Pseudo-negative findings were noted in 5 regions. However, there was still good agreement between the techniques (kappa = 0.640). Total number of evaluated joints was 33. Synovitis was detected in 13 joints by compacTscan and 14 joints by 1.5 T MRI, while the total synovitis score was 30 by compacTscan and 32 by 1.5 T MRI. Thus, although 1 pseudo-positive and 2 pseudo-negative findings resulted from the joint evaluations, there was again excellent agreement between the techniques (kappa = 0.827). 
Overall, the data obtained by our compacTscan system showed high agreement with those obtained by conventional 1.5 T MRI with regard to diagnosis and the scoring of bone erosion, edema, and synovitis. We conclude that compacTscan is useful for diagnosis and estimation of disease activity in patients with RA.", "title": "" }, { "docid": "neg:1840050_2", "text": "As known, fractional CO2 resurfacing treatments are more effective than non-ablative ones against aging signs, but post-operative redness and swelling prolong the overall downtime requiring up to steroid administration in order to reduce these local systems. In the last years, an increasing interest has been focused on the possible use of probiotics for treating inflammatory and allergic conditions suggesting that they can exert profound beneficial effects on skin homeostasis. In this work, the Authors report their experience on fractional CO2 laser resurfacing and provide the results of a new post-operative topical treatment with an experimental cream containing probiotic-derived active principles potentially able to modulate the inflammatory reaction associated to laser-treatment. The cream containing DermaACB (CERABEST™) was administered post-operatively to 42 consecutive patients who were treated with fractional CO2 laser. All patients adopted the cream twice a day for 2 weeks. Grades were given according to outcome scale. The efficacy of the cream containing DermaACB was evaluated comparing the rate of post-operative signs vanishing with a control group of 20 patients topically treated with an antibiotic cream and a hyaluronic acid based cream. Results registered with the experimental treatment were good in 22 patients, moderate in 17, and poor in 3 cases. Patients using the study cream took an average time of 14.3 days for erythema resolution and 9.3 days for swelling vanishing. The post-operative administration of the cream containing DermaACB induces a quicker reduction of post-operative erythema and swelling when compared to a standard treatment.", "title": "" }, { "docid": "neg:1840050_3", "text": "This paper discusses a general approach to qualitative modeling based on fuzzy logic. The method of qualitative modeling is divided into two parts: fuzzy modeling and linguistic approximation. It proposes to use a fuzzy clustering method (fuzzy c-means method) to identify the structure of a fuzzy model. To clarify the advantages of the proposed method, it also shows some examples of modeling, among them a model of a dynamical process and a model of a human operator’s control action.", "title": "" }, { "docid": "neg:1840050_4", "text": "This paper addresses an open challenge in educational data mining, i.e., the problem of using observed prerequisite relations among courses to learn a directed universal concept graph, and using the induced graph to predict unobserved prerequisite relations among a broader range of courses. This is particularly useful to induce prerequisite relations among courses from different providers (universities, MOOCs, etc.). We propose a new framework for inference within and across two graphs---at the course level and at the induced concept level---which we call Concept Graph Learning (CGL). In the training phase, our system projects the course-level links onto the concept space to induce directed concept links; in the testing phase, the concept links are used to predict (unobserved) prerequisite links for test-set courses within the same institution or across institutions. 
The dual mappings enable our system to perform an interlingua-style transfer learning, e.g. treating the concept graph as the interlingua, and inducing prerequisite links in a transferable manner across different universities. Experiments on our newly collected data sets of courses from MIT, Caltech, Princeton and CMU show promising results, including the viability of CGL for transfer learning.", "title": "" }, { "docid": "neg:1840050_5", "text": "Objectives: Straddle injury represents a rare and complex injury to the female genito urinary tract (GUT). Overall prevention would be the ultimate goal, but due to persistent inhomogenity and inconsistency in definitions and guidelines, or suboptimal coding, the optimal study design for a prevention programme is still missing. Thus, medical records data were tested for their potential use for an injury surveillance registry and their impact on future prevention programmes. Design: Retrospective record analysis out of a 3 year period. Setting: All patients were treated exclusively by the first author. Patients: Six girls, median age 7 years, range 3.5 to 12 years with classical straddle injury. Interventions: Medical treatment and recording according to National and International Standards. Main Outcome Measures: All records were analyzed for accuracy in diagnosis and coding, surgical procedure, time and location of incident and examination findings. Results: All registration data sets were complete. A specific code for “straddle injury” in International Classification of Diseases (ICD) did not exist. Coding followed mainly reimbursement issues and specific information about the injury was usually expressed in an individual style. Conclusions: As demonstrated in this pilot, population based medical record data collection can play a substantial part in local injury surveillance registry and prevention initiatives planning.", "title": "" }, { "docid": "neg:1840050_6", "text": "This work investigates the role of contrasting discourse relations signaled by cue phrases, together with phrase positional information, in predicting sentiment at the phrase level. Two domains of online reviews were chosen. The first domain is of nutritional supplement reviews, which are often poorly structured yet also allow certain simplifying assumptions to be made. The second domain is of hotel reviews, which have somewhat different characteristics. A corpus is built from these reviews, and manually tagged for polarity. We propose and evaluate a few new features that are realized through a lightweight method of discourse analysis, and use these features in a hybrid lexicon and machine learning based classifier. Our results show that these features may be used to obtain an improvement in classification accuracy compared to other traditional machine learning approaches.", "title": "" }, { "docid": "neg:1840050_7", "text": "iBeacons are a new way to interact with hardware. An iBeacon is a Bluetooth Low Energy device that only sends a signal in a specific format. They are like a lighthouse that sends light signals to boats. This paper explains what an iBeacon is, how it works and how it can simplify your daily life, what restriction comes with iBeacon and how to improve this restriction., as well as, how to use Location-based Services to track items. E.g., every time you touchdown at an airport and wait for your suitcase at the luggage reclaim, you have no information when your luggage will arrive at the conveyor belt. 
With an iBeacon inside your suitcase, it is possible to track the luggage and to receive a push notification about it even before you can see it. This is just one possible solution to use them. iBeacon can create a completely new shopping experience or make your home smarter. This paper demonstrates the luggage tracking use case and evaluates its possibilities and restrictions.", "title": "" }, { "docid": "neg:1840050_8", "text": "Cloud computing provides variety of services with the growth of their offerings. Due to efficient services, it faces numerous challenges. It is based on virtualization, which provides users a plethora computing resources by internet without managing any infrastructure of Virtual Machine (VM). With network virtualization, Virtual Machine Manager (VMM) gives isolation among different VMs. But, sometimes the levels of abstraction involved in virtualization have been reducing the workload performance which is also a concern when implementing virtualization to the Cloud computing domain. In this paper, it has been explored how the vendors in cloud environment are using Containers for hosting their applications and also the performance of VM deployments. It also compares VM and Linux Containers with respect to the quality of service, network performance and security evaluation.", "title": "" }, { "docid": "neg:1840050_9", "text": "Current parking space vacancy detection systems use simple trip sensors at the entry and exit points of parking lots. Unfortunately, this type of system fails when a vehicle takes up more than one spot or when a parking lot has different types of parking spaces. Therefore, I propose a camera-based system that would use computer vision algorithms for detecting vacant parking spaces. My algorithm uses a combination of car feature point detection and color histogram classification to detect vacant parking spaces in static overhead images.", "title": "" }, { "docid": "neg:1840050_10", "text": "In this paper, we compare the performance of descriptors computed for local interest regions, as, for example, extracted by the Harris-Affine detector [Mikolajczyk, K and Schmid, C, 2004]. Many different descriptors have been proposed in the literature. It is unclear which descriptors are more appropriate and how their performance depends on the interest region detector. The descriptors should be distinctive and at the same time robust to changes in viewing conditions as well as to errors of the detector. Our evaluation uses as criterion recall with respect to precision and is carried out for different image transformations. We compare shape context [Belongie, S, et al., April 2002], steerable filters [Freeman, W and Adelson, E, Setp. 1991], PCA-SIFT [Ke, Y and Sukthankar, R, 2004], differential invariants [Koenderink, J and van Doorn, A, 1987], spin images [Lazebnik, S, et al., 2003], SIFT [Lowe, D. G., 1999], complex filters [Schaffalitzky, F and Zisserman, A, 2002], moment invariants [Van Gool, L, et al., 1996], and cross-correlation for different types of interest regions. We also propose an extension of the SIFT descriptor and show that it outperforms the original method. Furthermore, we observe that the ranking of the descriptors is mostly independent of the interest region detector and that the SIFT-based descriptors perform best. 
Moments and steerable filters show the best performance among the low dimensional descriptors.", "title": "" }, { "docid": "neg:1840050_11", "text": "A novel algorithm for vehicle safety distance between driving cars for vehicle safety warning system is presented in this paper. The presented system concept includes a distance obstacle detection and safety distance calculation. The system detects the distance between the car and the in front of vehicles (obstacles) and uses the vehicle speed and other parameters to calculate the braking safety distance of the moving car. The system compares the obstacle distance and braking safety distance which are used to determine the moving vehicle's safety distance is enough or not. This paper focuses on the solution algorithm presentation.", "title": "" }, { "docid": "neg:1840050_12", "text": "Most of the existing methods for the recognition of faces and expressions consider either the expression-invariant face recognition problem or the identity-independent facial expression recognition problem. In this paper, we propose joint face and facial expression recognition using a dictionary-based component separation algorithm (DCS). In this approach, the given expressive face is viewed as a superposition of a neutral face component with a facial expression component which is sparse with respect to the whole image. This assumption leads to a dictionary-based component separation algorithm which benefits from the idea of sparsity and morphological diversity. This entails building data-driven dictionaries for neutral and expressive components. The DCS algorithm then uses these dictionaries to decompose an expressive test face into its constituent components. The sparse codes we obtain as a result of this decomposition are then used for joint face and expression recognition. Experiments on publicly available expression and face data sets show the effectiveness of our method.", "title": "" }, { "docid": "neg:1840050_13", "text": "Finite element analysis A 2D finite element analysis for the numerical prediction of capacity curve of unreinforced masonry (URM) walls is conducted. The studied model is based on the fiber finite element approach. The emphasis of this paper will be on the errors obtained from fiber finite element analysis of URM structures under pushover analysis. The masonry material is modeled by different constitutive stress-strain model in compression and tension. OpenSees software is employed to analysis the URM walls. Comparison of numerical predictions with experimental data, it is shown that the fiber model employed in OpenSees cannot properly predict the behavior of URM walls with balance between accuracy and low computational efforts. Additionally, the finite element analyses results show appropriate predictions of some experimental data when the real tensile strength of masonry material is changed. Hence, from the viewpoint of this result, it is concluded that obtained results from fiber finite element analyses employed in OpenSees are unreliable because the exact behavior of masonry material is different from the adopted masonry material models used in modeling process.", "title": "" }, { "docid": "neg:1840050_14", "text": "As the heart of an aircraft, the aircraft engine's condition directly affects the safety, reliability, and operation of the aircraft. Prognostics and health management for aircraft engines can provide advance warning of failure and estimate the remaining useful life. 
However, aircraft engine systems are complex with both intangible and uncertain factors, it is difficult to model the complex degradation process, and no single prognostic approach can effectively solve this critical and complicated problem. Thus, fusion prognostics is conducted to obtain more accurate prognostics results. In this paper, a prognostics and health management-oriented integrated fusion prognostic framework is developed to improve the system state forecasting accuracy. This framework strategically fuses the monitoring sensor data and integrates the strengths of the data-driven prognostics approach and the experience-based approach while reducing their respective limitations. As an application example, this developed fusion prognostics framework is employed to predict the remaining useful life of an aircraft gas turbine engine based on sensor data. The results demonstrate that the proposed fusion prognostics framework is an effective prognostics tool, which can provide a more accurate and robust remaining useful life estimation than any single prognostics method.", "title": "" }, { "docid": "neg:1840050_15", "text": "Machine learning models with very low test error have been shown to be consistently vulnerable to small, adversarially chosen perturbations of the input. We hypothesize that this counterintuitive behavior is a result of the high-dimensional geometry of the data manifold, and explore this hypothesis on a simple highdimensional dataset. For this dataset we show a fundamental bound relating the classification error rate to the average distance to the nearest misclassification, which is independent of the model. We train different neural network architectures on this dataset and show their error sets approach this theoretical bound. As a result of the theory, the vulnerability of machine learning models to small adversarial perturbations is a logical consequence of the amount of test error observed. We hope that our theoretical analysis of this foundational synthetic case will point a way forward to explore how the geometry of complex real-world data sets leads to adversarial examples.", "title": "" }, { "docid": "neg:1840050_16", "text": "The purpose of the study was to measure objectively the home use of the reciprocating gait orthosis (RGO) and the electrically augmented (hybrid) RGO. It was hypothesised that RGO use would increase following provision of functional electrical stimulation (FES). Five adult subjects participated in the study with spinal cord lesions ranging from C2 (incomplete) to T6. Selection criteria included active RGO use and suitability for electrical stimulation. Home RGO use was measured for up to 18 months by determining the mean number of steps taken per week. During this time patients were supplied with the hybrid system. Three alternatives for the measurement of steps taken were investigated: a commercial digital pedometer, a magnetically actuated counter and a heel contact switch linked to an electronic counter. The latter was found to be the most reliable system and was used for all measurements. Additional information on RGO use was acquired using three patient diaries administered throughout the study and before and after the provision of the hybrid system. Testing of the original hypothesis was complicated by problems in finding a reliable measurement tool and difficulties with data collection. However, the results showed that overall use of the RGO, whether with or without stimulation, is low. 
Statistical analysis of the step counter results was not realistic. No statistically significant change in RGO use was found between the patient diaries. The study suggests that the addition of electrical stimulation does not increase RGO use. The study highlights the problem of objectively measuring orthotic use in the home.", "title": "" }, { "docid": "neg:1840050_17", "text": "Theoretical models predict that overconfident investors trade excessively. We test this prediction by partitioning investors on gender. Psychological research demonstrates that, in areas such as finance, men are more overconfident than women. Thus, theory predicts that men will trade more excessively than women. Using account data for over 35,000 households from a large discount brokerage, we analyze the common stock investments of men and women from February 1991 through January 1997. We document that men trade 45 percent more than women. Trading reduces men’s net returns by 2.65 percentage points a year as opposed to 1.72 percentage points for women.", "title": "" }, { "docid": "neg:1840050_18", "text": "Fuzzing is a popular dynamic program analysis technique used to find vulnerabilities in complex software. Fuzzing involves presenting a target program with crafted malicious input designed to cause crashes, buffer overflows, memory errors, and exceptions. Crafting malicious inputs in an efficient manner is a difficult open problem and often the best approach to generating such inputs is through applying uniform random mutations to pre-existing valid inputs (seed files). We present a learning technique that uses neural networks to learn patterns in the input files from past fuzzing explorations to guide future fuzzing explorations. In particular, the neural models learn a function to predict good (and bad) locations in input files to perform fuzzing mutations based on the past mutations and corresponding code coverage information. We implement several neural models including LSTMs and sequence-to-sequence models that can encode variable length input files. We incorporate our models in the state-of-the-art AFL (American Fuzzy Lop) fuzzer and show significant improvements in terms of code coverage, unique code paths, and crashes for various input formats including ELF, PNG, PDF, and XML.", "title": "" }, { "docid": "neg:1840050_19", "text": "Life often presents us with situations in which it is important to assess the “true” qualities of a person or object, but in which some factor(s) might have affected (or might yet affect) our initial perceptions in an undesired way. For example, in the Reginald Denny case following the 1993 Los Angeles riots, jurors were asked to determine the guilt or innocence of two African-American defendants who were charged with violently assaulting a Caucasian truck driver. Some of the jurors in this case might have been likely to realize that in their culture many of the popular media portrayals of African-Americans are violent in nature. Yet, these jurors ideally would not want those portrayals to influence their perceptions of the particular defendants in the case. In fact, the justice system is based on the assumption that such portrayals will not influence jury verdicts.
In our work on bias correction, we have been struck by the variety of potentially biasing factors that can be identified-including situational influences such as media, social norms, and general culture, and personal influences such as transient mood states, motives (e.g., to manage impressions or agree with liked others), and salient beliefs-and we have been impressed by the apparent ubiquity of correction phenomena (which appear to span many areas of psychological inquiry). Yet, systematic investigations of bias correction are in their early stages. Although various researchers have discussed the notion of effortful cognitive processes overcoming initial (sometimes “automatic”) biases in a variety of settings (e.g., Brewer, 1988; Chaiken, Liberman, & Eagly, 1989; Devine, 1989; Kruglanski & Freund, 1983; Neuberg & Fiske, 1987; Petty & Cacioppo, 1986), little attention has been given, until recently, to the specific processes by which biases are overcome when effort is targeted toward “correction of bias.” That is, when", "title": "" } ]
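Passage neg:1840050_18 above contrasts learned mutation placement with the baseline of applying uniform random mutations to seed files. That baseline is easy to make concrete; the snippet below is a toy mutate-and-run loop, not AFL or the neural models from that paper, and the target command, seed file name, and mutation rate are placeholders.

```python
import random
import subprocess
import tempfile

def mutate(seed: bytes, rate: float = 0.01) -> bytes:
    """Replace roughly `rate` of the bytes in a seed input uniformly at random."""
    data = bytearray(seed)
    for i in range(len(data)):
        if random.random() < rate:
            data[i] = random.randrange(256)
    return bytes(data)

def fuzz_once(target_cmd, seed: bytes) -> bool:
    """Run the target on one mutated input; report whether it crashed."""
    candidate = mutate(seed)
    with tempfile.NamedTemporaryFile(suffix=".bin") as f:
        f.write(candidate)
        f.flush()
        proc = subprocess.run(target_cmd + [f.name], capture_output=True)
    return proc.returncode < 0            # killed by a signal, so likely a crash

if __name__ == "__main__":
    seed = open("seed.png", "rb").read()                 # placeholder seed file
    crashes = sum(fuzz_once(["./target_parser"], seed)   # placeholder target binary
                  for _ in range(100))
    print(f"{crashes} crashing inputs out of 100")
```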
1840051
Community Detection in Multi-Dimensional Networks
[ { "docid": "pos:1840051_0", "text": "Clustering nodes in a graph is a useful general technique in data mining of large network data sets. In this context, Newman and Girvan [9] recently proposed an objective function for graph clustering called the Q function which allows automatic selection of the number of clusters. Empirically, higher values of the Q function have been shown to correlate well with good graph clusterings. In this paper we show how optimizing the Q function can be reformulated as a spectral relaxation problem and propose two new spectral clustering algorithms that seek to maximize Q. Experimental results indicate that the new algorithms are efficient and effective at finding both good clusterings and the appropriate number of clusters across a variety of real-world graph data sets. In addition, the spectral algorithms are much faster for large sparse graphs, scaling roughly linearly with the number of nodes n in the graph, compared to O(n) for previous clustering algorithms using the Q function.", "title": "" }, { "docid": "pos:1840051_1", "text": "Multiple view data, which have multiple representations from different feature spaces or graph spaces, arise in various data mining applications such as information retrieval, bioinformatics and social network analysis. Since different representations could have very different statistical properties, how to learn a consensus pattern from multiple representations is a challenging problem. In this paper, we propose a general model for multiple view unsupervised learning. The proposed model introduces the concept of mapping function to make the different patterns from different pattern spaces comparable and hence an optimal pattern can be learned from the multiple patterns of multiple representations. Under this model, we formulate two specific models for two important cases of unsupervised learning, clustering and spectral dimensionality reduction; we derive an iterating algorithm for multiple view clustering, and a simple algorithm providing a global optimum to multiple spectral dimensionality reduction. We also extend the proposed model and algorithms to evolutionary clustering and unsupervised learning with side information. Empirical evaluations on both synthetic and real data sets demonstrate the effectiveness of the proposed model and algorithms.", "title": "" } ]
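The Q function maximized in pos:1840051_0 has the closed form Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j), which can be evaluated directly for any candidate partition. The sketch below computes Q with NumPy; it is a plain evaluation of the objective, not the spectral relaxation or the multi-view extension described in the positive passages, and the example graph is invented.

```python
import numpy as np

def modularity(A, communities):
    """Newman-Girvan modularity Q of a partition of an undirected graph.

    A           : (n, n) symmetric adjacency matrix.
    communities : length-n array of community labels.
    """
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                 # node degrees
    two_m = A.sum()                   # 2m for an undirected graph
    same = np.equal.outer(communities, communities)
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

# Two triangles joined by a single edge: the natural 2-community split.
A = np.zeros((6, 6))
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
for i, j in edges:
    A[i, j] = A[j, i] = 1
print(modularity(A, np.array([0, 0, 0, 1, 1, 1])))   # ~0.357 for the good split
print(modularity(A, np.array([0, 1, 0, 1, 0, 1])))   # a bad split scores lower
```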
[ { "docid": "neg:1840051_0", "text": "We give an introduction to computation and logic tailored for algebraists, and use this as a springboard to discuss geometric models of computation and the role of cut-elimination in these models, following Girard's geometry of interaction program. We discuss how to represent programs in the λ-calculus and proofs in linear logic as linear maps between infinite-dimensional vector spaces. The interesting part of this vector space semantics is based on the cofree cocommutative coalgebra of Sweedler [71] and the recent explicit computations of liftings in [62].", "title": "" }, { "docid": "neg:1840051_1", "text": "Victor Frankenstein sought to create an intelligent being imbued with the rules of civilized human conduct, who could further learn how to behave and possibly even evolve through successive generations into a more perfect form. Modern human composers similarly strive to create intelligent algorithmic music composition systems that can follow prespecified rules, learn appropriate patterns from a collection of melodies, or evolve to produce output more perfectly matched to some aesthetic criteria. Here we review recent efforts aimed at each of these three types of algorithmic composition. We focus particularly on evolutionary methods, and indicate how monstrous many of the results have been. We present a new method that uses coevolution to create linked artificial music critics and music composers, and describe how this method can attach the separate parts of rules, learning, and evolution together into one coherent body. “Invention, it must be humbly admitted, does not consist in creating out of void, but out of chaos; the materials must, in the first place, be afforded...” --Mary Shelley, Frankenstein (1831/1993, p. 299)", "title": "" }, { "docid": "neg:1840051_2", "text": "Communication technology plays an increasingly important role in the growing automated metering infrastructure (AMI) market. This paper presents a thorough analysis and comparison of four application layer protocols in the smart metering context. The inspected protocols are DLMS/COSEM, the Smart Message Language (SML), and the MMS and SOAP mappings of IEC 61850. The focus of this paper is on their use over TCP/IP. The protocols are first compared with respect to qualitative criteria such as the ability to transmit clock synchronization information. Afterwards the message size of meter reading requests and responses and the different binary encodings of the protocols are compared.", "title": "" }, { "docid": "neg:1840051_3", "text": "Due to deep automation, the configuration of many Cloud infrastructures is static and homogeneous, which, while easing administration, significantly decreases a potential attacker's uncertainty on a deployed Cloud-based service and hence increases the chance of the service being compromised. Moving-target defense (MTD) is a promising solution to the configuration staticity and homogeneity problem. This paper presents our findings on whether and to what extent MTD is effective in protecting a Cloud-based service with heterogeneous and dynamic attack surfaces - these attributes, which match the reality of current Cloud infrastructures, have not been investigated together in previous works on MTD in general network settings.
We 1) formulate a Cloud-based service security model that incorporates Cloud-specific features such as VM migration/snapshotting and the diversity/compatibility of migration, 2) consider the accumulative effect of the attacker's intelligence on the target service's attack surface, 3) model the heterogeneity and dynamics of the service's attack surfaces, as defined by the (dynamic) probability of the service being compromised, as an S-shaped generalized logistic function, and 4) propose a probabilistic MTD service deployment strategy that exploits the dynamics and heterogeneity of attack surfaces for protecting the service against attackers. Through simulation, we identify the conditions and extent of the proposed MTD strategy's effectiveness in protecting Cloud-based services. Namely, 1) MTD is more effective when the service deployment is dense in the replacement pool and/or when the attack is strong, and 2) attack-surface heterogeneity-and-dynamics awareness helps in improving MTD's effectiveness.", "title": "" }, { "docid": "neg:1840051_4", "text": "Autonomous Vehicles are currently being tested in a variety of scenarios. As we move towards Autonomous Vehicles, how should intersections look? To answer that question, we break down an intersection management into the different conundrums and scenarios involved in the trajectory planning and current approaches to solve them. Then, a brief analysis of current works in autonomous intersection is conducted. With a critical eye, we try to delve into the discrepancies of existing solutions while presenting some critical and important factors that have been addressed. Furthermore, open issues that have to be addressed are also emphasized. We also try to answer the question of how to benchmark intersection management algorithms by providing some factors that impact autonomous navigation at intersection.", "title": "" }, { "docid": "neg:1840051_5", "text": "The basic knowledge required to do sentiment analysis of Twitter is discussed in this review paper. Sentiment Analysis can be viewed as field of text mining, natural language processing. Thus we can study sentiment analysis in various aspects. This paper presents levels of sentiment analysis, approaches to do sentiment analysis, methodologies for doing it, and features to be extracted from text and the applications. Twitter is a microblogging service to which if sentiment analysis done one has to follow explicit path. Thus this paper puts overview about tweets extraction, their preprocessing and their sentiment analysis.", "title": "" }, { "docid": "neg:1840051_6", "text": "..........................................................................................................iii ACKNOWLEDGMENTS.........................................................................................iv TABLE OF CONTENTS .........................................................................................vi LIST OF TABLES................................................................................................viii LIST OF FIGURES ................................................................................................ix", "title": "" }, { "docid": "neg:1840051_7", "text": "New technologies provide expanded opportunities for interaction design. The growing number of possible ways to interact, in turn, creates a new responsibility for designers: Besides the product's visual aesthetics, one has to make choices about the aesthetics of interaction. 
This issue recently gained interest in Human-Computer Interaction (HCI) research. Based on a review of 19 approaches, we provide an overview of today's state of the art. We focused on approaches that feature \"qualities\", \"dimensions\" or \"parameters\" to describe interaction. Those fell into two broad categories. One group of approaches dealt with detailed spatio-temporal attributes of interaction sequences (i.e., action-reaction) on a sensomotoric level (i.e., form). The other group addressed the feelings and meanings an interaction is enveloped in rather than the interaction itself (i.e., experience). Surprisingly, only two approaches addressed both levels simultaneously, making the explicit link between form and experience. We discuss these findings and its implications for future theory building.", "title": "" }, { "docid": "neg:1840051_8", "text": "Designing technological systems for personalized education is an iterative and interdisciplinary process that demands a deep understanding of the application domain, the limitations of current methods and technologies, and the computational methods and complexities behind user modeling and adaptation. We present our design process and the Socially Assistive Robot (SAR) tutoring system to support the efforts of educators in teaching number concepts to preschool children. We focus on the computational considerations of designing a SAR system for young children that may later be personalized along multiple dimensions. We conducted an initial data collection to validate that the system is at the proper challenge level for our target population, and discovered promising patterns in participants' learning styles, nonverbal behavior, and performance. We discuss our plans to leverage the data collected to learn and validate a computational, multidimensional model of number concepts learning.", "title": "" }, { "docid": "neg:1840051_9", "text": "Fossil fuels currently supply most of the world's energy needs, and however unacceptable their long-term consequences, the supplies are likely to remain adequate for the next few generations. Scientists and policy makers must make use of this period of grace to assess alternative sources of energy and determine what is scientifically possible, environmentally acceptable and technologically promising.", "title": "" }, { "docid": "neg:1840051_10", "text": "Most P2P systems that provide a DHT abstraction distribute objects among “peer nodes” by choosing random identifiers for the objects. This could result in an O(log N) imbalance. Besides, P2P systems can be highly heterogeneous, i.e. they may consist of peers that range from old desktops behind modem lines to powerful servers connected to the Internet through high-bandwidth lines. In this paper, we address the problem of load balancing in such P2P systems. We explore the space of designing load-balancing algorithms that uses the notion of “virtual servers”. We present three schemes that differ primarily in the amount of information used to decide how to re-arrange load. Our simulation results show that even the simplest scheme is able to balance the load within 80% of the optimal value, while the most complex scheme is able to balance the load within 95% of the optimal value.", "title": "" }, { "docid": "neg:1840051_11", "text": "Physical fatigue has been identified as a risk factor associated with the onset of occupational injury. Muscular fatigue developed from repetitive hand-gripping tasks is of particular concern. 
This study examined the use of a maximal, repetitive, static power grip test of strength-endurance in detecting differences in exertions between workers with uninjured and injured hands, and workers who were asked to provide insincere exertions. The main dependent variable of interest was power grip muscular force measured with a force strain gauge. Group data showed that the power grip protocol, used in this study, provided a valid and reliable estimate of wrist-hand strength-endurance. Force fatigue curves showed both linear and curvilinear effects among the study groups. An endurance index based on force decrement during repetitive power grip was shown to differentiate between uninjured, injured, and insincere groups.", "title": "" }, { "docid": "neg:1840051_12", "text": "One of the mechanisms by which the innate immune system senses the invasion of pathogenic microorganisms is through the Toll-like receptors (TLRs), which recognize specific molecular patterns that are present in microbial components. Stimulation of different TLRs induces distinct patterns of gene expression, which not only leads to the activation of innate immunity but also instructs the development of antigen-specific acquired immunity. Here, we review the rapid progress that has recently improved our understanding of the molecular mechanisms that mediate TLR signalling.", "title": "" }, { "docid": "neg:1840051_13", "text": "In psychology the Rubber Hand Illusion (RHI) is an experiment where participants get the feeling that a fake hand is becoming their own. Recently, new testing methods using an action based paradigm have induced stronger RHI. However, these experiments are facing limitations because they are difficult to implement and lack of rigorous experimental conditions. This paper proposes a low-cost open source robotic hand which is easy to manufacture and removes these limitations. This device reproduces fingers movement of the participants in real time. A glove containing sensors is worn by the participant and records fingers flexion. Then a microcontroller drives hobby servo-motors on the robotic hand to reproduce the corresponding fingers position. A connection between the robotic device and a computer can be established, enabling the experimenters to tune precisely the desired parameters using Matlab. Since this is the first time a robotic hand is developed for the RHI, a validation study has been conducted. This study confirms previous results found in the literature. This study also illustrates the fact that the robotic hand can be used to conduct innovative experiments in the RHI field. Understanding such RHI is important because it can provide guidelines for prosthetic design.", "title": "" }, { "docid": "neg:1840051_14", "text": "An early warning system can help to identify at-risk students, or predict student learning performance by analyzing learning portfolios recorded in a learning management system (LMS). Although previous studies have shown the applicability of determining learner behaviors from an LMS, most investigated datasets are not assembled from online learning courses or from whole learning activities undertaken on courses that can be analyzed to evaluate students’ academic achievement. Previous studies generally focus on the construction of predictors for learner performance evaluation after a course has ended, and neglect the practical value of an ‘‘early warning’’ system to predict at-risk students while a course is in progress. 
We collected the complete learning activities of an online undergraduate course and applied data-mining techniques to develop an early warning system. Our results showed that, timedependent variables extracted from LMS are critical factors for online learning. After students have used an LMS for a period of time, our early warning system effectively characterizes their current learning performance. Data-mining techniques are useful in the construction of early warning systems; based on our experimental results, classification and regression tree (CART), supplemented by AdaBoost is the best classifier for the evaluation of learning performance investigated by this study. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840051_15", "text": "Developing wireless sensor networks can enable information gathering, information processing and reliable monitoring of a variety of environments for both civil and military applications. It is however necessary to agree upon a basic architecture for building sensor network applications. This paper presents a general classification of sensor network applications based on their network configurations and discusses some of their architectural requirements. We propose a generic architecture for a specific subclass of sensor applications which we define as self-configurable systems where a large number of sensors coordinate amongst themselves to achieve a large sensing task. Throughout this paper we assume a certain subset of the sensors to be immobile. This paper lists the general architectural and infra-structural components necessary for building this class of sensor applications. Given the various architectural components, we present an algorithm that self-organizes the sensors into a network in a transparent manner. Some of the basic goals of our algorithm include minimizing power utilization, localizing operations and tolerating node and link failures.", "title": "" }, { "docid": "neg:1840051_16", "text": "Retrieving object instances among cluttered scenes efficiently requires compact yet comprehensive regional image representations. Intuitively, object semantics can help build the index that focuses on the most relevant regions. However, due to the lack of bounding-box datasets for objects of interest among retrieval benchmarks, most recent work on regional representations has focused on either uniform or class-agnostic region selection. In this paper, we first fill the void by providing a new dataset of landmark bounding boxes, based on the Google Landmarks dataset, that includes 94k images with manually curated boxes from 15k unique landmarks. Then, we demonstrate how a trained landmark detector, using our new dataset, can be leveraged to index image regions and improve retrieval accuracy while being much more efficient than existing regional methods. In addition, we further introduce a novel regional aggregated selective match kernel (R-ASMK) to effectively combine information from detected regions into an improved holistic image representation. R-ASMK boosts image retrieval accuracy substantially at no additional memory cost, while even outperforming systems that index image regions independently. Our complete image retrieval system improves upon the previous state-of-the-art by significant margins on the Revisited Oxford and Paris datasets. 
Code and data will be released.", "title": "" }, { "docid": "neg:1840051_17", "text": "At the forefront of debates on language are new data demonstrating infants' early acquisition of information about their native language. The data show that infants perceptually \"map\" critical aspects of ambient language in the first year of life before they can speak. Statistical properties of speech are picked up through exposure to ambient language. Moreover, linguistic experience alters infants' perception of speech, warping perception in the service of language. Infants' strategies are unexpected and unpredicted by historical views. A new theoretical position has emerged, and six postulates of this position are described.", "title": "" }, { "docid": "neg:1840051_18", "text": "Domain Name System (DNS) traffic has become a rich source of information from a security perspective. However, the volume of DNS traffic has been skyrocketing, such that security analyzers experience difficulties in collecting, retrieving, and analyzing the DNS traffic in response to modern Internet threats. More precisely, much of the research relating to DNS has been negatively affected by the dramatic increase in the number of queries and domains. This phenomenon has necessitated a scalable approach, which is not dependent on the volume of DNS traffic. In this paper, we introduce a fast and scalable approach, called PsyBoG, for detecting malicious behavior within large volumes of DNS traffic. PsyBoG leverages a signal processing technique, power spectral density (PSD) analysis, to discover the major frequencies resulting from the periodic DNS queries of botnets. The PSD analysis allows us to detect sophisticated botnets regardless of their evasive techniques, sporadic behavior, and even normal users’ traffic. Furthermore, our method allows us to deal with large-scale DNS data by only utilizing the timing information of query generation regardless of the number of queries and domains. Finally, PsyBoG discovers groups of hosts which show similar patterns of malicious behavior. PsyBoG was evaluated by conducting experiments with two different data sets, namely DNS traces generated by real malware in controlled environments and a large number of real-world DNS traces collected from a recursive DNS server, an authoritative DNS server, and Top-Level Domain (TLD) servers. We utilized the malware traces as the ground truth, and, as a result, PsyBoG performed with a detection accuracy of 95%. By using a large number of DNS traces, we were able to demonstrate the scalability and effectiveness of PsyBoG in terms of practical usage. Finally, PsyBoG detected 23 unknown and 26 known botnet groups with 0.1% false positives. © 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840051_19", "text": "Connecting mathematical logic and computation, it ensures that some aspects of programming are absolute.", "title": "" } ]
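Passage neg:1840051_18 above detects periodic botnet DNS queries through power spectral density analysis of query timing. The idea reduces to binning query timestamps into a count series and looking for a dominant peak in its periodogram. The sketch below does this with NumPy on synthetic timestamps; it is not the PsyBoG system itself, and the bin width, bot query period, and noise level are invented for the example.

```python
import numpy as np

def dominant_period(timestamps, bin_width=1.0):
    """Bin query timestamps into counts and return the strongest period (seconds)."""
    t = np.asarray(timestamps)
    bins = np.arange(t.min(), t.max() + bin_width, bin_width)
    counts, _ = np.histogram(t, bins=bins)
    counts = counts - counts.mean()                 # drop the DC component
    power = np.abs(np.fft.rfft(counts)) ** 2        # periodogram
    freqs = np.fft.rfftfreq(len(counts), d=bin_width)
    peak = np.argmax(power[1:]) + 1                 # skip the zero frequency
    return 1.0 / freqs[peak]

# Synthetic trace: a bot querying every 30 s, plus uniform "human" noise.
rng = np.random.default_rng(1)
bot = np.arange(0, 3600, 30) + rng.normal(0, 0.5, size=120)
human = rng.uniform(0, 3600, size=400)
print(dominant_period(np.concatenate([bot, human])))   # close to 30.0
```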
1840052
Recurrent Neural Networks for Customer Purchase Prediction on Twitter
[ { "docid": "pos:1840052_0", "text": "In present times, social forums such as Quora and Yahoo! Answers constitute powerful media through which people discuss on a variety of topics and express their intentions and thoughts. Here they often reveal their potential intent to purchase ‘Purchase Intent’ (PI). A purchase intent is defined as a text expression showing a desire to purchase a product or a service in future. Extracting posts having PI from a user’s social posts gives huge opportunities towards web personalization, targeted marketing and improving community observing systems. In this paper, we explore the novel problem of detecting PIs from social posts and classifying them. We find that using linguistic features along with statistical features of PI expressions achieves a significant improvement in PI classification over ‘bag-ofwords’ based features used in many present day socialmedia classification tasks. Our approach takes into consideration the specifics of social posts like limited contextual information, incorrect grammar, language ambiguities, etc. by extracting features at two different levels of text granularity word and phrase based features and grammatical dependency based features. Apart from these, the patterns observed in PI posts help us to identify some specific features.", "title": "" }, { "docid": "pos:1840052_1", "text": "We describe a new dependency parser for English tweets, TWEEBOPARSER. The parser builds on several contributions: new syntactic annotations for a corpus of tweets (TWEEBANK), with conventions informed by the domain; adaptations to a statistical parsing algorithm; and a new approach to exploiting out-of-domain Penn Treebank data. Our experiments show that the parser achieves over 80% unlabeled attachment accuracy on our new, high-quality test set and measure the benefit of our contributions. Our dataset and parser can be found at http://www.ark.cs.cmu.edu/TweetNLP.", "title": "" }, { "docid": "pos:1840052_2", "text": "The compositionality of meaning extends beyond the single sentence. Just as words combine to form the meaning of sentences, so do sentences combine to form the meaning of paragraphs, dialogues and general discourse. We introduce both a sentence model and a discourse model corresponding to the two levels of compositionality. The sentence model adopts convolution as the central operation for composing semantic vectors and is based on a novel hierarchical convolutional neural network. The discourse model extends the sentence model and is based on a recurrent neural network that is conditioned in a novel way both on the current sentence and on the current speaker. The discourse model is able to capture both the sequentiality of sentences and the interaction between different speakers. Without feature engineering or pretraining and with simple greedy decoding, the discourse model coupled to the sentence model obtains state of the art performance on a dialogue act classification experiment.", "title": "" } ]
[ { "docid": "neg:1840052_0", "text": "Waveguide twists are often necessary to provide polarization rotation between waveguide-based components. At terahertz frequencies, it is desirable to use a twist design that is compact in order to reduce loss; however, these designs are difficult if not impossible to realize using standard machining. This paper presents a micromachined compact waveguide twist for terahertz frequencies. The Rud-Kirilenko twist geometry is ideally suited to the micromachining processes developed at the University of Virginia. Measurements of a WR-1.5 micromachined twist exhibit a return loss near 20 dB and a median insertion loss of 0.5 dB from 600 to 750 GHz.", "title": "" }, { "docid": "neg:1840052_1", "text": "Identifying that a given binary program implements a specific cryptographic algorithm and finding out more information about the cryptographic code is an important problem. Proprietary programs and especially malicious software (so called malware) often use cryptography and we want to learn more about the context, e.g., which algorithms and keys are used by the program. This helps an analyst to quickly understand what a given binary program does and eases analysis. In this paper, we present several methods to identify cryptographic primitives (e.g., entire algorithms or only keys) within a given binary program in an automated way. We perform fine-grained dynamic binary analysis and use the collected information as input for several heuristics that characterize specific, unique aspects of cryptographic code. Our evaluation shows that these methods improve the state-of-the-art approaches in this area and that we can successfully extract cryptographic keys from a given malware binary.", "title": "" }, { "docid": "neg:1840052_2", "text": "This paper presents a three-phase single-stage bidirectional isolated matrix based AC-DC converter for energy storage. The matrix (3 × 1) topology directly converts the three-phase line voltages into high-frequency AC voltage which is subsequently, processed using a high-frequency transformer followed by a controlled rectifier. A modified Space Vector Modulation (SVM) based switching scheme is proposed to achieve high input power quality with high power conversion efficiency. Compared to the conventional two stage converter, the proposed converter provides single-stage conversion resulting in higher power conversion efficiency and higher power density. The operating principles of the proposed converter in both AC-DC and DC-AC mode are explained followed by steady state analysis. Simulation results are presented for 230 V, 50 Hz to 48 V isolated bidirectional converter at 2 kW output power to validate the theoretical claims.", "title": "" }, { "docid": "neg:1840052_3", "text": "In this paper, a vision-guided autonomous quadrotor in an air-ground multi-robot system has been proposed. This quadrotor is equipped with a monocular camera, IMUs and a flight computer, which enables autonomous flights. Two complementary pose/motion estimation methods, respectively marker-based and optical-flow-based, are developed by considering different altitudes in a flight. To achieve smooth take-off, stable tracking and safe landing with respect to a moving ground robot and desired trajectories, appropriate controllers are designed. Additionally, data synchronization and time delay compensation are applied to improve the system performance. 
Real-time experiments are conducted in both indoor and outdoor environments.", "title": "" }, { "docid": "neg:1840052_4", "text": "This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.", "title": "" }, { "docid": "neg:1840052_5", "text": "In this paper, shadow detection and compensation are treated as image enhancement tasks. The principal components analysis (PCA) and luminance based multi-scale Retinex (LMSR) algorithm are explored to detect and compensate shadow in high resolution satellite image. PCA provides orthogonally channels, thus allow the color to remain stable despite the modification of luminance. Firstly, the PCA transform is used to obtain the luminance channel, which enables us to detect shadow regions using histogram threshold technique. After detection, the LMSR technique is used to enhance the image only in luminance channel to compensate for shadows. Then the enhanced image is obtained by inverse transform of PCA. The final shadow compensation image is obtained by comparison of the original image, the enhanced image and the shadow detection image. 
Experiment results show the effectiveness of the proposed method.", "title": "" }, { "docid": "neg:1840052_6", "text": "This paper investigates conditions under which modifications to the reward function of a Markov decision process preserve the optimal policy. It is shown that, besides the positive linear transformation familiar from utility theory, one can add a reward for transitions between states that is expressible as the difference in value of an arbitrary potential function applied to those states. Furthermore, this is shown to be a necessary condition for invariance, in the sense that any other transformation may yield suboptimal policies unless further assumptions are made about the underlying MDP. These results shed light on the practice of reward shaping, a method used in reinforcement learning whereby additional training rewards are used to guide the learning agent. In particular, some well-known bugs in reward shaping procedures are shown to arise from non-potential-based rewards, and methods are given for constructing shaping potentials corresponding to distance-based and subgoal-based heuristics. We show that such potentials can lead to substantial reductions in learning time.", "title": "" }, { "docid": "neg:1840052_7", "text": "Robots that work with people foster social relationships between people and systems. The home is an interesting place to study the adoption and use of these systems. The home provides challenges from both technical and interaction perspectives. In addition, the home is a seat for many specialized human behaviors and needs, and has a long history of what is collected and used to functionally, aesthetically, and symbolically fit the home. To understand the social impact of robotic technologies, this paper presents an ethnographic study of consumer robots in the home. Six families’ experience of floor cleaning after receiving a new vacuum (a Roomba robotic vacuum or the Flair, a handheld upright) was studied. While the Flair had little impact, the Roomba changed people, cleaning activities, and other product use. In addition, people described the Roomba in aesthetic and social terms. The results of this study, while initial, generate implications for how robots should be designed for the home.", "title": "" }, { "docid": "neg:1840052_8", "text": "Instabilities in MOS-based devices with various substrates ranging from Si, SiGe, III-V to 2D channel materials, can be explained by defect levels in the dielectrics and non-radiative multi-phonon (NMP) barriers. However, recent results obtained on single defects have demonstrated that they can show a highly complex behaviour since they can transform between various states. As a consequence, detailed physical models are complicated and computationally expensive. As will be shown here, as long as only lifetime predictions for an ensemble of defects is needed, considerable simplifications are possible. We present and validate an oxide defect model that captures the essence of full physical models while reducing the complexity substantially. We apply this model to investigate the improvement in positive bias temperature instabilities due to a reliability anneal. Furthermore, we corroborate the simulated defect bands with prior defect-centric studies and perform lifetime projections.", "title": "" }, { "docid": "neg:1840052_9", "text": "News agencies and other news providers or consumers are confronted with the task of extracting events from news articles. 
This is done i) either to monitor and, hence, to be informed about events of specific kinds over time and/or ii) to react to events immediately. In the past, several promising approaches to extracting events from text have been proposed. Besides purely statistically-based approaches there are methods to represent events in a semantically-structured form, such as graphs containing actions (predicates), participants (entities), etc. However, it turns out to be very difficult to automatically determine whether an event is real or not. In this paper, we give an overview of approaches which proposed solutions for this research problem. We show that there is no gold standard dataset where real events are annotated in text documents in a fine-grained, semantically-enriched way. We present a methodology of creating such a dataset with the help of crowdsourcing and present preliminary results.", "title": "" }, { "docid": "neg:1840052_10", "text": "As 100-Gb/s coherent systems based on polarization- division multiplexed quadrature phase shift keying (PDM-QPSK), with aggregate wavelength-division multiplexed (WDM) capacities close to 10 Tb/s, are getting widely deployed, the use of high-spectral-efficiency quadrature amplitude modulation (QAM) to increase both per-channel interface rates and aggregate WDM capacities is the next evolutionary step. In this paper we review high-spectral-efficiency optical modulation formats for use in digital coherent systems. We look at fundamental as well as at technological scaling trends and highlight important trade-offs pertaining to the design and performance of coherent higher-order QAM transponders.", "title": "" }, { "docid": "neg:1840052_11", "text": "Gallium Nitride (GaN) based power devices have the potential to achieve higher efficiency and higher switching frequency than those possible with Silicon (Si) power devices. In literature, GaN based converters are claimed to offer higher power density. However, a detailed comparative analysis on the power density of GaN and Si based low power dc-dc flyback converter is not reported. In this paper, comparison of a 100 W, dc-dc flyback converter based on GaN and Si is presented. Both the converters are designed to ensure an efficiency of 80%. Based on this, the switching frequency for both the converters are determined. The analysis shows that the GaN based converter can be operated at approximately ten times the switching frequency of Si-based converter. This leads to a reduction in the area product of the flyback transformer required in GaN based converter. It is found that the volume of the flyback transformer can be reduced by a factor of six for a GaN based converter as compared to a Si based converter. Further, it is observed that the value of output capacitance used in the GaN based converter reduces by a factor of ten as compared to the Si based converter, implying a reduction in the size of the output capacitors. Therefore, a significant improvement in the power density of the GaN based converter as compared to the Si based converter is seen.", "title": "" }, { "docid": "neg:1840052_12", "text": "The Rey–Osterrieth Complex Figure Test (ROCF), which was developed by Rey in 1941 and standardized by Osterrieth in 1944, is a widely used neuropsychological test for the evaluation of visuospatial constructional ability and visual memory. Recently, the ROCF has been a useful tool for measuring executive function that is mediated by the prefrontal lobe. 
The ROCF consists of three test conditions: Copy, Immediate Recall and Delayed Recall. At the first step, subjects are given the ROCF stimulus card, and then asked to draw the same figure. Subsequently, they are instructed to draw what they remembered. Then, after a delay of 30 min, they are required to draw the same figure once again. The anticipated results vary according to the scoring system used, but commonly include scores related to location, accuracy and organization. Each condition of the ROCF takes 10 min to complete and the overall time of completion is about 30 min.", "title": "" }, { "docid": "neg:1840052_13", "text": "0747-5632/$ see front matter 2012 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.chb.2012.11.017 ⇑ Corresponding author. Address: School of Psychology, Australian Catholic University, 1100 Nudgee Rd., Banyo, QLD 4014, Australia. Tel.: +61 7 3623 7346; fax: +61 7 3623 7277. E-mail address: rachel.grieve@acu.edu.au (R. Grieve). Rachel Grieve ⇑, Michaelle Indian, Kate Witteveen, G. Anne Tolan, Jessica Marrington", "title": "" }, { "docid": "neg:1840052_14", "text": "This paper presents ROC curve, lift chart and calibration plot, three well known graphical techniques that are useful for evaluating the quality of classification models used in data mining and machine learning. Each technique, normally used and studied separately, defines its own measure of classification quality and its visualization. Here, we give a brief survey of the methods and establish a common mathematical framework which adds some new aspects, explanations and interrelations between these techniques. We conclude with an empirical evaluation and a few examples on how to use the presented techniques to boost classification accuracy.", "title": "" }, { "docid": "neg:1840052_15", "text": "Autonomous cars will likely hit the market soon, but trust into such a technology is one of the big discussion points in the public debate. Drivers who have always been in complete control of their car are expected to willingly hand over control and blindly trust a technology that could kill them. We argue that trust in autonomous driving can be increased by means of a driver interface that visualizes the car’s interpretation of the current situation and its corresponding actions. To verify this, we compared different visualizations in a user study, overlaid to a driving scene: (1) a chauffeur avatar, (2) a world in miniature, and (3) a display of the car’s indicators as the baseline. The world in miniature visualization increased trust the most. The human-like chauffeur avatar can also increase trust, however, we did not find a significant difference between the chauffeur and the baseline. ACM Classification", "title": "" }, { "docid": "neg:1840052_16", "text": "In this paper, we propose an innovative touch-less palm print recognition system. This project is motivated by the public’s demand for non-invasive and hygienic biometric technology. For various reasons, users are concerned about touching the biometric scanners. Therefore, we propose to use a low-resolution web camera to capture the user’s hand at a distance for recognition. The users do not need to touch any device for their palm print to be acquired. A novel hand tracking and palm print region of interest (ROI) extraction technique are used to track and capture the user’s palm in real-time video stream. 
The discriminative palm print features are extracted based on a new method that applies local binary pattern (LBP) texture descriptor on the palm print directional gradient responses. Experiments show promising result using the proposed method. Performance can be further improved when a modified probabilistic neural network (PNN) is used for feature matching. Verification can be performed in less than one second in the proposed system. 2008 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840052_17", "text": "OBJECTIVES\nExtracting data from publication reports is a standard process in systematic review (SR) development. However, the data extraction process still relies too much on manual effort which is slow, costly, and subject to human error. In this study, we developed a text summarization system aimed at enhancing productivity and reducing errors in the traditional data extraction process.\n\n\nMETHODS\nWe developed a computer system that used machine learning and natural language processing approaches to automatically generate summaries of full-text scientific publications. The summaries at the sentence and fragment levels were evaluated in finding common clinical SR data elements such as sample size, group size, and PICO values. We compared the computer-generated summaries with human written summaries (title and abstract) in terms of the presence of necessary information for the data extraction as presented in the Cochrane review's study characteristics tables.\n\n\nRESULTS\nAt the sentence level, the computer-generated summaries covered more information than humans do for systematic reviews (recall 91.2% vs. 83.8%, p<0.001). They also had a better density of relevant sentences (precision 59% vs. 39%, p<0.001). At the fragment level, the ensemble approach combining rule-based, concept mapping, and dictionary-based methods performed better than individual methods alone, achieving an 84.7% F-measure.\n\n\nCONCLUSION\nComputer-generated summaries are potential alternative information sources for data extraction in systematic review development. Machine learning and natural language processing are promising approaches to the development of such an extractive summarization system.", "title": "" }, { "docid": "neg:1840052_18", "text": "Cloud computing and its pay-as-you-go model continue to provide significant cost benefits and a seamless service delivery model for cloud consumers. The evolution of small-scale and large-scale geo-distributed datacenters operated and managed by individual cloud service providers raises new challenges in terms of effective global resource sharing and management of autonomously-controlled individual datacenter resources. Earlier solutions for geo-distributed clouds have focused primarily on achieving global efficiency in resource sharing that results in significant inefficiencies in local resource allocation for individual datacenters leading to unfairness in revenue and profit earned. In this paper, we propose a new contracts-based resource sharing model for federated geo-distributed clouds that allows cloud service providers to establish resource sharing contracts with individual datacenters apriori for defined time intervals during a 24 hour time period. Based on the established contracts, individual cloud service providers employ a cost-aware job scheduling and provisioning algorithm that enables tasks to complete and meet their response time requirements. 
The proposed techniques are evaluated through extensive experiments using realistic workloads, and the results demonstrate the effectiveness, scalability and resource sharing efficiency of the proposed model.", "title": "" } ]
1840053
A typology of crowdfunding sponsors: Birds of a feather flock together?
[ { "docid": "pos:1840053_0", "text": "Consumers have recently begun to play a new role in some markets: that of providing capital and investment support to the offering. This phenomenon, called crowdfunding, is a collective effort by people who network and pool their money together, usually via the Internet, in order to invest in and support efforts initiated by other people or organizations. Successful service businesses that organize crowdfunding and act as intermediaries are emerging, attesting to the viability of this means of attracting investment. Employing a “Grounded Theory” approach, this paper performs an in-depth qualitative analysis of three cases involving crowdfunding initiatives: SellaBand in the music business, Trampoline in financial services, and Kapipal in non-profit services. These cases were selected to represent a diverse set of crowdfunding operations that vary in terms of risk/return for the investorconsumer and the type of consumer involvement. The analysis offers important insights about investor behaviour in crowdfunding service models, the potential determinants of such behaviour, and variations in behaviour and determinants across different service models. The findings have implications for service managers interested in launching and/or managing crowdfunding initiatives, and for service theory in terms of extending the consumer’s role from co-production and co-creation to investment.", "title": "" }, { "docid": "pos:1840053_1", "text": "Ideas competitions appear to be a promising tool for crowdsourcing and open innovation processes, especially for business-to-business software companies. active participation of potential lead users is the key to success. Yet a look at existing ideas competitions in the software field leads to the conclusion that many information technology (It)–based ideas competitions fail to meet requirements upon which active participation is established. the paper describes how activation-enabling functionalities can be systematically designed and implemented in an It-based ideas competition for enterprise resource planning software. We proceeded to evaluate the outcomes of these design measures and found that participation can be supported using a two-step model. the components of the model support incentives and motives of users. Incentives and motives of the users then support the process of activation and consequently participation throughout the ideas competition. this contributes to the successful implementation and maintenance of the ideas competition, thereby providing support for the development of promising innovative ideas. the paper concludes with a discussion of further activation-supporting components yet to be implemented and points to rich possibilities for future research in these areas.", "title": "" } ]
[ { "docid": "neg:1840053_0", "text": "This paper describes and evaluates log-linear parsing models for Combinatory Categorial Grammar (CCG). A parallel implementation of the L-BFGS optimisation algorithm is described, which runs on a Beowulf cluster allowing the complete Penn Treebank to be used for estimation. We also develop a new efficient parsing algorithm for CCG which maximises expected recall of dependencies. We compare models which use all CCG derivations, including nonstandard derivations, with normal-form models. The performances of the two models are comparable and the results are competitive with existing wide-coverage CCG parsers.", "title": "" }, { "docid": "neg:1840053_1", "text": "Recently a revision of the cell theory has been proposed, which has several implications both for physiology and pathology. This revision is founded on adapting the old Julius von Sach’s proposal (1892) of the Energide as the fundamental universal unit of eukaryotic life. This view maintains that, in most instances, the living unit is the symbiotic assemblage of the cell periphery complex organized around the plasma membrane, some peripheral semi-autonomous cytosol organelles (as mitochondria and plastids, which may be or not be present), and of the Energide (formed by the nucleus, microtubules, and other satellite structures). A fundamental aspect is the proposal that the Energide plays a pivotal and organizing role of the entire symbiotic assemblage (see Appendix 1). The present paper discusses how the Energide paradigm implies a revision of the concept of the internal milieu. As a matter of fact, the Energide interacts with the cytoplasm that, in turn, interacts with the interstitial fluid, and hence with the medium that has been, classically, known as the internal milieu. Some implications of this aspect have been also presented with the help of a computational model in a mathematical Appendix 2 to the paper. Finally, relevances of the Energide concept for the information handling in the central nervous system are discussed especially in relation to the inter-Energide exchange of information.", "title": "" }, { "docid": "neg:1840053_2", "text": "Statistical relational AI (StarAI) aims at reasoning and learning in noisy domains described in terms of objects and relationships by combining probability with first-order logic. With huge advances in deep learning in the current years, combining deep networks with first-order logic has been the focus of several recent studies. Many of the existing attempts, however, only focus on relations and ignore object properties. The attempts that do consider object properties are limited in terms of modelling power or scalability. In this paper, we develop relational neural networks (RelNNs) by adding hidden layers to relational logistic regression (the relational counterpart of logistic regression). We learn latent properties for objects both directly and through general rules. Back-propagation is used for training these models. A modular, layer-wise architecture facilitates utilizing the techniques developed within deep learning community to our architecture. Initial experiments on eight tasks over three real-world datasets show that RelNNs are promising models for relational learning.", "title": "" }, { "docid": "neg:1840053_3", "text": "For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse, without catastrophic forgetting. 
PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks. Agents are pathways (views) through the network which determine the subset of parameters that are used and updated by the forwards and backwards passes of the backpropogation algorithm. During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation. Pathway fitness is the performance of that pathway measured according to a cost function. We demonstrate successful transfer learning; fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B, allows task B to be learned faster than it could be learned from scratch or after fine-tuning. Paths evolved on task B re-use parts of the optimal path evolved on task A. Positive transfer was demonstrated for binary MNIST, CIFAR, and SVHN supervised learning classification tasks, and a set of Atari and Labyrinth reinforcement learning tasks, suggesting PathNets have general applicability for neural network training. Finally, PathNet also significantly improves the robustness to hyperparameter choices of a parallel asynchronous reinforcement learning algorithm (A3C).", "title": "" }, { "docid": "neg:1840053_4", "text": "Establishing unique identities for both humans and end systems has been an active research problem in the security community, giving rise to innovative machine learning-based authentication techniques. Although such techniques offer an automated method to establish identity, they have not been vetted against sophisticated attacks that target their core machine learning technique. This paper demonstrates that mimicking the unique signatures generated by host fingerprinting and biometric authentication systems is possible. We expose the ineffectiveness of underlying machine learning classification models by constructing a blind attack based around the query synthesis framework and utilizing Explainable–AI (XAI) techniques. We launch an attack in under 130 queries on a state-of-the-art face authentication system, and under 100 queries on a host authentication system. We examine how these attacks can be defended against and explore their limitations. XAI provides an effective means for adversaries to infer decision boundaries and provides a new way forward in constructing attacks against systems using machine learning models for authentication.", "title": "" }, { "docid": "neg:1840053_5", "text": "Who did what to whom is a major focus in natural language understanding, which is right the aim of semantic role labeling (SRL). Although SRL is naturally essential to text comprehension tasks, it is surprisingly ignored in previous work. This paper thus makes the first attempt to let SRL enhance text comprehension and inference through specifying verbal arguments and their corresponding semantic roles. In terms of deep learning models, our embeddings are enhanced by semantic role labels for more fine-grained semantics. We show that the salient labels can be conveniently added to existing models and significantly improve deep learning models in challenging text comprehension tasks. 
Extensive experiments on benchmark machine reading comprehension and inference datasets verify that the proposed semantic learning helps our system reach new state-of-the-art.", "title": "" }, { "docid": "neg:1840053_6", "text": "In this paper, a new topology for rectangular waveguide bandpass and low-pass filters is presented. A simple, accurate, and robust design technique for these novel meandered waveguide filters is provided. The proposed filters employ a concatenation of ±90° $E$-plane mitered bends (±90° EMBs) with different heights and lengths, whose dimensions are consecutively and independently calculated. Each ±90° EMB satisfies a local target reflection coefficient along the device so that they can be calculated separately. The novel structures allow drastically reduce the total length of the filters and embed bends if desired, or even to provide routing capabilities. Furthermore, the new meandered topology allows the introduction of transmission zeros above the passband of the low-pass filter, which can be controlled by the free parameters of the ±90° EMBs. A bandpass and a low-pass filter with meandered topology have been designed following the proposed novel technique. Measurements of the manufactured prototypes are also included to validate the novel topology and design technique, achieving excellent agreement with the simulation results.", "title": "" }, { "docid": "neg:1840053_7", "text": "In this paper we propose a deep architecture for detecting people attributes (e.g. gender, race, clothing …) in surveillance contexts. Our proposal explicitly deal with poor resolution and occlusion issues that often occur in surveillance footages by enhancing the images by means of Deep Convolutional Generative Adversarial Networks (DCGAN). Experiments show that by combining both our Generative Reconstruction and Deep Attribute Classification Network we can effectively extract attributes even when resolution is poor and in presence of strong occlusions up to 80% of the whole person figure.", "title": "" }, { "docid": "neg:1840053_8", "text": "In recent years, advances in the design of convolutional neural networks have resulted in significant improvements on the image classification and object detection problems. One of the advances is networks built by stacking complex cells, seen in such networks as InceptionNet and NasNet. These cells are either constructed by hand, generated by generative networks or discovered by search. Unlike conventional networks (where layers consist of a convolution block, sampling and non linear unit), the new cells feature more complex designs consisting of several filters and other operators connected in series and parallel. Recently, several cells have been proposed or generated that are supersets of previously proposed custom or generated cells. Influenced by this, we introduce a network construction method based on EnvelopeNets. An EnvelopeNet is a deep convolutional neural network of stacked EnvelopeCells. EnvelopeCells are supersets (or envelopes) of previously proposed handcrafted and generated cells. We propose a method to construct improved network architectures by restructuring EnvelopeNets. The algorithm restructures an EnvelopeNet by rearranging blocks in the network. It identifies blocks to be restructured using metrics derived from the featuremaps collected during a partial training run of the EnvelopeNet. 
The method requires less computation resources to generate an architecture than an optimized architecture search over the entire search space of blocks. The restructured networks have higher accuracy on the image classification problem on a representative dataset than both the generating EnvelopeNet and an equivalent arbitrary network.", "title": "" }, { "docid": "neg:1840053_9", "text": "Open domain response generation has achieved remarkable progress in recent years, but sometimes yields short and uninformative responses. We propose a new paradigm, prototype-then-edit for response generation, that first retrieves a prototype response from a pre-defined index and then edits the prototype response according to the differences between the prototype context and current context. Our motivation is that the retrieved prototype provides a good start-point for generation because it is grammatical and informative, and the post-editing process further improves the relevance and coherence of the prototype. In practice, we design a context-aware editing model that is built upon an encoder-decoder framework augmented with an editing vector. We first generate an edit vector by considering lexical differences between a prototype context and current context. After that, the edit vector and the prototype response representation are fed to a decoder to generate a new response. Experiment results on a large scale dataset demonstrate that our new paradigm significantly increases the relevance, diversity and originality of generation results, compared to traditional generative models. Furthermore, our model outperforms retrieval-based methods in terms of relevance and originality.", "title": "" }, { "docid": "neg:1840053_10", "text": "This paper addresses the task of document retrieval based on the degree of document relatedness to the meanings of a query by presenting a semantic-enabled language model. Our model relies on the use of semantic linking systems for forming a graph representation of documents and queries, where nodes represent concepts extracted from documents and edges represent semantic relatedness between concepts. Based on this graph, our model adopts a probabilistic reasoning model for calculating the conditional probability of a query concept given values assigned to document concepts. We present an integration framework for interpolating other retrieval systems with the presented model in this paper. Our empirical experiments on a number of TREC collections show that the semantic retrieval has a synergetic impact on the results obtained through state of the art keyword-based approaches, and the consideration of semantic information obtained from entity linking on queries and documents can complement and enhance the performance of other retrieval models.", "title": "" }, { "docid": "neg:1840053_11", "text": "Despite rapid technological advances in computer hardware and software, insecure behavior by individual computer users continues to be a significant source of direct cost and productivity loss. Why do individuals, many of whom are aware of the possible grave consequences of low-level insecure behaviors such as failure to backup work and disclosing passwords, continue to engage in unsafe computing practices? In this article we propose a conceptual model of this behavior as the outcome of a boundedly-rational choice process. We explore this model in a survey of undergraduate students (N = 167) at two large public universities. 
We asked about the frequency with which they engaged in five commonplace but unsafe computing practices, and probed their decision processes with regard to these practices. Although our respondents saw themselves as knowledgeable, competent users, and were broadly aware that serious consequences were quite likely to result, they reported frequent unsafe computing behaviors. We discuss the implications of these findings both for further research on risky computing practices and for training and enforcement policies that will be needed in the organizations these students will shortly be entering.", "title": "" }, { "docid": "neg:1840053_12", "text": "In the fashion industry, demand forecasting is particularly complex: companies operate with a large variety of short lifecycle products, deeply influenced by seasonal sales, promotional events, weather conditions, advertising and marketing campaigns, on top of festivities and socio-economic factors. At the same time, shelf-out-of-stock phenomena must be avoided at all costs. Given the strong seasonal nature of the products that characterize the fashion sector, this paper aims to highlight how the Fourier method can represent an easy and more effective forecasting method compared to other widespread heuristics normally used. For this purpose, a comparison between the fast Fourier transform algorithm and another two techniques based on moving average and exponential smoothing was carried out on a set of 4year historical sales data of a €60+ million turnover mediumto large-sized Italian fashion company, which operates in the women’s textiles apparel and clothing sectors. The entire analysis was performed on a common spreadsheet, in order to demonstrate that accurate results exploiting advanced numerical computation techniques can be carried out without necessarily using expensive software.", "title": "" }, { "docid": "neg:1840053_13", "text": "Detecting system anomalies is an important problem in many fields such as security, fault management, and industrial optimization. Recently, invariant network has shown to be powerful in characterizing complex system behaviours. In the invariant network, a node represents a system component and an edge indicates a stable, significant interaction between two components. Structures and evolutions of the invariance network, in particular the vanishing correlations, can shed important light on locating causal anomalies and performing diagnosis. However, existing approaches to detect causal anomalies with the invariant network often use the percentage of vanishing correlations to rank possible casual components, which have several limitations: (1) fault propagation in the network is ignored, (2) the root casual anomalies may not always be the nodes with a high percentage of vanishing correlations, (3) temporal patterns of vanishing correlations are not exploited for robust detection, and (4) prior knowledge on anomalous nodes are not exploited for (semi-)supervised detection. To address these limitations, in this article we propose a network diffusion based framework to identify significant causal anomalies and rank them. Our approach can effectively model fault propagation over the entire invariant network and can perform joint inference on both the structural and the time-evolving broken invariance patterns. As a result, it can locate high-confidence anomalies that are truly responsible for the vanishing correlations and can compensate for unstructured measurement noise in the system. 
Moreover, when the prior knowledge on the anomalous status of some nodes are available at certain time points, our approach is able to leverage them to further enhance the anomaly inference accuracy. When the prior knowledge is noisy, our approach also automatically learns reliable information and reduces impacts from noises. By performing extensive experiments on synthetic datasets, bank information system datasets, and coal plant cyber-physical system datasets, we demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "neg:1840053_14", "text": "Event-related desynchronization/synchronization patterns during right/left motor imagery (MI) are effective features for an electroencephalogram-based brain-computer interface (BCI). As MI tasks are subject-specific, selection of subject-specific discriminative frequency components play a vital role in distinguishing these patterns. This paper proposes a new discriminative filter bank (FB) common spatial pattern algorithm to extract subject-specific FB for MI classification. The proposed method enhances the classification accuracy in BCI competition III dataset IVa and competition IV dataset IIb. Compared to the performance offered by the existing FB-based method, the proposed algorithm offers error rate reductions of 17.42% and 8.9% for BCI competition datasets III and IV, respectively.", "title": "" }, { "docid": "neg:1840053_15", "text": "Five different proposed measures of similarity or semantic distance in WordNet were experimentally compared by examining their performance in a real-word spelling correction system. It was found that Jiang and Conrath’s measure gave the best results overall. That of Hirst and St-Onge seriously over-related, that of Resnik seriously under-related, and those of Lin and of Leacock and Chodorow fell in between.", "title": "" }, { "docid": "neg:1840053_16", "text": "Analytical calculation methods for all the major components of the synchronous inductance of tooth-coil permanent-magnet synchronous machines are reevaluated in this paper. The inductance estimation is different in the tooth-coil machine compared with the one in the traditional rotating field winding machine. The accuracy of the analytical torque calculation highly depends on the estimated synchronous inductance. Despite powerful finite element method (FEM) tools, an accurate and fast analytical method is required at an early design stage to find an initial machine design structure with the desired performance. The results of the analytical inductance calculation are verified and assessed in terms of accuracy with the FEM simulation results and with the prototype measurement results.", "title": "" }, { "docid": "neg:1840053_17", "text": "The paper investigates techniques for extracting data from HTML sites through the use of automatically generated wrappers. To automate the wrapper generation and the data extraction process, the paper develops a novel technique to compare HTML pages and generate a wrapper based on their similarities and differences. Experimental results on real-life data-intensive Web sites confirm the feasibility of the approach.", "title": "" }, { "docid": "neg:1840053_18", "text": "Nucleic acids have emerged as powerful biological and nanotechnological tools. In biological and nanotechnological experiments, methods of extracting and purifying nucleic acids from various types of cells and their storage are critical for obtaining reproducible experimental results. 
In nanotechnological experiments, methods for regulating the conformational polymorphism of nucleic acids and increasing sequence selectivity for base pairing of nucleic acids are important for developing nucleic acid-based nanomaterials. However, dearth of media that foster favourable behaviour of nucleic acids has been a bottleneck for promoting the biology and nanotechnology using the nucleic acids. Ionic liquids (ILs) are solvents that may be potentially used for controlling the properties of the nucleic acids. Here, we review researches regarding the behaviour of nucleic acids in ILs. The efficiency of extraction and purification of nucleic acids from biological samples is increased by IL addition. Moreover, nucleic acids in ILs show long-term stability, which maintains their structures and enhances nuclease resistance. Nucleic acids in ILs can be used directly in polymerase chain reaction and gene expression analysis with high efficiency. Moreover, the stabilities of the nucleic acids for duplex, triplex, and quadruplex (G-quadruplex and i-motif) structures change drastically with IL cation-nucleic acid interactions. Highly sensitive DNA sensors have been developed based on the unique changes in the stability of nucleic acids in ILs. The behaviours of nucleic acids in ILs detailed here should be useful in the design of nucleic acids to use as biological and nanotechnological tools.", "title": "" } ]
1840054
Habits in everyday life: thought, emotion, and action.
[ { "docid": "pos:1840054_0", "text": "The authors review evidence that self-control may consume a limited resource. Exerting self-control may consume self-control strength, reducing the amount of strength available for subsequent self-control efforts. Coping with stress, regulating negative affect, and resisting temptations require self-control, and after such self-control efforts, subsequent attempts at self-control are more likely to fail. Continuous self-control efforts, such as vigilance, also degrade over time. These decrements in self-control are probably not due to negative moods or learned helplessness produced by the initial self-control attempt. These decrements appear to be specific to behaviors that involve self-control; behaviors that do not require self-control neither consume nor require self-control strength. It is concluded that the executive component of the self--in particular, inhibition--relies on a limited, consumable resource.", "title": "" } ]
[ { "docid": "neg:1840054_0", "text": "This paper presents a LVDS (low voltage differential signal) driver, which works at 2Gbps, with a pre-emphasis circuit compensating the attenuation of limited bandwidth of channel. To make the output common-mode (CM) voltage stable over process, temperature, and supply voltage variations, a closed-loop negative feedback circuit is added in this work. The LVDS driver is designed in 0.13um CMOS technology using both thick (3.3V) and thin (1.2V) gate oxide device, simulated with transmission line model and package parasitic model. The simulated results show that this driver can operate up to 2Gbps with random data patterns.", "title": "" }, { "docid": "neg:1840054_1", "text": "Reinforcement learning provides both qualitative and quantitative frameworks for understanding and modeling adaptive decision-making in the face of rewards and punishments. Here we review the latest dispatches from the forefront of this field, and map out some of the territories where lie monsters.", "title": "" }, { "docid": "neg:1840054_2", "text": "We describe an algorithm for approximate inference in graphical models based on Hölder’s inequality that provides upper and lower bounds on common summation problems such as computing the partition function or probability of evidence in a graphical model. Our algorithm unifies and extends several existing approaches, including variable elimination techniques such as minibucket elimination and variational methods such as tree reweighted belief propagation and conditional entropy decomposition. We show that our method inherits benefits from each approach to provide significantly better bounds on sum-product tasks.", "title": "" }, { "docid": "neg:1840054_3", "text": "This introductory overview tutorial on social network analysis (SNA) demonstrates through theory and practical case studies applications to research, particularly on social media, digital interaction and behavior records. NodeXL provides an entry point for non-programmers to access the concepts and core methods of SNA and allows anyone who can make a pie chart to now build, analyze and visualize complex networks.", "title": "" }, { "docid": "neg:1840054_4", "text": "In cybersecurity competitions, participants either create new or protect preconfigured information systems and then defend these systems against attack in a real-world setting. Institutions should consider important structural and resource-related issues before establishing such a competition. Critical infrastructures increasingly rely on information systems and on the Internet to provide connectivity between systems. Maintaining and protecting these systems requires an education in information warfare that doesn't merely theorize and describe such concepts. A hands-on, active learning experience lets students apply theoretical concepts in a physical environment. Craig Kaucher and John Saunders found that even for management-oriented graduate courses in information assurance, such an experience enhances the students' understanding of theoretical concepts. Cybersecurity exercises aim to provide this experience in a challenging and competitive environment. Many educational institutions use and implement these exercises as part of their computer science curriculum, and some are organizing competitions with commercial partners as capstone exercises, ad hoc hack-a-thons, and scenario-driven, multiday, defense-only competitions. 
Participants have exhibited much enthusiasm for these exercises, from the DEFCON capture-the-flag exercise to the US Military Academy's Cyber Defense Exercise (CDX). In February 2004, the US National Science Foundation sponsored the Cyber Security Exercise Workshop aimed at harnessing this enthusiasm and interest. The educators, students, and government and industry representatives attending the workshop discussed the feasibility and desirability of establishing regular cybersecurity exercises for postsecondary-level students. This article summarizes the workshop report.", "title": "" }, { "docid": "neg:1840054_5", "text": "INTRODUCTION\nIn the developing countries, diabetes mellitus as a chronic diseases, have replaced infectious diseases as the main causes of morbidity and mortality. International Diabetes Federation (IDF) recently estimates 382 million people have diabetes globally and more than 34.6 million people in the Middle East Region and this number will increase to 67.9 million by 2035. The aim of this study was to analyze Iran's research performance on diabetes in national and international context.\n\n\nMETHODS\nThis Scientometric analysis is based on the Iranian publication data in diabetes research retrieved from the Scopus citation database till the end of 2014. The string used to retrieve the data was developed using \"diabetes\" keyword in title, abstract and keywords, and finally Iran in the affiliation field was our main string.\n\n\nRESULTS\nIran's cumulative publication output in diabetes research consisted of 4425 papers from 1968 to 2014, with an average number of 96.2 papers per year and an annual average growth rate of 25.5%. Iran ranked 25th place with 4425 papers among top 25 countries with a global share of 0.72 %. Average of Iran's publication output was 6.19 citations per paper. The average citation per paper for Iranian publications in diabetes research increased from 1.63 during 1968-1999 to 10.42 for 2014.\n\n\nCONCLUSIONS\nAlthough diabetic population of Iran is increasing, number of diabetes research is not remarkable. International Diabetes Federation suggested increased funding for research in diabetes in Iran for cost-effective diabetes prevention and treatment. In addition to universal and comprehensive services for diabetes care and treatment provided by Iranian health care system, Iranian policy makers should invest more on diabetes research.", "title": "" }, { "docid": "neg:1840054_6", "text": "We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. 
We believe generative video models can impact many applications in video understanding and simulation.", "title": "" }, { "docid": "neg:1840054_7", "text": "Obstacle fusion algorithms usually perform obstacle association and gating in order to improve the obstacle position if it was detected by multiple sensors. However, this strategy is not common in multi sensor occupancy grid fusion. Thus, the quality of the fused grid, in terms of obstacle position accuracy, largely depends on the sensor with the lowest accuracy. In this paper an efficient method to associate obstacles across sensor grids is proposed. Imprecise sensors are discounted locally in cells where a more accurate sensor, that detected the same obstacle, derived free space. Furthermore, fixed discount factors to optimize false negative and false positive rates are used. Because of its generic formulation with the covariance of each sensor grid, the method is scalable to any sensor setup. The quantitative evaluation with a highly precise navigation map shows an increased obstacle position accuracy compared to standard evidential occupancy grid fusion.", "title": "" }, { "docid": "neg:1840054_8", "text": "Four experiments demonstrate effects of prosodic structure on speech production latencies. Experiments 1 to 3 exploit a modified version of the Sternberg et al. (1978, 1980) prepared speech production paradigm to look for evidence of the generation of prosodic structure during the final stages of sentence production. Experiment 1 provides evidence that prepared sentence production latency is a function of the number of phonological words that a sentence comprises when syntactic structure, number of lexical items, and number of syllables are held constant. Experiment 2 demonstrated that production latencies in Experiment 1 were indeed determined by prosodic structure rather than the number of content words that a sentence comprised. The phonological word effect was replicated in Experiment 3 using utterances with a different intonation pattern and phrasal structure. Finally, in Experiment 4, an on-line version of the sentence production task provides evidence for the phonological word as the preferred unit of articulation during the on-line production of continuous speech. Our findings are consistent with the hypothesis that the phonological word is a unit of processing during the phonological encoding of connected speech. q 1997 Academic Press", "title": "" }, { "docid": "neg:1840054_9", "text": "The wheelchair is the major means of transport for physically disabled people. However, it cannot overcome architectural barriers such as curbs and stairs. In this paper, the authors proposed a method to avoid falling down of a wheeled inverted pendulum type robotic wheelchair for climbing stairs. The problem of this system is that the feedback gain of the wheels cannot be set high due to modeling errors and gear backlash, which results in the movement of wheels. Therefore, the wheels slide down the stairs or collide with the side of the stairs, and finally the wheelchair falls down. To avoid falling down, the authors proposed a slider control strategy based on skyhook model in order to decrease the movement of wheels, and a rotary link control strategy based on the staircase dimensions in order to avoid collision or slide down. The effectiveness of the proposed fall avoidance control strategy was validated by ODE simulations and the prototype wheelchair. 
Keywords—EPW, fall avoidance control, skyhook, wheeled inverted pendulum.", "title": "" }, { "docid": "neg:1840054_10", "text": "A fundamental question in frontal lobe function is how motivational and emotional parameters of behavior apply to executive processes. Recent advances in mood and personality research and the technology and methodology of brain research provide opportunities to address this question empirically. Using event-related-potentials to track error monitoring in real time, the authors demonstrated that variability in the amplitude of the error-related negativity (ERN) is dependent on mood and personality variables. College students who are high on negative affect (NA) and negative emotionality (NEM) displayed larger ERN amplitudes early in the experiment than participants who are low on these dimensions. As the high-NA and -NEM participants disengaged from the task, the amplitude of the ERN decreased. These results reveal that affective distress and associated behavioral patterns are closely related with frontal lobe executive functions.", "title": "" }, { "docid": "neg:1840054_11", "text": "CLINICAL HISTORY A 54-year-old white female was seen with a 10-year history of episodes of a burning sensation of the left ear. The episodes are preceded by nausea and a hot feeling for about 15 seconds and then the left ear becomes visibly red for an average of about 1 hour, with a range from about 30 minutes to 2 hours. About once every 2 years, she would have a flurry of episodes occurring over about a 1-month period during which she would average about five episodes with a range of 1 to 6. There was also an 18-year history of migraine without aura occurring about once a year. At the age of 36 years, she developed left-sided pulsatile tinnitus. A cerebral arteriogram revealed a proximal left internal carotid artery occlusion of uncertain etiology after extensive testing. An MRI scan at the age of 45 years was normal. Neurological examination was normal. A carotid ultrasound study demonstrated complete occlusion of the left internal carotid artery and a normal right. Question.—What is the diagnosis?", "title": "" }, { "docid": "neg:1840054_12", "text": "• Users may freely distribute the URL that is used to identify this publication. • Users may download and/or print one copy of the publication from the University of Birmingham research portal for the purpose of private study or non-commercial research. • User may use extracts from the document in line with the concept of ‘fair dealing’ under the Copyright, Designs and Patents Act 1988 (?) • Users may not further distribute the material nor use it for the purposes of commercial gain.", "title": "" }, { "docid": "neg:1840054_13", "text": "Machine comprehension of text is the overarching goal of a great deal of research in natural language processing. The Machine Comprehension Test (Richardson et al., 2013) was recently proposed to assess methods on an open-domain, extensible, and easy-to-evaluate task consisting of two datasets. In this paper we develop a lexical matching method that takes into account multiple context windows, question types and coreference resolution. We show that the proposed method outperforms the baseline of Richardson et al. (2013), and despite its relative simplicity, is comparable to recent work using machine learning. We hope that our approach will inform future work on this task. 
Furthermore, we argue that MC500 is harder than MC160 due to the way question answer pairs were created.", "title": "" }, { "docid": "neg:1840054_14", "text": "Intensity inhomogeneity often occurs in real-world images, which presents a considerable challenge in image segmentation. The most widely used image segmentation algorithms are region-based and typically rely on the homogeneity of the image intensities in the regions of interest, which often fail to provide accurate segmentation results due to the intensity inhomogeneity. This paper proposes a novel region-based method for image segmentation, which is able to deal with intensity inhomogeneities in the segmentation. First, based on the model of images with intensity inhomogeneities, we derive a local intensity clustering property of the image intensities, and define a local clustering criterion function for the image intensities in a neighborhood of each point. This local clustering criterion function is then integrated with respect to the neighborhood center to give a global criterion of image segmentation. In a level set formulation, this criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, by minimizing this energy, our method is able to simultaneously segment the image and estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction (or bias correction). Our method has been validated on synthetic images and real images of various modalities, with desirable performance in the presence of intensity inhomogeneities. Experiments show that our method is more robust to initialization, faster and more accurate than the well-known piecewise smooth model. As an application, our method has been used for segmentation and bias correction of magnetic resonance (MR) images with promising results.", "title": "" }, { "docid": "neg:1840054_15", "text": "This paper presents a formulation to the obstacle avoidance problem for semi-autonomous ground vehicles. The planning and tracking problems have been divided into a two-level hierarchical controller. The high level solves a nonlinear model predictive control problem to generate a feasible and obstacle free path. It uses a nonlinear vehicle model and utilizes a coordinate transformation which uses vehicle position along a path as the independent variable. The low level uses a higher fidelity model and solves the MPC problem with a sequential quadratic programming approach to track the planned path. Simulations show the method’s ability to safely avoid multiple obstacles while tracking the lane centerline. Experimental tests on a semi-autonomous passenger vehicle driving at high speed on ice show the effectiveness of the approach.", "title": "" }, { "docid": "neg:1840054_16", "text": "Most theorizing on the relationship between corporate social/environmental performance (CSP) and corporate financial performance (CFP) assumes that the current evidence is too fractured or too variable to draw any generalizable conclusions. With this integrative, quantitative study, we intend to show that the mainstream claim that we have little generalizable knowledge about CSP and CFP is built on shaky grounds. 
Providing a methodologically more rigorous review than previous efforts, we conduct a meta-analysis of 52 studies (which represent the population of prior quantitative inquiry) yielding a total sample size of 33,878 observations. The metaanalytic findings suggest that corporate virtue in the form of social responsibility and, to a lesser extent, environmental responsibility is likely to pay off, although the operationalizations of CSP and CFP also moderate the positive association. For example, CSP appears to be more highly correlated with accounting-based measures of CFP than with market-based indicators, and CSP reputation indices are more highly correlated with CFP than are other indicators of CSP. This meta-analysis establishes a greater degree of certainty with respect to the CSP–CFP relationship than is currently assumed to exist by many business scholars.", "title": "" }, { "docid": "neg:1840054_17", "text": "In this paper we present a prototype of a Microwave Imaging (MI) system for breast cancer detection. Our system is based on low-cost off-the-shelf microwave components, custom-made antennas, and a small form-factor processing system with an embedded Field-Programmable Gate Array (FPGA) for accelerating the execution of the imaging algorithm. We show that our system can compete with a vector network analyzer in terms of accuracy, and it is more than 20x faster than a high-performance server at image reconstruction.", "title": "" }, { "docid": "neg:1840054_18", "text": "With its three-term functionality offering treatment of both transient and steady-state responses, proportional-integral-derivative (PID) control provides a generic and efficient solution to real-world control problems. The wide application of PID control has stimulated and sustained research and development to \"get the best out of PID\", and \"the search is on to find the next key technology or methodology for PID tuning\". This article presents remedies for problems involving the integral and derivative terms. PID design objectives, methods, and future directions are discussed. Subsequently, a computerized simulation-based approach is presented, together with illustrative design results for first-order, higher order, and nonlinear plants. Finally, we discuss differences between academic research and industrial practice, so as to motivate new research directions in PID control.", "title": "" }, { "docid": "neg:1840054_19", "text": "With the development of location-based social networks, an increasing amount of individual mobility data accumulate over time. The more mobility data are collected, the better we can understand the mobility patterns of users. At the same time, we know a great deal about online social relationships between users, providing new opportunities for mobility prediction. This paper introduces a noveltyseeking driven predictive framework for mining location-based social networks that embraces not only a bunch of Markov-based predictors but also a series of location recommendation algorithms. The core of this predictive framework is the cooperation mechanism between these two distinct models, determining the propensity of seeking novel and interesting locations.", "title": "" } ]
1840055
A new approach to wafer sawing: stealth laser dicing technology
[ { "docid": "pos:1840055_0", "text": "\"Stealth Dicing (SD) \" was developed to solve such inherent problems of dicing process as debris contaminants and unnecessary thermal damage on work wafer. In SD, laser beam power of transmissible wavelength is absorbed only around focal point in the wafer by utilizing temperature dependence of absorption coefficient of the wafer. And these absorbed power forms modified layer in the wafer, which functions as the origin of separation in followed separation process. Since only the limited interior region of a wafer is processed by laser beam irradiation, damages and debris contaminants can be avoided in SD. Besides characteristics of devices will not be affected. Completely dry process of SD is another big advantage over other dicing methods.", "title": "" }, { "docid": "pos:1840055_1", "text": "In the semiconductor market, the trend of packaging for die stacking technology moves to high density with thinner chips and higher capacity of memory devices. Moreover, the wafer sawing process is becoming more important for thin wafer, because its process speed tends to affect sawn quality and yield. ULK (Ultra low-k) device could require laser grooving application to reduce the stress during wafer sawing. Furthermore under 75um-thick thin low-k wafer is not easy to use the laser grooving application. So, UV laser dicing technology that is very useful tool for Si wafer was selected as full cut application, which has been being used on low-k wafer as laser grooving method.", "title": "" } ]
[ { "docid": "neg:1840055_0", "text": "The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher’s flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU memory can simultaneously be utilized for training larger DNNs. Our virtualized DNN (vDNN) reduces the average memory usage of AlexNet by 61% and OverFeat by 83%, a significant reduction in memory requirements of DNNs. Similar experiments on VGG-16, one of the deepest and memory hungry DNNs to date, demonstrate the memory-efficiency of our proposal. vDNN enables VGG-16 with batch size 256 (requiring 28 GB of memory) to be trained on a single NVIDIA K40 GPU card containing 12 GB of memory, with 22% performance loss compared to a hypothetical GPU with enough memory to hold the entire DNN.", "title": "" }, { "docid": "neg:1840055_1", "text": "Research in texture recognition often concentrates on the problem of material recognition in uncluttered conditions, an assumption rarely met by applications. In this work we conduct a first study of material and describable texture attributes recognition in clutter, using a new dataset derived from the OpenSurface texture repository. Motivated by the challenge posed by this problem, we propose a new texture descriptor, FV-CNN, obtained by Fisher Vector pooling of a Convolutional Neural Network (CNN) filter bank. FV-CNN substantially improves the state-of-the-art in texture, material and scene recognition. Our approach achieves 79.8% accuracy on Flickr material dataset and 81% accuracy on MIT indoor scenes, providing absolute gains of more than 10% over existing approaches. FV-CNN easily transfers across domains without requiring feature adaptation as for methods that build on the fully-connected layers of CNNs. Furthermore, FV-CNN can seamlessly incorporate multi-scale information and describe regions of arbitrary shapes and sizes. Our approach is particularly suited at localizing “stuff” categories and obtains state-of-the-art results on MSRC segmentation dataset, as well as promising results on recognizing materials and surface attributes in clutter on the OpenSurfaces dataset.", "title": "" }, { "docid": "neg:1840055_2", "text": "Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a ‘good’ hierarchical clustering is one that minimizes a particular cost function [21]. He showed that this cost function has certain desirable properties: in order to achieve optimal cost, disconnected components (namely, dissimilar elements) must be separated at higher levels of the hierarchy and when the similarity between data elements is identical, all clusterings achieve the same cost. We take an axiomatic approach to defining ‘good’ objective functions for both similarity and dissimilarity-based hierarchical clustering. 
We characterize a set of admissible objective functions having the property that when the input admits a ‘natural’ ground-truth hierarchical clustering, the ground-truth clustering has an optimal value. We show that this set includes the objective function introduced by Dasgupta. Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better and faster algorithms for hierarchical clustering. We also initiate a beyond worst-case analysis of the complexity of the problem, and design algorithms for this scenario.", "title": "" }, { "docid": "neg:1840055_3", "text": "One task of heterogeneous face recognition is to match a near infrared (NIR) face image to a visible light (VIS) image. In practice, there are often a few pairwise NIR-VIS face images but it is easy to collect lots of VIS face images. Therefore, how to use these unpaired VIS images to improve the NIR-VIS recognition accuracy is an ongoing issue. This paper presents a deep TransfeR NIR-VIS heterogeneous facE recognition neTwork (TRIVET) for NIR-VIS face recognition. First, to utilize large numbers of unpaired VIS face images, we employ the deep convolutional neural network (CNN) with ordinal measures to learn discriminative models. The ordinal activation function (Max-Feature-Map) is used to select discriminative features and make the models robust and lighten. Second, we transfer these models to NIR-VIS domain by fine-tuning with two types of NIR-VIS triplet loss. The triplet loss not only reduces intra-class NIR-VIS variations but also augments the number of positive training sample pairs. It makes fine-tuning deep models on a small dataset possible. The proposed method achieves state-of-the-art recognition performance on the most challenging CASIA NIR-VIS 2.0 Face Database. It achieves a new record on rank-1 accuracy of 95.74% and verification rate of 91.03% at FAR=0.001. It cuts the error rate in comparison with the best accuracy [27] by 69%.", "title": "" }, { "docid": "neg:1840055_4", "text": "Fast and accurate side-chain conformation prediction is important for homology modeling, ab initio protein structure prediction, and protein design applications. Many methods have been presented, although only a few computer programs are publicly available. The SCWRL program is one such method and is widely used because of its speed, accuracy, and ease of use. A new algorithm for SCWRL is presented that uses results from graph theory to solve the combinatorial problem encountered in the side-chain prediction problem. In this method, side chains are represented as vertices in an undirected graph. Any two residues that have rotamers with nonzero interaction energies are considered to have an edge in the graph. The resulting graph can be partitioned into connected subgraphs with no edges between them. These subgraphs can in turn be broken into biconnected components, which are graphs that cannot be disconnected by removal of a single vertex. The combinatorial problem is reduced to finding the minimum energy of these small biconnected components and combining the results to identify the global minimum energy conformation. This algorithm is able to complete predictions on a set of 180 proteins with 34342 side chains in <7 min of computer time. The total chi(1) and chi(1 + 2) dihedral angle accuracies are 82.6% and 73.7% using a simple energy function based on the backbone-dependent rotamer library and a linear repulsive steric energy. 
The new algorithm will allow for use of SCWRL in more demanding applications such as sequence design and ab initio structure prediction, as well addition of a more complex energy function and conformational flexibility, leading to increased accuracy.", "title": "" }, { "docid": "neg:1840055_5", "text": "BACKGROUND\nAbnormal forms of grief, currently referred to as complicated grief or prolonged grief disorder, have been discussed extensively in recent years. While the diagnostic criteria are still debated, there is no doubt that prolonged grief is disabling and may require treatment. To date, few interventions have demonstrated efficacy.\n\n\nMETHODS\nWe investigated whether outpatients suffering from prolonged grief disorder (PGD) benefit from a newly developed integrative cognitive behavioural therapy for prolonged grief (PG-CBT). A total of 51 patients were randomized into two groups, stratified by the type of death and their relationship to the deceased; 24 patients composed the treatment group and 27 patients composed the wait list control group (WG). Treatment consisted of 20-25 sessions. Main outcome was change in grief severity; secondary outcomes were reductions in general psychological distress and in comorbidity.\n\n\nRESULTS\nPatients on average had 2.5 comorbid diagnoses in addition to PGD. Between group effect sizes were large for the improvement of grief symptoms in treatment completers (Cohen׳s d=1.61) and in the intent-to-treat analysis (d=1.32). Comorbid depressive symptoms also improved in PG-CBT compared to WG. The completion rate was 79% in PG-CBT and 89% in WG.\n\n\nLIMITATIONS\nThe major limitations of this study were a small sample size and that PG-CBT took longer than the waiting time.\n\n\nCONCLUSIONS\nPG-CBT was found to be effective with an acceptable dropout rate. Given the number of bereaved people who suffer from PGD, the results are of high clinical relevance.", "title": "" }, { "docid": "neg:1840055_6", "text": "There are over 1.2 million Australians registered as having vision impairment. In most cases, vision impairment severely affects the mobility and orientation of the person, resulting in loss of independence and feelings of isolation. GPS technology and its applications have now become omnipresent and are used daily to improve and facilitate the lives of many. Although a number of products specifically designed for the Blind and Vision Impaired (BVI) and relying on GPS technology have been launched, this domain is still a niche and ongoing R&D is needed to bring all the benefits of GPS in terms of information and mobility to the BVI. The limitations of GPS indoors and in urban canyons have led to the development of new systems and signals that bridge the gap and provide positioning in those environments. Although still in their infancy, there is no doubt indoor positioning technologies will one day become as pervasive as GPS. It is therefore important to design those technologies with the BVI in mind, to make them accessible from scratch. This paper will present an indoor positioning system that has been designed in that way, examining the requirements of the BVI in terms of accuracy, reliability and interface design. The system runs locally on a mid-range smartphone and relies at its core on a Kalman filter that fuses the information of all the sensors available on the phone (Wi-Fi chipset, accelerometers and magnetic field sensor). 
Each part of the system is tested separately as well as the final solution quality.", "title": "" }, { "docid": "neg:1840055_7", "text": "The relationship between the different approaches to quality in ISO standards is reviewed, contrasting the manufacturing approach to quality in ISO 9000 (quality is conformance to requirements) with the product orientation of ISO 8402 (quality is the presence of specified features) and the goal orientation of quality in use in ISO 14598-1 (quality is meeting user needs). It is shown how ISO 9241-11 enables quality in use to be measured, and ISO 13407 defines the activities necessary in the development lifecycle for achieving quality in use. APPROACHES TO QUALITY Although the term quality seems self-explanatory in everyday usage, in practice there are many different views of what it means and how it should be achieved as part of a software production process. ISO DEFINITIONS OF QUALITY ISO 9000 is concerned with quality assurance to provide confidence that a product will satisfy given requirements. Interpreted literally, this puts quality in the hands of the person producing the requirements specification a product may be deemed to have quality even if the requirements specification is inappropriate. This is one of the interpretations of quality reviewed by Garvin (1984). He describes it as Manufacturing quality: a product which conforms to specified requirements. A different emphasis is given in ISO 8402 which defines quality as the totality of characteristics of an entity that bear on its ability to satisfy stated and implied needs. This is an example of what Garvin calls Product quality: an inherent characteristic of the product determined by the presence or absence of measurable product attributes. Many organisations would like to be able to identify those attributes which can be designed into a product or evaluated to ensure quality. ISO 9126 (1992) takes this approach, and categorises the attributes of software quality as: functionality, efficiency, usability, reliability, maintainability and portability. To the extent that user needs are well-defined and common to the intended users this implies that quality is an inherent attribute of the product. However, if different groups of users have different needs, then they may require different characteristics for a product to have quality for their purposes. Assessment of quality thus becomes dependent on the perception of the user. USER PERCEIVED QUALITY AND QUALITY IN USE Garvin defines User perceived quality as the combination of product attributes which provide the greatest satisfaction to a specified user. Most approaches to quality do not deal explicitly with userperceived quality. User-perceived quality is regarded as an intrinsically inaccurate judgement of product quality. For instance Garvin, 1984, observes that \"Perceptions of quality can be as subjective as assessments of aesthetics\". However, there is a more fundamental reason for being concerned with user-perceived quality. Products can only have quality in relation to their intended purpose. For instance, the quality attributes required of an office carpet may be very different from those required of a bedroom carpet. For conventional products this is assumed to be selfevident. For general-purpose products it creates a problem. A text editor could be used by programmers for producing code, or by secretaries for producing letters. Some of the quality attributes required will be the same, but others will be different. 
Even for a word processor, the functionality, usability and efficiency attributes required by a trained user may be very different from those required by an occasional user. Reconciling work on usability with traditional approaches to software quality has led to another broader and potentially important view of quality which has been outside the scope of most existing quality systems. This embraces user-perceived quality by relating quality to the needs of the user of an interactive product. ISO 14598-1 defines External quality as the extent to which a product satisfies stated and implied needs when used under specified conditions. This moves the focus of quality from the product in isolation to the satisfaction of the needs of particular users in particular situations. The purpose of a product is to help users achieve particular goals, which leads to the definition of Quality in use in ISO DIS 14598-1 as the effectiveness, efficiency and satisfaction with which specified users can achieve specified goals in specified environments. A product meets the requirements of the user if it is effective (accurate and complete), efficient in use of time and resources, and satisfying, regardless of the specific attributes it possesses. Specifying requirements in terms of performance has many benefits. This is recognised in the rules for drafting ISO standards (ISO, 1992) which suggest that to provide design flexibility, standards should specify the performance required of a product rather than the technical attributes needed to achieve the performance. Quality in use is a means of applying this principle to the performance which a product enables a human to achieve. An example is the ISO standard for VDT display screens (ISO 9241-3). The purpose of the standard is to ensure that the screen has the technical attributes required to achieve quality in use. The current version of the standard is specified in terms of the technical attributes of a traditional CRT. It is intended to extend the standard to permit alternative new technology screens to conform if it can be demonstrated that users are as effective, efficient and satisfied with the new screen as with an existing screen which meets the technical specifications. SOFTWARE QUALITY IN USE: ISO 14598-1 The purpose of designing an interactive system is to meet the needs of users: to provide quality in use (see Figure 1, from ISO/IEC 14598-1). The internal software attributes will determine the quality of a software product in use in a particular context. Software quality attributes are the cause, quality in use the effect. Quality in use is (or at least should be) the objective, software product quality is the means of achieving it. [Figure 1, after ISO/IEC 14598-1: software attributes determine internal quality, internal quality requirements shape external quality and system behaviour, and external quality requirements in turn determine quality in use; the lifecycle runs from user needs through specification, design and development, to operation.]", "title": "" }, { "docid": "neg:1840055_8", "text": "In this paper, a Y-Δ hybrid connection for a high-voltage induction motor is described. Low winding harmonic content is achieved by careful consideration of the interaction between the Y- and Δ-connected three-phase winding sets so that the magnetomotive force (MMF) in the air gap is close to sinusoid. Essentially, the two winding sets operate in a six-phase mode. This paper goes on to verify that the fundamental distribution coefficient for the stator MMF is enhanced compared to a standard three-phase winding set.
The design method for converting a conventional double-layer lap winding in a high-voltage induction motor into a Y-Δ hybrid lap winding is described using standard winding theory as often applied to small- and medium-sized motors. The main parameters addressed when designing the winding are the conductor wire gauge, coil turns, and parallel winding branches in the Y and Δ connections. A winding design scheme for a 1250-kW 6-kV induction motor is put forward and experimentally validated; the results show that the efficiency can be raised effectively without increasing the cost.", "title": "" }, { "docid": "neg:1840055_9", "text": "Since the beginning of the epidemic, human immunodeficiency virus (HIV) has infected around 70 million people worldwide, most of whom reside is sub-Saharan Africa. There have been very promising developments in the treatment of HIV with anti-retroviral drug cocktails. However, drug resistance to anti-HIV drugs is emerging, and many people infected with HIV have adverse reactions or do not have ready access to currently available HIV chemotherapies. Thus, there is a need to discover new anti-HIV agents to supplement our current arsenal of anti-HIV drugs and to provide therapeutic options for populations with limited resources or access to currently efficacious chemotherapies. Plant-derived natural products continue to serve as a reservoir for the discovery of new medicines, including anti-HIV agents. This review presents a survey of plants that have shown anti-HIV activity, both in vitro and in vivo.", "title": "" }, { "docid": "neg:1840055_10", "text": "With companies such as Netflix and YouTube accounting for more than 50% of the peak download traffic on North American fixed networks in 2015, video streaming represents a significant source of Internet traffic. Multimedia delivery over the Internet has evolved rapidly over the past few years. The last decade has seen video streaming transitioning from User Datagram Protocol to Transmission Control Protocol-based technologies. Dynamic adaptive streaming over HTTP (DASH) has recently emerged as a standard for Internet video streaming. A range of rate adaptation mechanisms are proposed for DASH systems in order to deliver video quality that matches the throughput of dynamic network conditions for a richer user experience. This survey paper looks at emerging research into the application of client-side, server-side, and in-network rate adaptation techniques to support DASH-based content delivery. We provide context and motivation for the application of these techniques and review significant works in the literature from the past decade. These works are categorized according to the feedback signals used and the end-node that performs or assists with the adaptation. We also provide a review of several notable video traffic measurement and characterization studies and outline open research questions in the field.", "title": "" }, { "docid": "neg:1840055_11", "text": "Passwords continue to prevail on the web as the primary method for user authentication despite their well-known security and usability drawbacks. Password managers offer some improvement without requiring server-side changes. In this paper, we evaluate the security of dual-possession authentication, an authentication approach offering encrypted storage of passwords and theft-resistance without the use of a master password. We further introduce Tapas, a concrete implementation of dual-possession authentication leveraging a desktop computer and a smartphone. 
Tapas requires no server-side changes to websites, no master password, and protects all the stored passwords in the event either the primary or secondary device (e.g., computer or phone) is stolen. To evaluate the viability of Tapas as an alternative to traditional password managers, we perform a 30 participant user study comparing Tapas to two configurations of Firefox's built-in password manager. We found users significantly preferred Tapas. We then improve Tapas by incorporating feedback from this study, and reevaluate it with an additional 10 participants.", "title": "" }, { "docid": "neg:1840055_12", "text": "This paper presents the analysis and operation of a three-phase pulsewidth modulation rectifier system formed by the star-connection of three single-phase boost rectifier modules (Y-rectifier) without a mains neutral point connection. The current forming operation of the Y-rectifier is analyzed and it is shown that the phase current has the same high quality and low ripple as the Vienna rectifier. The isolated star point of Y-rectifier results in a mutual coupling of the individual phase module outputs and has to be considered for control of the module dc link voltages. An analytical expression for the coupling coefficients of the Y-rectifier phase modules is derived. Based on this expression, a control concept with reduced calculation effort is designed and it provides symmetric loading of the phase modules and solves the balancing problem of the dc link voltages. The analysis also provides insight that enables the derivation of a control concept for two phase operation, such as in the case of a mains phase failure. The theoretical and simulated results are proved by experimental analysis on a fully digitally controlled, 5.4-kW prototype.", "title": "" }, { "docid": "neg:1840055_13", "text": "• • The order of authorship on this paper is random and contributions were equal. We would like to thank Ron Burt, Jim March and Mike Tushman for many helpful suggestions. Olav Sorenson provided particularly extensive comments on this paper. We would like to acknowledge the financial support of the University of Chicago, Graduate School of Business and a grant from the Kauffman Center for Entrepreneurial Leadership. Clarifying the relationship between organizational aging and innovation processes is an important step in understanding the dynamics of high-technology industries, as well as for resolving debates in organizational theory about the effects of aging on organizational functioning. We argue that aging has two seemingly contradictory consequences for organizational innovation. First, we believe that aging is associated with increases in firms' rates of innovation. Simultaneously, however, we argue that the difficulties of keeping pace with incessant external developments causes firms' innovative outputs to become obsolete relative to the most current environmental demands. These seemingly contradictory outcomes are intimately related and reflect inherent trade-offs in organizational learning and innovation processes. Multiple longitudinal analyses of the relationship between firm age and patenting behavior in the semiconductor and biotechnology industries lend support to these arguments. Introduction In an increasingly knowledge-based economy, pinpointing the factors that shape the ability of organizations to produce influential ideas and innovations is a central issue for organizational studies. 
Among all organizational outputs, innovation is fundamental not only because of its direct impact on the viability of firms, but also because of its profound effects on the paths of social and economic change. In this paper, we focus on a ubiquitous organizational process, aging, and examine its multifaceted influence on organizational innovation. In so doing, we address an important unresolved issue in organizational theory, namely the nature of the relationship between aging and organizational behavior (Hannan 1998). Evidence clarifying the relationship between organizational aging and innovation promises to improve our understanding of the organizational dynamics of high-technology markets, and in particular the dynamics of technological leadership. For instance, consider the possibility that aging has uniformly positive consequences for innovative activity: on the foundation of accumulated experience, older firms innovate more frequently, and their innovations have greater significance than those of younger enterprises. In this scenario, technological change paradoxically may be associated with organizational stability, as incumbent organizations come to dominate the technological frontier and their preeminence only increases with their tenure. 1 Now consider the …", "title": "" }, { "docid": "neg:1840055_14", "text": "Chemical fingerprints are used to represent chemical molecules by recording the presence or absence, or by counting the number of occurrences, of particular features or substructures, such as labeled paths in the 2D graph of bonds, of the corresponding molecule. These fingerprint vectors are used to search large databases of small molecules, currently containing millions of entries, using various similarity measures, such as the Tanimoto or Tversky's measures and their variants. Here, we derive simple bounds on these similarity measures and show how these bounds can be used to considerably reduce the subset of molecules that need to be searched. We consider both the case of single-molecule and multiple-molecule queries, as well as queries based on fixed similarity thresholds or aimed at retrieving the top K hits. We study the speedup as a function of query size and distribution, fingerprint length, similarity threshold, and database size |D| and derive analytical formulas that are in excellent agreement with empirical values. The theoretical considerations and experiments show that this approach can provide linear speedups of one or more orders of magnitude in the case of searches with a fixed threshold, and achieve sublinear speedups in the range of O(|D|^0.6) for the top K hits in current large databases. This pruning approach yields subsecond search times across the 5 million compounds in the ChemDB database, without any loss of accuracy.", "title": "" }, { "docid": "neg:1840055_15", "text": "BACKGROUND\nThe recovery period for patients who have been in an intensive care unit is often prolonged and suboptimal. Anxiety, depression and post-traumatic stress disorder are common psychological problems. Intensive care staff offer various types of intensive aftercare.
Intensive care follow-up aftercare services are not standard clinical practice in Norway.\n\n\nOBJECTIVE\nThe overall aim of this study is to investigate how adult patients experience theirintensive care stay their recovery period, and the usefulness of an information pamphlet.\n\n\nMETHOD\nA qualitative, exploratory research with semi-structured interviews of 29 survivors after discharge from intensive care and three months after discharge from the hospital.\n\n\nRESULTS\nTwo main themes emerged: \"Being on an unreal, strange journey\" and \"Balancing between who I was and who I am\" Patients' recollection of their intensive care stay differed greatly. Continuity of care and the nurse's ability to see and value individual differences was highlighted. The information pamphlet helped intensive care survivors understand that what they went through was normal.\n\n\nCONCLUSIONS\nContinuity of care and an individual approach is crucial to meet patients' uniqueness and different coping mechanisms. Intensive care survivors and their families must be included when information material and rehabilitation programs are designed and evaluated.", "title": "" }, { "docid": "neg:1840055_16", "text": "This paper focuses on the design, fabrication and characterization of unimorph actuators for a microaerial flapping mechanism. PZT-5H and PZN-PT are investigated as piezoelectric layers in the unimorph actuators. Design issues for microaerial flapping actuators are discussed, and criteria for the optimal dimensions of actuators are determined. For low power consumption actuation, a square wave based electronic driving circuit is proposed. Fabricated piezoelectric unimorphs are characterized by an optical measurement system in quasi-static and dynamic mode. Experimental performance of PZT5H and PZN-PT based unimorphs is compared with desired design specifications. A 1 d.o.f. flapping mechanism with a PZT-5H unimorph is constructed, and 180◦ stroke motion at 95 Hz is achieved. Thus, it is shown that unimorphs could be promising flapping mechanism actuators.", "title": "" }, { "docid": "neg:1840055_17", "text": "To address the problem of underexposure, underrepresentation, and underproduction of diverse professionals in the field of computing, we target middle school education using an idea that combines computational thinking with dance and movement choreography. This lightning talk delves into a virtual reality education and entertainment application named Virtual Environment Interactions (VEnvI). Our in vivo study examines how VEnvI can be used to teach fundamental computer science concepts such as sequences, loops, variables, conditionals, functions, and parallel programming. We aim to reach younger students through a fun and intuitive interface for choreographing dance movements with a virtual character. Our study contrasts the highly immersive and embodied virtual reality metaphor of using VEnvI with a non-immersive desktop metaphor. Additionally, we examine the effects of user attachment by comparing the learning results gained with customizable virtual characters in contrast with character presets. 
By analyzing qualitative and quantitative user responses measuring cognition, presence, usability, and satisfaction, we hope to find how virtual reality can enhance interest in the field of computer science among middle school students.", "title": "" }, { "docid": "neg:1840055_18", "text": "This paper presents a digital low-dropout regulator (D-LDO) with a proposed transient-response boost technique, which enables the reduction of transient response time, as well as overshoot/undershoot, when the load current is abruptly drawn. The proposed D-LDO detects the deviation of the output voltage by overshoot/undershoot, and increases its loop gain, for the time that the deviation is beyond a limit. Once the output voltage is settled again, the loop gain is returned. With the D-LDO fabricated on an 110-nm CMOS technology, we measured its settling time and peak of undershoot, which were reduced by 60% and 72%, respectively, compared with and without the transient-response boost mode. Using the digital logic gates, the chip occupies a small area of 0.04 mm2, and it achieves a maximum current efficiency of 99.98%, by consuming the quiescent current of 15 μA at 0.7-V input voltage.", "title": "" } ]
1840056
Computation offloading and resource allocation for low-power IoT edge devices
[ { "docid": "pos:1840056_0", "text": "Mobile Edge Computing is an emerging technology that provides cloud and IT services within the close proximity of mobile subscribers. Traditional telecom network operators perform traffic control flow (forwarding and filtering of packets), but in Mobile Edge Computing, cloud servers are also deployed in each base station. Therefore, network operator has a great responsibility in serving mobile subscribers. Mobile Edge Computing platform reduces network latency by enabling computation and storage capacity at the edge network. It also enables application developers and content providers to serve context-aware services (such as collaborative computing) by using real time radio access network information. Mobile and Internet of Things devices perform computation offloading for compute intensive applications, such as image processing, mobile gaming, to leverage the Mobile Edge Computing services. In this paper, some of the promising real time Mobile Edge Computing application scenarios are discussed. Later on, a state-of-the-art research efforts on Mobile Edge Computing domain is presented. The paper also presents taxonomy of Mobile Edge Computing, describing key attributes. Finally, open research challenges in successful deployment of Mobile Edge Computing are identified and discussed.", "title": "" }, { "docid": "pos:1840056_1", "text": "The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will gain attention from the community and inspire more research in this direction.", "title": "" }, { "docid": "pos:1840056_2", "text": "Merging the virtual World Wide Web with nearby physical devices that are part of the Internet of Things gives anyone with a mobile device and the appropriate authorization the power to monitor or control anything.", "title": "" }, { "docid": "pos:1840056_3", "text": "Despite increasing usage of mobile computing, exploiting its full potential is difficult due to its inherent problems such as resource scarcity, frequent disconnections, and mobility. Mobile cloud computing can address these problems by executing mobile applications on resource providers external to the mobile device. In this paper, we provide an extensive survey of mobile cloud computing research, while highlighting the specific concerns in mobile cloud computing. We present a taxonomy based on the key issues in this area, and discuss the different approaches taken to tackle these issues. We conclude the paper with a critical analysis of challenges that have not yet been fully met, and highlight directions for", "title": "" }, { "docid": "pos:1840056_4", "text": "Mobile Edge Computing (MEC), a new concept that emerged about a year ago, integrating the IT and the Telecom worlds will have a great impact on the openness of the Telecom market. 
Furthermore, the virtualization revolution that has enabled the Cloud computing success will benefit the Telecom domain, which in turn will be able to support the IaaS (Infrastructure as a Service). The main objective of the MEC solution is to export some Cloud capabilities to the user's proximity, decreasing the latency, augmenting the available bandwidth and decreasing the load on the core network. On the other hand, the Internet of Things (IoT), the Internet of the future, has benefited from the proliferation of mobile phone usage. Many mobile applications have been developed to connect a world of things (wearables, home automation systems, sensors, RFID tags etc.) to the Internet. Even if it is not a complete solution for a scalable IoT architecture, time-sensitive IoT applications (e-healthcare, real-time monitoring, etc.) will profit from the MEC architecture. Furthermore, IoT can extend this paradigm to other areas (e.g. Vehicular Ad-hoc NETworks) with the use of Software Defined Network (SDN) orchestration to cope with the challenges hindering the real deployment of IoT, as we will illustrate in this paper.", "title": "" } ]
[ { "docid": "neg:1840056_0", "text": "Software-based MMU emulation lies at the heart of outof-VM live memory introspection, an important technique in the cloud setting that applications such as live forensics and intrusion detection depend on. Due to the emulation, the software-based approach is much slower compared to native memory access by the guest VM. The slowness not only results in undetected transient malicious behavior, but also inconsistent memory view with the guest; both undermine the effectiveness of introspection. We propose the immersive execution environment (ImEE) with which the guest memory is accessed at native speed without any emulation. Meanwhile, the address mappings used within the ImEE are ensured to be consistent with the guest throughout the introspection session. We have implemented a prototype of the ImEE on Linux KVM. The experiment results show that ImEE-based introspection enjoys a remarkable speed up, performing several hundred times faster than the legacy method. Hence, this design is especially useful for realtime monitoring, incident response and high-intensity introspection.", "title": "" }, { "docid": "neg:1840056_1", "text": "Instance weighting has been widely applied to phrase-based machine translation domain adaptation. However, it is challenging to be applied to Neural Machine Translation (NMT) directly, because NMT is not a linear model. In this paper, two instance weighting technologies, i.e., sentence weighting and domain weighting with a dynamic weight learning strategy, are proposed for NMT domain adaptation. Empirical results on the IWSLT EnglishGerman/French tasks show that the proposed methods can substantially improve NMT performance by up to 2.7-6.7 BLEU points, outperforming the existing baselines by up to 1.6-3.6 BLEU points.", "title": "" }, { "docid": "neg:1840056_2", "text": "We propose a method to improve the translation of pronouns by resolving their coreference to prior mentions. We report results using two different co-reference resolution methods and point to remaining challenges.", "title": "" }, { "docid": "neg:1840056_3", "text": "Although motorcycle safety helmets are known for preventing head injuries, in many countries, the use of motorcycle helmets is low due to the lack of police power to enforcing helmet laws. This paper presents a system which automatically detect motorcycle riders and determine that they are wearing safety helmets or not. The system extracts moving objects and classifies them as a motorcycle or other moving objects based on features extracted from their region properties using K-Nearest Neighbor (KNN) classifier. The heads of the riders on the recognized motorcycle are then counted and segmented based on projection profiling. The system classifies the head as wearing a helmet or not using KNN based on features derived from 4 sections of segmented head region. Experiment results show an average correct detection rate for near lane, far lane, and both lanes as 84%, 68%, and 74%, respectively.", "title": "" }, { "docid": "neg:1840056_4", "text": "Wayfinding is part of everyday life. This study concentrates on the development of a conceptual model of human navigation in the U.S. Interstate Highway Network. It proposes three different levels of conceptual understanding that constitute the cognitive map: the Planning Level, the Instructional Level, and the Driver Level. This paper formally defines these three levels and examines the conceptual objects that comprise them. 
The problem treated here is a simpler version of the open problem of planning and navigating a multi-mode trip. We expect the methods and preliminary results found here for the Interstate system to apply to other systems such as river transportation networks and railroad networks.", "title": "" }, { "docid": "neg:1840056_5", "text": "Convolutional neural network (CNN), which comprises one or more convolutional and pooling layers followed by one or more fully-connected layers, has gained popularity due to its ability to learn fruitful representations from images or speeches, capturing local dependency and slight-distortion invariance. CNN has recently been applied to the problem of activity recognition, where 1D kernels are applied to capture local dependency over time in a series of observations measured at inertial sensors (3-axis accelerometers and gyroscopes). In this paper we present a multi-modal CNN where we use 2D kernels in both convolutional and pooling layers, to capture local dependency over time as well as spatial dependency over sensors. Experiments on benchmark datasets demonstrate the high performance of our multi-modal CNN, compared to several state of the art methods.", "title": "" }, { "docid": "neg:1840056_6", "text": "This review takes an evolutionary and chronological perspective on the development of strategic human resource management (SHRM) literature. We divide this body of work into seven themes that reflect the directions and trends researchers have taken over approximately thirty years of research. During this time the field took shape, developed rich conceptual foundations, and matured into a domain that has substantial influence on research activities in HR and related management disciplines. We trace how the field has evolved to its current state, articulate many of the major findings and contributions, and discuss how we believe it will evolve in the future. This approach contributes to the field of SHRM by synthesizing work in this domain and by highlighting areas of research focus that have received perhaps enough attention, as well as areas of research focus that, while promising, have remained largely unexamined. 1. Introduction Boxall, Purcell, and Wright (2007) distinguish among three major subfields of human resource management (HRM): micro HRM (MHRM), strategic HRM (SHRM), and international HRM (IHRM). Micro HRM covers the subfunctions of HR policy and practice and consists of two main categories: one with managing individuals and small groups (e.g., recruitment, selection, induction, training and development, performance management, and remuneration) and the other with managing work organization and employee voice systems (including union-management relations). Strategic HRM covers the overall HR strategies adopted by business units and companies and tries to measure their impacts on performance. Within this domain both design and execution issues are examined. International HRM covers HRM in companies operating across national boundaries. Since strategic HRM often covers the international context, we will include those international HRM articles that have a strategic focus. While most of the academic literature on SHRM has been published in the last 30 years, the intellectual roots of the field can be traced back to the 1920s in the U.S. (Kaufman, 2001).
The concept of labor as a human resource and the strategic view of HRM policy and practice were described and discussed by labor economists and industrial relations scholars of that period, such as John Commons. Progressive companies in the 1920s intentionally formulated and adopted innovative HR practices that represented a strategic approach to the management of labor. A small, but visibly elite group of employers in this time period …", "title": "" }, { "docid": "neg:1840056_7", "text": "Computer role playing games engage players through interleaved story and open-ended game play. We present an approach to procedurally generating, rendering, and making playable novel games based on a priori unknown story structures. These stories may be authored by humans or by computational story generation systems. Our approach couples player, designer, and algorithm to generate a novel game using preferences for game play style, general design aesthetics, and a novel story structure. Our approach is implemented in Game Forge, a system that uses search-based optimization to find and render a novel game world configuration that supports a sequence of plot points plus play style preferences. Additionally, Game Forge supports execution of the game through reactive control of game world logic and non-player character behavior.", "title": "" }, { "docid": "neg:1840056_8", "text": "While active learning has drawn broad attention in recent years, there are relatively few studies on stopping criterion for active learning. We here propose a novel model stability based stopping criterion, which considers the potential of each unlabeled examples to change the model once added to the training set. The underlying motivation is that active learning should terminate when the model does not change much by adding remaining examples. Inspired by the widely used stochastic gradient update rule, we use the gradient of the loss at each candidate example to measure its capability to change the classifier. Under the model change rule, we stop active learning when the changing ability of all remaining unlabeled examples is less than a given threshold. We apply the stability-based stopping criterion to two popular classifiers: logistic regression and support vector machines (SVMs). It can be generalized to a wide spectrum of learning models. Substantial experimental results on various UCI benchmark data sets have demonstrated that the proposed approach outperforms state-of-art methods in most cases.", "title": "" }, { "docid": "neg:1840056_9", "text": "Automated seizure detection using clinical electroencephalograms is a challenging machine learning problem because the multichannel signal often has an extremely low signal to noise ratio. Events of interest such as seizures are easily confused with signal artifacts (e.g, eye movements) or benign variants (e.g., slowing). Commercially available systems suffer from unacceptably high false alarm rates. Deep learning algorithms that employ high dimensional models have not previously been effective due to the lack of big data resources. In this paper, we use the TUH EEG Seizure Corpus to evaluate a variety of hybrid deep structures including Convolutional Neural Networks and Long Short-Term Memory Networks. We introduce a novel recurrent convolutional architecture that delivers 30% sensitivity at 7 false alarms per 24 hours. 
We have also evaluated our system on a held-out evaluation set based on the Duke University Seizure Corpus and demonstrate that performance trends are similar to the TUH EEG Seizure Corpus. This is a significant finding because the Duke corpus was collected with different instrumentation and at different hospitals. Our work shows that deep learning architectures that integrate spatial and temporal contexts are critical to achieving state of the art performance and will enable a new generation of clinically-acceptable technology.", "title": "" }, { "docid": "neg:1840056_10", "text": "The ability to computationally predict whether a compound treats a disease would improve the economy and success rate of drug approval. This study describes Project Rephetio to systematically model drug efficacy based on 755 existing treatments. First, we constructed Hetionet (neo4j.het.io), an integrative network encoding knowledge from millions of biomedical studies. Hetionet v1.0 consists of 47,031 nodes of 11 types and 2,250,197 relationships of 24 types. Data were integrated from 29 public resources to connect compounds, diseases, genes, anatomies, pathways, biological processes, molecular functions, cellular components, pharmacologic classes, side effects, and symptoms. Next, we identified network patterns that distinguish treatments from non-treatments. Then, we predicted the probability of treatment for 209,168 compound-disease pairs (het.io/repurpose). Our predictions validated on two external sets of treatment and provided pharmacological insights on epilepsy, suggesting they will help prioritize drug repurposing candidates. This study was entirely open and received realtime feedback from 40 community members.", "title": "" }, { "docid": "neg:1840056_11", "text": "These days, microarray gene expression data are playing an essential role in cancer classifications. However, due to the availability of small number of effective samples compared to the large number of genes in microarray data, many computational methods have failed to identify a small subset of important genes. Therefore, it is a challenging task to identify small number of disease-specific significant genes related for precise diagnosis of cancer sub classes. In this paper, particle swarm optimization (PSO) method along with adaptive K-nearest neighborhood (KNN) based gene selection technique are proposed to distinguish a small subset of useful genes that are sufficient for the desired classification purpose. A proper value of K would help to form the appropriate numbers of neighborhood to be explored and hence to classify the dataset accurately. Thus, a heuristic for selecting the optimal values of K efficiently, guided by the classification accuracy is also proposed. This proposed technique of finding minimum possible meaningful set of genes is applied on three benchmark microarray datasets, namely the small round blue cell tumor (SRBCT) data, the acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML) data and the mixed-lineage leukemia (MLL) data. Results demonstrate the usefulness of the proposed method in terms of classification accuracy on blind test samples, number of informative genes and computing time. Further, the usefulness and universal characteristics of the identified genes are reconfirmed by using different classifiers, such as support vector machine (SVM). 
2014 Published by Elsevier Ltd.", "title": "" }, { "docid": "neg:1840056_12", "text": "Text representations using neural word embeddings have proven effective in many NLP applications. Recent researches adapt the traditional word embedding models to learn vectors of multiword expressions (concepts/entities). However, these methods are limited to textual knowledge bases (e.g., Wikipedia). In this paper, we propose a novel and simple technique for integrating the knowledge about concepts from two large scale knowledge bases of different structure (Wikipedia and Probase) in order to learn concept representations. We adapt the efficient skip-gram model to seamlessly learn from the knowledge in Wikipedia text and Probase concept graph. We evaluate our concept embedding models on two tasks: (1) analogical reasoning, where we achieve a state-of-the-art performance of 91% on semantic analogies, (2) concept categorization, where we achieve a state-of-the-art performance on two benchmark datasets achieving categorization accuracy of 100% on one and 98% on the other. Additionally, we present a case study to evaluate our model on unsupervised argument type identification for neural semantic parsing. We demonstrate the competitive accuracy of our unsupervised method and its ability to better generalize to out of vocabulary entity mentions compared to the tedious and error prone methods which depend on gazetteers and regular expressions.", "title": "" }, { "docid": "neg:1840056_13", "text": "This prospective, randomized study evaluated continuous-flow cold therapy for postoperative pain in outpatient arthroscopic anterior cruciate ligament (ACL) reconstructions. In group 1, cold therapy was constant for 3 days then as needed in days 4 through 7. Group 2 had no cold therapy. Evaluations and diaries were kept at 1, 2, and 8 hours after surgery, and then daily. Pain was assessed using the VAS and Likert scales. There were 51 cold and 49 noncold patients included. Continuous passive movement (CPM) use averaged 54 hours for cold and 41 hours for noncold groups (P=.003). Prone hangs were done for 192 minutes in the cold group and 151 minutes in the noncold group. Motion at 1 week averaged 5/88 for the cold group and 5/79 the noncold group. The noncold group average visual analog scale (VAS) pain and Likert pain scores were always greater than the cold group. The noncold group average Vicodin use (Knoll, Mt. Olive, NJ) was always greater than the cold group use (P=.001). Continuous-flow cold therapy lowered VAS and Likert scores, reduced Vicodin use, increased prone hangs, CPM, and knee flexion. Continuous-flow cold therapy is safe and effective for outpatient ACL reconstruction reducing pain medication requirements.", "title": "" }, { "docid": "neg:1840056_14", "text": "With the enormous growth of digital content in internet, various types of online reviews such as product and movie reviews present a wealth of subjective information that can be very helpful for potential users. Sentiment analysis aims to use automated tools to detect subjective information from reviews. Up to now as there are few researches conducted on feature selection in sentiment analysis, there are very rare works for Persian sentiment analysis. This paper considers the problem of sentiment classification using different feature selection methods for online customer reviews in Persian language. 
Three of the challenges of Persian text are using of a wide variety of declensional suffixes, different word spacing and many informal or colloquial words. In this paper we study these challenges by proposing a model for sentiment classification of Persian review documents. The proposed model is based on stemming and feature selection and is employed Naive Bayes algorithm for classification. We evaluate the performance of the model on a collection of cellphone reviews, where the results show the effectiveness of the proposed approaches.", "title": "" }, { "docid": "neg:1840056_15", "text": "BACKGROUND\nOne of the main methods for evaluation of fetal well-being is analysis of Doppler flow velocity waveform of fetal vessels. Evaluation of Doppler wave of the middle cerebral artery can predict most of the at-risk fetuses in high-risk pregnancies. In this study, we tried to determine the normal ranges and their trends during pregnancy of Doppler flow velocity indices (resistive index, pulsatility index, systolic-to-diastolic ratio, and peak systolic velocity) of middle cerebral artery in 20 - 40 weeks normal pregnancies in Iranians.\n\n\nMETHODS\nIn this cross-sectional study, 1037 women with normal pregnancy and gestational age of 20 to 40 weeks were investigated for fetal middle cerebral artery Doppler examination.\n\n\nRESULTS\nResistive index, pulsatility index, and systolic-to-diastolic ratio values of middle cerebral artery decreased in a parabolic pattern while the peak systolic velocity value increased linearly with progression of the gestational age. These changes were statistically significant (P<0.001 for all four variables) and were more characteristic during late weeks of pregnancy. The mean fetal heart rate was also significantly (P<0.001) reduced in correlation with the gestational age.\n\n\nCONCLUSION\nDoppler waveform indices of fetal middle cerebral artery are useful means for determining fetal well-being. Herewith, the normal ranges of Doppler waveform indices for an Iranian population are presented.", "title": "" }, { "docid": "neg:1840056_16", "text": "One of the challenges in evaluating multi-object video detection, tracking and classification systems is having publically available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically only have ground truth track IDs, while classification video data sets only have ground truth class-label IDs. The former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. This paper describes an advancement of the ground truth meta-data for the DARPA Neovision2 Tower data set to allow both the evaluation of tracking and classification. The ground truth data sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person, Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground truth data also contains the original bounding box coordinates together with new bounding boxes in instances where un-annotated objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter the field of view. 
This will provide: a solid foundation for evaluating the performance of multi-object tracking of different types of objects, a straightforward comparison of tracking system performance using the standard Multi Object Tracking (MOT) framework, and classification performance using the Neovision2 metrics. These data have been hosted publically.", "title": "" }, { "docid": "neg:1840056_17", "text": "The numerical representation precision required by the computations performed by Deep Neural Networks (DNNs) varies across networks and between layers of a same network. This observation motivates a precision-based approach to acceleration which takes into account both the computational structure and the required numerical precision representation. This work presents Stripes (STR), a hardware accelerator that uses bit-serial computations to improve energy efficiency and performance. Experimental measurements over a set of state-of-the-art DNNs for image classification show that STR improves performance over a state-of-the-art accelerator from 1.35× to 5.33× and by 2.24× on average. STR’s area and power overhead are estimated at 5 percent and 12 percent respectively. STR is 2.00× more energy efficient than the baseline.", "title": "" }, { "docid": "neg:1840056_18", "text": "Botnets have traditionally been seen as a threat to personal computers; however, the recent shift to mobile platforms resulted in a wave of new botnets. Due to its popularity, Android mobile Operating System became the most targeted platform. In spite of rising numbers, there is a significant gap in understanding the nature of mobile botnets and their communication characteristics. In this paper, we address this gap and provide a deep analysis of Command and Control (C&C) and built-in URLs of Android botnets detected since the first appearance of the Android platform. By combining both static and dynamic analyses with visualization, we uncover the relationships between the majority of the analyzed botnet families and offer an insight into each malicious infrastructure. As a part of this study we compile and offer to the research community a dataset containing 1929 samples representing 14 Android botnet families.", "title": "" }, { "docid": "neg:1840056_19", "text": "This paper represents the design and implementation of an indoor based navigation system for visually impaired people using a path finding algorithm and a wearable cap. This development of the navigation system consists of two modules: a Wearable part and a schematic of the area where the navigation system works by guiding the user. The wearable segment consists of a cap designed with IR receivers, an Arduino Nano processor, a headphone and an ultrasonic sensor.
The schematic segment plans for the movement directions inside a room by dividing the room area into cells with a predefined matrix containing location information. For navigating the user, sixteen IR transmitters which continuously monitor the user position are placed at equal interval in the XY (8 in X-plane and 8 in Y-plane) directions of the indoor environment. A Braille keypad is used by the user where he gave the cell number for determining destination position. A path finding algorithm has been developed for determining the position of the blind person and guide him/her to his/her destination. The developed algorithm detects the position of the user by receiving continuous data from transmitter and guide the user to his/her destination by voice command. The ultrasonic sensor mounted on the cap detects the obstacles along the pathway of the visually impaired person. This proposed navigation system does not require any complex infrastructure design or the necessity of holding any extra assistive device by the user (i.e. augmented cane, smartphone, cameras). In the proposed design, prerecorded voice command will provide movement guideline to every edge of the indoor environment according to the user's destination choice. This makes this navigation system relatively simple and user friendly for those who are not much familiar with the most advanced technology and people with physical disabilities. Moreover, this proposed navigation system does not need GPS or any telecommunication networks which makes it suitable for use in rural areas where there is no telecommunication network coverage. In conclusion, the proposed system is relatively cheaper to implement in comparison to other existing navigation system, which will contribute to the betterment of the visually impaired people's lifestyle of developing and under developed countries.", "title": "" } ]
1840057
Hybrid CPU-GPU Framework for Network Motifs
[ { "docid": "pos:1840057_0", "text": "Spatial pyramid matching is a standard architecture for categorical image retrieval. However, its performance is largely limited by the prespecified rectangular spatial regions when pooling local descriptors. In this paper, we propose to learn object-shaped and directional receptive fields for image categorization. In particular, different objects in an image are seamlessly constructed by superpixels, while the direction captures human gaze shifting path. By generating a number of superpixels in each image, we construct graphlets to describe different objects. They function as the object-shaped receptive fields for image comparison. Due to the huge number of graphlets in an image, a saliency-guided graphlet selection algorithm is proposed. A manifold embedding algorithm encodes graphlets with the semantics of training image tags. Then, we derive a manifold propagation to calculate the postembedding graphlets by leveraging visual saliency maps. The sequentially propagated graphlets constitute a path that mimics human gaze shifting. Finally, we use the learned graphlet path as receptive fields for local image descriptor pooling. The local descriptors from similar receptive fields of pairwise images more significantly contribute to the final image kernel. Thorough experiments demonstrate the advantage of our approach.", "title": "" } ]
[ { "docid": "neg:1840057_0", "text": "We select a menu of seven popular decision theories and embed each theory in five models of stochastic choice, including tremble, Fechner and random utility model. We find that the estimated parameters of decision theories differ significantly when theories are combined with different models. Depending on the selected model of stochastic choice we obtain different rankings of decision theories with regard to their goodness of fit to the data. The fit of all analyzed decision theories improves significantly when they are embedded in a Fechner model of heteroscedastic truncated errors or a random utility model. Copyright  2009 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "neg:1840057_1", "text": "Mul-T is a parallel Lisp system, based on Multilisp's future construct, that has been developed to run on an Encore Multimax multiprocessor. Mul-T is an extended version of the Yale T system and uses the T system's ORBIT compiler to achieve “production quality” performance on stock hardware — about 100 times faster than Multilisp. Mul-T shows that futures can be implemented cheaply enough to be useful in a production-quality system. Mul-T is fully operational, including a user interface that supports managing groups of parallel tasks.", "title": "" }, { "docid": "neg:1840057_2", "text": "The development of topical cosmetic anti-aging products is becoming increasingly sophisticated. This is demonstrated by the benefit agents selected and the scientific approaches used to identify them, treatment protocols that increasingly incorporate multi-product regimens, and the level of rigor in the clinical testing used to demonstrate efficacy. Consistent with these principles, a new cosmetic anti-aging regimen was recently developed. The key product ingredients were identified based on an understanding of the key mechanistic themes associated with aging at the genomic level coupled with appropriate in vitro testing. The products were designed to provide optimum benefits when used in combination in a regimen format. This cosmetic regimen was then tested for efficacy against the appearance of facial wrinkles in a 24-week clinical trial compared with 0.02% tretinoin, a recognized benchmark prescription treatment for facial wrinkling. The cosmetic regimen significantly improved wrinkle appearance after 8 weeks relative to tretinoin and was better tolerated. Wrinkle appearance benefits from the two treatments in cohorts of subjects who continued treatment through 24 weeks were also comparable.", "title": "" }, { "docid": "neg:1840057_3", "text": "Taking care and maintenance of a healthy population is the Strategy of each country. Information and communication technologies in the health care system have led to many changes in order to improve the quality of health care services to patients, rational spending time and reduce costs. In the booming field of IT research, the reach of drug delivery, information on grouping of similar drugs has been lacking. The wealth distribution and drug affordability at a certain demographic has been interlinked and proposed in this paper. Looking at the demographic we analyze and group the drugs based on target action and link this to the wealth and the people to medicine ratio, which can be accomplished via data mining and web mining. 
The data thus mined will be analysed and made available to public and commercial purpose for their further knowledge and benefit.", "title": "" }, { "docid": "neg:1840057_4", "text": "The commit processing in a Distributed Real Time Database (DRTDBS) can significantly increase execution time of a transaction. Therefore, designing a good commit protocol is important for the DRTDBS; the main challenge is the adaptation of standard commit protocol into the real time database system and so, decreasing the number of missed transaction in the systems. In these papers we review the basic commit protocols and the other protocols depend on it, for enhancing the transaction performance in DRTDBS. We propose a new commit protocol for reducing the number of transaction that missing their deadline. Keywords— DRTDBS, Commit protocols, Commit processing, 2PC protocol, 3PC protocol, Missed Transaction, Abort Transaction.", "title": "" }, { "docid": "neg:1840057_5", "text": "Information technology (IT) such as Electronic Data Interchange (EDI), Radio Frequency Identification Technology (RFID), wireless, the Internet and World Wide Web (WWW), and Information Systems (IS) such as Electronic Commerce (E-Commerce) systems and Enterprise Resource Planning (ERP) systems have had tremendous impact in education, healthcare, manufacturing, transportation, retailing, pure services, and even war. Many organizations turned to IT/IS to help them achieve their goals; however, many failed to achieve the full potential of IT/IS. These failures can be attributed at least in part to a weak link in the planning process. That weak link is the IT/IS justification process. The decision-making process has only grown more difficult in recent years with the increased complexity of business brought about by the rapid growth of supply chain management, the virtual enterprise and E-business. These are but three of the many changes in the business environment over the past 10–12 years. The complexities of this dynamic new business environment should be taken into account in IT/IS justification. We conducted a review of the current literature on IT/IS justification. The purpose of the literature review was to assemble meaningful information for the development of a framework for IT/IS evaluation that better reflects the new business environment. A suitable classification scheme has been proposed for organizing the literature reviewed. Directions for future research are indicated. 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840057_6", "text": "Collective entity disambiguation, or collective entity linking aims to jointly resolve multiple mentions by linking them to their associated entities in a knowledge base. Previous works are primarily based on the underlying assumption that entities within the same document are highly related. However, the extent to which these entities are actually connected in reality is rarely studied and therefore raises interesting research questions. For the first time, this paper shows that the semantic relationships between mentioned entities within a document are in fact less dense than expected. This could be attributed to several reasons such as noise, data sparsity, and knowledge base incompleteness. As a remedy, we introduce MINTREE, a new tree-based objective for the problem of entity disambiguation. The key intuition behind MINTREE is the concept of coherence relaxation which utilizes the weight of a minimum spanning tree to measure the coherence between entities. 
Based on this new objective, we design Pair-Linking, a novel iterative solution for the MINTREE optimization problem. The idea of Pair-Linking is simple: instead of considering all the given mentions, Pair-Linking iteratively selects a pair with the highest confidence at each step for decision making. Via extensive experiments on 8 benchmark datasets, we show that our approach is not only more accurate but also surprisingly faster than many state-of-the-art collective linking algorithms.", "title": "" }, { "docid": "neg:1840057_7", "text": "To cite: He A, Kwatra SG, Kazi N, et al. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/bcr-2016215335 DESCRIPTION A woman aged 45 years presented for evaluation of skin lesions. She reported an 8–9-year history of occasionally tender, waxing-and-waning skin nodules refractory to dapsone, prednisone and methotrexate. Examination revealed multiple indurated subcutaneous nodules distributed on the upper extremities, with scattered patches of lipoatrophy in areas of nodule regression (figure 1). Her medical history was unremarkable; CBC and CMP were within normal limits, with no history of radiotherapy or evidence of internal organ involvement. She had a positive ANA titre (1:160, speckled), but negative anti-dsDNA, anti-Smith, anti-Ro and anti-La antibodies. Differential diagnosis included erythema nodosum (EN), erythema induratum of Bazin (EIB), lupus profundus (LP) and cutaneous lymphoma. Initial wedge biopsy in 2008 disclosed a predominantly lobular panniculitic process with some septal involvement (figure 2A). Broad zones of necrosis were present (figure 2B). The infiltrate consisted of a pleomorphic population of lymphocytes with occasional larger atypical lymphocytes (figure 2C). There were foci of adipocyte rimming by the atypical lymphocytes (figure 2C). Immunophenotyping revealed predominance of CD3+ T cells with some CD20+ B-cell aggregates. The atypical cells stained CD4 and CD8 in approximately equal ratios. TIA-1 was positive in many of the atypical cells but not prominently enough to render a diagnosis of cytotoxic T-cell lymphoma. T-cell receptor PCR studies showed polyclonality. Subsequent biopsies performed annually after treatment with prednisone in 2008 and 2010, dapsone in 2009 and methotrexate in 2012 showed very similar pathological and molecular features. Adipocyte rimming and TCR polyclonality persisted. EN is characterised by subcutaneous nodules on the lower extremities in association with elevated erythrocyte sedimentation rate (ESR) and C reactive protein (CRP), influenza-like prodrome preceding nodule formation and self-limiting course. Histologically, EN shows a mostly septal panniculitis with radial granulomas. EN was ruled out on the basis of normal ESR (6) and CRP (<0.1), chronic relapsing course and predominantly lobular panniculitis process histologically. EIB typically presents with violaceous nodules located on the posterior lower extremities, with arms rarely affected, of patients with a history of tuberculosis (TB). Histologically, EIB shows granulomatous inflammation with focal necrosis, vasculitis and septal fibrosis. Our patient had no evidence or history of TB infection and presented with nodules of a different clinical morphology. Ultimately, this constellation of histological and immunophenotypic findings showed an atypical panniculitic T-lymphocytic infiltrate. 
Although the lesion showed a lobular panniculitis with features that could be seen in subcutaneous panniculitis-like T-cell lymphoma (SPTCL), the presence of plasma cells, absence of CD8 and TIA restriction and T-cell polyclonality did not definitively support that", "title": "" }, { "docid": "neg:1840057_8", "text": "OBJECTIVES\nInformation overload in electronic medical records can impede providers' ability to identify important clinical data and may contribute to medical error. An understanding of the information requirements of ICU providers will facilitate the development of information systems that prioritize the presentation of high-value data and reduce information overload. Our objective was to determine the clinical information needs of ICU physicians, compared to the data available within an electronic medical record.\n\n\nDESIGN\nProspective observational study and retrospective chart review.\n\n\nSETTING\nThree ICUs (surgical, medical, and mixed) at an academic referral center.\n\n\nSUBJECTS\nNewly admitted ICU patients and physicians (residents, fellows, and attending staff).\n\n\nMEASUREMENTS AND MAIN RESULTS\nThe clinical information used by physicians during the initial diagnosis and treatment of admitted patients was captured using a questionnaire. Clinical information concepts were ranked according to the frequency of reported use (primary outcome) and were compared to information availability in the electronic medical record (secondary outcome). Nine hundred twenty-five of 1,277 study questionnaires (408 patients) were completed. Fifty-one clinical information concepts were identified as being useful during ICU admission. A median (interquartile range) of 11 concepts (6-16) was used by physicians per patient admission encounter with four used greater than 50% of the time. Over 25% of the clinical data available in the electronic medical record was never used, and only 33% was used greater than 50% of the time by admitting physicians.\n\n\nCONCLUSIONS\nPhysicians use a limited number of clinical information concepts at the time of patient admission to the ICU. The electronic medical record contains an abundance of unused data. Better electronic data management strategies are needed, including the priority display of frequently used clinical concepts within the electronic medical record, to improve the efficiency of ICU care.", "title": "" }, { "docid": "neg:1840057_9", "text": "We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity. The embeddings generated by Deep Speaker can be used for many tasks, including speaker identification, verification, and clustering. We experiment with ResCNN and GRU architectures to extract the acoustic features, then mean pool to produce utterance-level speaker embeddings, and train using triplet loss based on cosine similarity. Experiments on three distinct datasets suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For example, Deep Speaker reduces the verification equal error rate by 50% (relatively) and improves the identification accuracy by 60% (relatively) on a text-independent dataset. We also present results that suggest adapting from a model trained with Mandarin can improve accuracy for English speaker recognition.", "title": "" }, { "docid": "neg:1840057_10", "text": "BACKGROUND\nMost individuals with mood disorders experience psychiatric and/or medical comorbidity. 
Available treatment guidelines for major depressive disorder (MDD) and bipolar disorder (BD) have focused on treating mood disorders in the absence of comorbidity. Treating comorbid conditions in patients with mood disorders requires sufficient decision support to inform appropriate treatment.\n\n\nMETHODS\nThe Canadian Network for Mood and Anxiety Treatments (CANMAT) task force sought to prepare evidence- and consensus-based recommendations on treating comorbid conditions in patients with MDD and BD by conducting a systematic and qualitative review of extant data. The relative paucity of studies in this area often required a consensus-based approach to selecting and sequencing treatments.\n\n\nRESULTS\nSeveral principles emerge when managing comorbidity. They include, but are not limited to: establishing the diagnosis, risk assessment, establishing the appropriate setting for treatment, chronic disease management, concurrent or sequential treatment, and measurement-based care.\n\n\nCONCLUSIONS\nEfficacy, effectiveness, and comparative effectiveness research should emphasize treatment and management of conditions comorbid with mood disorders. Clinicians are encouraged to screen and systematically monitor for comorbid conditions in all individuals with mood disorders. The common comorbidity in mood disorders raises fundamental questions about overlapping and discrete pathoetiology.", "title": "" }, { "docid": "neg:1840057_11", "text": "Today, both the military and commercial sectors are placing an increased emphasis on global communications. This has prompted the development of several low earth orbit satellite systems that promise worldwide connectivity and real-time voice communications. This article provides a tutorial overview of the IRIDIUM low earth orbit satellite system and performance results obtained via simulation. First, it presents an overview of key IRIDIUM design parameters and features. Then, it examines the issues associated with routing in a dynamic network topology, focusing on network management and routing algorithm selection. Finally, it presents the results of the simulation and demonstrates that the IRIDIUM system is a robust system capable of meeting published specifications.", "title": "" }, { "docid": "neg:1840057_12", "text": "In this paper we have designed and implemented (15, k) a BCH Encoder and decoder using VHDL for reliable data transfer in AWGN channel with multiple error correction control. The digital logic implementation of binary encoding of multiple error correcting BCH code (15, k) of length n=15 over GF (2 4 ) with irreducible primitive polynomial x 4 +x+1 is organized into shift register circuits. Using the cyclic codes, the reminder b(x) can be obtained in a linear (15-k) stage shift register with feedback connections corresponding to the coefficients of the generated polynomial. Three encoders are designed using VHDL to encode the single, double and triple error correcting BCH code (15, k) corresponding to the coefficient of generated polynomial. Information bit is transmitted in unchanged form up to K clock cycles and during this period parity bits are calculated in the LFSR then the parity bits are transmitted from k+1 to 15 clock cycles. Total 15-k numbers of parity bits with k information bits are transmitted in 15 code word. In multiple error correction method, we have implemented (15, 5 ,3 ) ,(15,7, 2) and (15, 11, 1) BCH encoder and decoder using VHDL and the simulation is done using Xilinx ISE 14.2. 
KeywordsBCH, BER, SNR, BCH Encoder, Decoder VHDL, Error Correction, AWGN, LFSR", "title": "" }, { "docid": "neg:1840057_13", "text": "CAN bus is ISO international standard serial communication protocol. It is one of the most widely used fieldbus in the world. It has become the standard bus of embedded industrial control LAN. Ethernet is the most common communication protocol standard that is applied in the existing LAN. Networked industrial control usually adopts fieldbus and Ethernet network, thus the protocol conversion problems of the heterogeneous network composed of Ethernet and CAN bus has become one of the research hotspots in the technology of the industrial control network. STM32F103RC ARM microprocessor was used in the design of the Ethernet-CAN protocol conversion module, the simplified TCP/IP communication protocol uIP protocol was adopted to improve the efficiency of the protocol conversion and guarantee the stability of the system communication. The results of the experiments show that the designed module can realize high-speed and transparent protocol conversion.", "title": "" }, { "docid": "neg:1840057_14", "text": "The kinematics of manipulators is a central problem in the automatic control of robot manipulators. Theoretical background for the analysis of the 5 Dof Lynx-6 educational Robot Arm kinematics is presented in this paper. The kinematics problem is defined as the transformation from the Cartesian space to the joint space and vice versa. The Denavit-Harbenterg (D-H) model of representation is used to model robot links and joints in this study. Both forward and inverse kinematics solutions for this educational manipulator are presented, An effective method is suggested to decrease multiple solutions in inverse kinematics. A visual software package, named MSG, is also developed for testing Motional Characteristics of the Lynx-6 Robot arm. The kinematics solutions of the software package were found to be identical with the robot arm’s physical motional behaviors. Keywords—Lynx 6, robot arm, forward kinematics, inverse kinematics, software, DH parameters, 5 DOF ,SSC-32 , simulator.", "title": "" }, { "docid": "neg:1840057_15", "text": "Do investments in customer satisfaction lead to excess returns? If so, are these returns associated with higher stock market risk? The empirical evidence presented in this article suggests that the answer to the first question is yes, but equally remarkable, the answer to the second question is no, suggesting that satisfied customers are economic assets with high returns/low risk. Although these results demonstrate stock market imperfections with respect to the time it takes for share prices to adjust, they are consistent with previous studies in marketing in that a firm’s satisfied customers are likely to improve both the level and the stability of net cash flows. The implication, implausible as it may seem in other contexts, is high return/low risk. Specifically, the authors find that customer satisfaction, as measured by the American Customer Satisfaction Index (ACSI), is significantly related to market value of equity. Yet news about ACSI results does not move share prices. This apparent inconsistency is the catalyst for examining whether excess stock returns might be generated as a result. The authors present two stock portfolios: The first is a paper portfolio that is back tested, and the second is an actual case. At low systematic risk, both outperform the market by considerable margins. 
In other words, it is possible to beat the market consistently by investing in firms that do well on the ACSI.", "title": "" }, { "docid": "neg:1840057_16", "text": "OVERVIEW: Next-generation semiconductor factories need to support miniaturization below 100 nm and have higher production efficiency, mainly of 300-mm-diameter wafers. Particularly to reduce the price of semiconductor devices, shorten development time [thereby reducing the TAT (turn-around time)], and support frequent product changeovers, semiconductor manufacturers must enhance the productivity of their systems. To meet these requirements, Hitachi proposes solutions that will support e-manufacturing on the next-generation semiconductor production line (see Fig. 1). Yasutsugu Usami Isao Kawata Hideyuki Yamamoto Hiroyoshi Mori Motoya Taniguchi, Dr. Eng.", "title": "" }, { "docid": "neg:1840057_17", "text": "The investigators proposed that transgression-related interpersonal motivations result from 3 psychological parameters: forbearance (abstinence from avoidance and revenge motivations, and maintenance of benevolence), trend forgiveness (reductions in avoidance and revenge, and increases in benevolence), and temporary forgiveness (transient reductions in avoidance and revenge, and transient increases in benevolence). In 2 studies, the investigators examined this 3-parameter model. Initial ratings of transgression severity and empathy were directly related to forbearance but not trend forgiveness. Initial responsibility attributions were inversely related to forbearance but directly related to trend forgiveness. When people experienced high empathy and low responsibility attributions, they also tended to experience temporary forgiveness. The distinctiveness of each of these 3 parameters underscores the importance of studying forgiveness temporally.", "title": "" }, { "docid": "neg:1840057_18", "text": "In the era of the Internet of Things (IoT), an enormous amount of sensing devices collect and/or generate various sensory data over time for a wide range of fields and applications. Based on the nature of the application, these devices will result in big or fast/real-time data streams. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a crucial process that makes IoT a worthy paradigm for businesses and a quality-of-life improving technology. In this paper, we provide a thorough overview on using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate the analytics and learning in the IoT domain. We start by articulating IoT data characteristics and identifying two major treatments for IoT data from a machine learning perspective, namely IoT big data analytics and IoT streaming data analytics. We also discuss why DL is a promising approach to achieve the desired analytics in these types of data and applications. The potential of using emerging DL techniques for IoT data analytics are then discussed, and its promises and challenges are introduced. We present a comprehensive background on different DL architectures and algorithms. We also analyze and summarize major reported research attempts that leveraged DL in the IoT domain. The smart IoT devices that have incorporated DL in their intelligence background are also discussed. DL implementation approaches on the fog and cloud centers in support of IoT applications are also surveyed. Finally, we shed light on some challenges and potential directions for future research. 
At the end of each section, we highlight the lessons learned based on our experiments and review of the recent literature.", "title": "" } ]
1840058
Effects of Website Interactivity on Online Retail Shopping Behavior
[ { "docid": "pos:1840058_0", "text": "A key issue facing information systems researchers and practitioners has been the difficulty in creating favorable user reactions to new technologies. Insufficient or ineffective training has been identified as one of the key factors underlying this disappointing reality. Among the various enhancements to training being examined in research, the role of intrinsic motivation as a lever to create favorable user perceptions has not been sufficiently exploited. In this research, two studies were conducted to compare a traditional training method with a training method that included a component aimed at enhancing intrinsic motivation. The results strongly favored the use of an intrinsic motivator during training. Key implications for theory and practice are discussed. Allen Lee was the accepting senior editor for this paper. Sometimes when I am at my computer, I say to my wife, \"I'll be done in just a minute\" and the next thing I know she's standing over me saying, \"It's been an hour!\" (Collins 1989, p. 11). Investment in emerging information technology applications can lead to productivity gains only if they are accepted and used. Several theoretical perspectives have emphasized the importance of user perceptions of ease of use as a key factor affecting acceptance of information technology. Favorable ease of use perceptions are necessary for initial acceptance (Davis et al. 1989), which of course is essential for adoption and continued use. During the early stages of learning and use, ease of use perceptions are significantly affected by training (e.g., Venkatesh and Davis 1996). Investments in training by organizations have been very high and have continued to grow rapidly. Kelly (1982) reported a figure of $100B, which doubled in about a decade (McKenna 1990). In spite of such large investments in training, only 10% of training leads to a change in behavior on trainees' jobs (Georgenson 1982). Therefore, it is important to understand the most effective training methods (e.g., Facteau et al. 1995) and to improve existing training methods in order to foster favorable perceptions among users about the ease of use of a technology, which in turn should lead to acceptance and usage. Prior research in psychology (e.g., Deci 1975) suggests that intrinsic motivation during training leads to beneficial outcomes. However, traditional training methods in information systems research have tended to emphasize imparting knowledge to potential users (e.g., Nelson and Cheney 1987) while not paying sufficient attention to intrinsic motivation during training. The two field …", "title": "" }, { "docid": "pos:1840058_1", "text": "Received: 12 July 2000 Revised: 20 August 2001 : 30 July 2002 Accepted: 15 October 2002 Abstract This paper explores factors that influence consumer’s intentions to purchase online at an electronic commerce website. Specifically, we investigate online purchase intention using two different perspectives: a technology-oriented perspective and a trust-oriented perspective. We summarise and review the antecedents of online purchase intention that have been developed within these two perspectives. An empirical study in which the contributions of both perspectives are investigated is reported. We study the perceptions of 228 potential online shoppers regarding trust and technology and their attitudes and intentions to shop online at particular websites.
In terms of relative contributions, we found that the trust-antecedent ‘perceived risk’ and the technology-antecedent ‘perceived ease-of-use’ directly influenced the attitude towards purchasing online. European Journal of Information Systems (2003) 12, 41–48. doi:10.1057/palgrave.ejis.3000445", "title": "" } ]
[ { "docid": "neg:1840058_0", "text": "Although space syntax has been successfully applied to many urban GIS studies, there is still a need to develop robust algorithms that support the automated derivation of graph representations. These graph structures are needed to apply the computational principles of space syntax and derive the morphological view of an urban structure. So far the application of space syntax principles to the study of urban structures has been a partially empirical and non-deterministic task, mainly due to the fact that an urban structure is modeled as a set of axial lines whose derivation is a non-computable process. This paper proposes an alternative model of space for the application of space syntax principles, based on the concepts of characteristic points defined as the nodes of an urban structure schematised as a graph. This method has several advantages over the axial line representation: it is computable and cognitively meaningful. Our proposal is illustrated by a case study applied to the city of Gävle in Sweden. We will also show that this method has several nice properties that surpass the axial line technique.", "title": "" }, { "docid": "neg:1840058_1", "text": "Emojis have gone viral on the Internet across platforms and devices. Interwoven into our daily communications, they have become a ubiquitous new language. However, little has been done to analyze the usage of emojis at scale and in depth. Why do some emojis become especially popular while others don’t? How are people using them among the words? In this work, we take the initiative to study the collective usage and behavior of emojis, and specifically, how emojis interact with their context. We base our analysis on a very large corpus collected from a popular emoji keyboard, which contains a full month of inputs from millions of users. Our analysis is empowered by a state-of-the-art machine learning tool that computes the embeddings of emojis and words in a semantic space. We find that emojis with clear semantic meanings are more likely to be adopted. While entity-related emojis are more likely to be used as alternatives to words, sentiment-related emojis often play a complementary role in a message. Overall, emojis are significantly more prevalent in a senti-", "title": "" }, { "docid": "neg:1840058_2", "text": "High-dimensional pattern classification was applied to baseline and multiple follow-up MRI scans of the Alzheimer's Disease Neuroimaging Initiative (ADNI) participants with mild cognitive impairment (MCI), in order to investigate the potential of predicting short-term conversion to Alzheimer's Disease (AD) on an individual basis. MCI participants that converted to AD (average follow-up 15 months) displayed significantly lower volumes in a number of grey matter (GM) regions, as well as in the white matter (WM). They also displayed more pronounced periventricular small-vessel pathology, as well as an increased rate of increase of such pathology. Individual person analysis was performed using a pattern classifier previously constructed from AD patients and cognitively normal (CN) individuals to yield an abnormality score that is positive for AD-like brains and negative otherwise. The abnormality scores measured from MCI non-converters (MCI-NC) followed a bimodal distribution, reflecting the heterogeneity of this group, whereas they were positive in almost all MCI converters (MCI-C), indicating extensive patterns of AD-like brain atrophy in almost all MCI-C.
Both MCI subgroups had similar MMSE scores at baseline. A more specialized classifier constructed to differentiate converters from non-converters based on their baseline scans provided good classification accuracy reaching 81.5%, evaluated via cross-validation. These pattern classification schemes, which distill spatial patterns of atrophy to a single abnormality score, offer promise as biomarkers of AD and as predictors of subsequent clinical progression, on an individual patient basis.", "title": "" }, { "docid": "neg:1840058_3", "text": "We present the first study that evaluates both speaker and listener identification for direct speech in literary texts. Our approach consists of two steps: identification of speakers and listeners near the quotes, and dialogue chain segmentation. Evaluation results show that this approach outperforms a rule-based approach that is stateof-the-art on a corpus of literary texts.", "title": "" }, { "docid": "neg:1840058_4", "text": "In this work we perform an analysis of probabilistic approaches to recommendation upon a different validation perspective, which focuses on accuracy metrics such as recall and precision of the recommendation list. Traditionally, state-of-art approches to recommendations consider the recommendation process from a “missing value prediction” perspective. This approach simplifies the model validation phase that is based on the minimization of standard error metrics such as RMSE. However, recent studies have pointed several limitations of this approach, showing that a lower RMSE does not necessarily imply improvements in terms of specific recommendations. We demonstrate that the underlying probabilistic framework offers several advantages over traditional methods, in terms of flexibility in the generation of the recommendation list and consequently in the accuracy of recommendation.", "title": "" }, { "docid": "neg:1840058_5", "text": "Two important cues to female physical attractiveness are body mass index (BMI) and shape. In front view, it seems that BMI may be more important than shape; however, is it true in profile where shape cues may be stronger? There is also the question of whether men and women have the same perception of female physical attractiveness. Some studies have suggested that they do not, but this runs contrary to mate selection theory. This predicts that women will have the same perception of female attractiveness as men do. This allows them to judge their own relative value, with respect to their peer group, and match this value with the value of a prospective mate. To clarify these issues we asked 40 male and 40 female undergraduates to rate a set of pictures of real women (50 in front-view and 50 in profile) for attractiveness. BMI was the primary predictor of attractiveness in both front and profile, and the putative visual cues to BMI showed a higher degree of view-invariance than shape cues such as the waist-hip ratio (WHR). Consistent with mate selection theory, there were no significant differences in the rating of attractiveness by male and female raters.", "title": "" }, { "docid": "neg:1840058_6", "text": "Online social networks may be important avenues for building and maintaining social capital as adult’s age. However, few studies have explicitly examined the role online communities play in the lives of seniors. In this exploratory study, U.S. seniors were interviewed to assess the impact of Facebook on social capital. 
Interpretive thematic analysis reveals Facebook facilitates connections to loved ones and may indirectly facilitate bonding social capital. Awareness generated via Facebook often lead to the sharing and receipt of emotional support via other channels. As such, Facebook acted as a catalyst for increasing social capital. The implication of “awareness” as a new dimension of social capital theory is discussed. Additionally, Facebook was found to have potential negative impacts on seniors’ current relationships due to open access to personal information. Finally, common concerns related to privacy, comfort with technology, and inappropriate content were revealed.", "title": "" }, { "docid": "neg:1840058_7", "text": "We propose a novel and robust computational framework for automatic detection of deformed 2D wallpaper patterns in real-world images. The theory of 2D crystallographic groups provides a sound and natural correspondence between the underlying lattice of a deformed wallpaper pattern and a degree-4 graphical model. We start the discovery process with unsupervised clustering of interest points and voting for consistent lattice unit proposals. The proposed lattice basis vectors and pattern element contribute to the pairwise compatibility and joint compatibility (observation model) functions in a Markov random field (MRF). Thus, we formulate the 2D lattice detection as a spatial, multitarget tracking problem, solved within an MRF framework using a novel and efficient mean-shift belief propagation (MSBP) method. Iterative detection and growth of the deformed lattice are interleaved with regularized thin-plate spline (TPS) warping, which rectifies the current deformed lattice into a regular one to ensure stability of the MRF model in the next round of lattice recovery. We provide quantitative comparisons of our proposed method with existing algorithms on a diverse set of 261 real-world photos to demonstrate significant advances in accuracy and speed over the state of the art in automatic discovery of regularity in real images.", "title": "" }, { "docid": "neg:1840058_8", "text": "Spatial and temporal contextual information plays a key role for analyzing user behaviors, and is helpful for predicting where he or she will go next. With the growing ability of collecting information, more and more temporal and spatial contextual information is collected in systems, and the location prediction problem becomes crucial and feasible. Some works have been proposed to address this problem, but they all have their limitations. Factorizing Personalized Markov Chain (FPMC) is constructed based on a strong independence assumption among different factors, which limits its performance. Tensor Factorization (TF) faces the cold start problem in predicting future actions. Recurrent Neural Networks (RNN) model shows promising performance comparing with PFMC and TF, but all these methods have problem in modeling continuous time interval and geographical distance. In this paper, we extend RNN and propose a novel method called Spatial Temporal Recurrent Neural Networks (ST-RNN). ST-RNN can model local temporal and spatial contexts in each layer with time-specific transition matrices for different time intervals and distance-specific transition matrices for different geographical distances. 
Experimental results show that the proposed ST-RNN model yields significant improvements over the competitive compared methods on two typical datasets, i.e., Global Terrorism Database (GTD) and Gowalla dataset.", "title": "" }, { "docid": "neg:1840058_9", "text": "This article presents some key achievements and recommendations from the IoT6 European research project on IPv6 exploitation for the Internet of Things (IoT). It highlights the potential of IPv6 to support the integration of a global IoT deployment including legacy systems by overcoming horizontal fragmentation as well as more direct vertical integration between communicating devices and the cloud.", "title": "" }, { "docid": "neg:1840058_10", "text": "By adopting the distributed problem-solving strategy, swarm intelligence algorithms have been successfully applied to many optimization problems that are difficult to deal with using traditional methods. At present, there are many well-implemented algorithms, such as particle swarm optimization, genetic algorithm, artificial bee colony algorithm, and ant colony optimization. These algorithms have already shown favorable performances. However, with the objects becoming increasingly complex, it is becoming gradually more difficult for these algorithms to meet human’s demand in terms of accuracy and time. Designing a new algorithm to seek better solutions for optimization problems is becoming increasingly essential. Dolphins have many noteworthy biological characteristics and living habits such as echolocation, information exchanges, cooperation, and division of labor. Combining these biological characteristics and living habits with swarm intelligence and bringing them into optimization problems, we propose a brand new algorithm named the ‘dolphin swarm algorithm’ in this paper. We also provide the definitions of the algorithm and specific descriptions of the four pivotal phases in the algorithm, which are the search phase, call phase, reception phase, and predation phase. Ten benchmark functions with different properties are tested using the dolphin swarm algorithm, particle swarm optimization, genetic algorithm, and artificial bee colony algorithm. The convergence rates and benchmark function results of these four algorithms are compared to testify the effect of the dolphin swarm algorithm. The results show that in most cases, the dolphin swarm algorithm performs better. The dolphin swarm algorithm possesses some great features, such as first-slow-then-fast convergence, periodic convergence, local-optimum-free, and no specific demand on benchmark functions. Moreover, the dolphin swarm algorithm is particularly appropriate to optimization problems, with more calls of fitness functions and fewer individuals.", "title": "" }, { "docid": "neg:1840058_11", "text": "Next generation deep neural networks for classification hosted on embedded platforms will rely on fast, efficient, and accurate learning algorithms. Initialization of weights in learning networks has a great impact on the classification accuracy. In this paper we focus on deriving good initial weights by modeling the error function of a deep neural network as a high-dimensional landscape. We observe that due to the inherent complexity in its algebraic structure, such an error function may conform to general results of the statistics of large systems. To this end we apply some results from Random Matrix Theory to analyse these functions. 
We model the error function in terms of a Hamiltonian in N-dimensions and derive some theoretical results about its general behavior. These results are further used to make better initial guesses of weights for the learning algorithm.", "title": "" }, { "docid": "neg:1840058_12", "text": "This paper focuses on running scans in a main memory data processing system at \"bare metal\" speed. Essentially, this means that the system must aim to process data at or near the speed of the processor (the fastest component in most system configurations). Scans are common in main memory data processing environments, and with the state-of-the-art techniques it still takes many cycles per input tuple to apply simple predicates on a single column of a table. In this paper, we propose a technique called BitWeaving that exploits the parallelism available at the bit level in modern processors. BitWeaving operates on multiple bits of data in a single cycle, processing bits from different columns in each cycle. Thus, bits from a batch of tuples are processed in each cycle, allowing BitWeaving to drop the cycles per column to below one in some case. BitWeaving comes in two flavors: BitWeaving/V which looks like a columnar organization but at the bit level, and BitWeaving/H which packs bits horizontally. In this paper we also develop the arithmetic framework that is needed to evaluate predicates using these BitWeaving organizations. Our experimental results show that both these methods produce significant performance benefits over the existing state-of-the-art methods, and in some cases produce over an order of magnitude in performance improvement.", "title": "" }, { "docid": "neg:1840058_13", "text": "People who design, use, and are affected by autonomous artificially intelligent agents want to be able to trust such agents—that is, to know that these agents will perform correctly, to understand the reasoning behind their actions, and to know how to use them appropriately. Many techniques have been devised to assess and influence human trust in artificially intelligent agents. However, these approaches are typically ad hoc and have not been formally related to each other or to formal trust models. This article presents a survey of algorithmic assurances, i.e., programmed components of agent operation that are expressly designed to calibrate user trust in artificially intelligent agents. Algorithmic assurances are first formally defined and classified from the perspective of formally modeled human-artificially intelligent agent trust relationships. Building on these definitions, a synthesis of research across communities such as machine learning, human-computer interaction, robotics, e-commerce, and others reveals that assurance algorithms naturally fall along a spectrum in terms of their impact on an agent’s core functionality, with seven notable classes ranging from integral assurances (which impact an agent’s core functionality) to supplemental assurances (which have no direct effect on agent performance). Common approaches within each of these classes are identified and discussed; benefits and drawbacks of different approaches are also investigated.", "title": "" }, { "docid": "neg:1840058_14", "text": "Despite the theoretical and demonstrated empirical significance of parental coping strategies for the wellbeing of families of children with disabilities, relatively little research has focused explicitly on coping in mothers and fathers of children with autism. 
In the present study, 89 parents of preschool children and 46 parents of school-age children completed a measure of the strategies they used to cope with the stresses of raising their child with autism. Factor analysis revealed four reliable coping dimensions: active avoidance coping, problem-focused coping, positive coping, and religious/denial coping. Further data analysis suggested gender differences on the first two of these dimensions but no reliable evidence that parental coping varied with the age of the child with autism. Associations were also found between coping strategies and parental stress and mental health. Practical implications are considered, including reducing reliance on avoidance coping and increasing the use of positive coping strategies.", "title": "" }, { "docid": "neg:1840058_15", "text": "Fulfilling the requirements of point-of-care testing (POCT) training regarding proper execution of measurements and compliance with internal and external quality control specifications is a great challenge. Our aim was to compare the values of the highly critical parameter hemoglobin (Hb) determined with POCT devices and with a central laboratory analyzer in the highly vulnerable setting of an emergency department in a supra maximal care hospital to assess the quality of POCT performance. In 2548 patients, Hb measurements using POCT devices (POCT-Hb) were compared with Hb measurements performed at the central laboratory (Hb-ZL). Additionally, subcollectives (WHO anemia classification, patients with Hb <8 g/dl, and suprageriatric patients aged >85 y) were analyzed. Overall, the correlation between POCT-Hb and Hb-ZL was highly significant (r = 0.96, p<0.001). Mean difference was -0.44g/dl. POCT-Hb values tended to be higher than Hb-ZL values (t(2547) = 36.1, p<0.001). Standard deviation of the differences was 0.62 g/dl. Only in 26 patients (1%), absolute differences >2.5g/dl occurred. McNemar's test revealed significant differences regarding anemia diagnosis according to the WHO definition for male, female and total patients (♂ p<0.001; ♀ p<0.001, total p<0.001). Hb-ZL resulted significantly more often in anemia diagnosis. In samples with Hb<8g/dl, McNemar's test yielded no significant difference (p = 0.169). In suprageriatric patients, McNemar's test revealed significant differences regarding anemia diagnosis according to the WHO definition in male, female and total patients (♂ p<0.01; ♀ p = 0.002, total p<0.001). The difference between Hb-ZL and POCT-Hb with Hb<8g/dl was not statistically significant (<8g/dl, p = 1.000). Overall, we found a highly significant correlation between the analyzed hemoglobin concentration measurement methods, i.e., POCT devices and the central laboratory. The results confirm the successful implementation of the presented POCT concept. Nevertheless, some limitations could be identified in anemic patients, stressing the importance of carefully examining clinically implausible results.", "title": "" }, { "docid": "neg:1840058_16", "text": "Distributed Denial of Service (DDoS) is defined as an attack in which multiple compromised systems are made to attack a single target to make the services unavailable for legitimate users. It is an attack designed to render a computer or network incapable of providing normal services. A DDoS attack uses many compromised intermediate systems, known as botnets, which are remotely controlled by an attacker to launch these attacks. 
A DDoS attack basically results in a situation where an entity cannot perform an action for which it is authenticated. This usually means that a legitimate node on the network is unable to reach another node or that its performance is degraded. The severe interruption and disruption caused by DDoS pose an immense threat to the entire internet world today. Any compromise to computing, communication and server resources such as sockets, CPU, memory, disk/database bandwidth, I/O bandwidth, router processing, etc. in a collaborative environment would surely endanger the entire application. It becomes necessary for researchers and developers to understand the behaviour of DDoS attacks because they affect the target network with little or no advance warning. Hence, developing advanced intrusion detection and prevention systems for preventing, detecting, and responding to DDoS attacks is a critical need for cyberspace. The rigorous survey presented in this paper describes a platform for the study of the evolution of DDoS attacks and their defense mechanisms.", "title": "" }, { "docid": "neg:1840058_17", "text": "The network performance of virtual machines plays a critical role in Network Functions Virtualization (NFV), and several technologies have been developed to address hardware-level virtualization shortcomings. Recent advances in operating system level virtualization and deployment platforms such as Docker have made containers an ideal candidate for high performance application encapsulation and deployment. However, Docker and other solutions typically use lower-performing networking mechanisms. In this paper, we explore the feasibility of using technologies designed to accelerate virtual machine networking with containers, in addition to quantifying the network performance of container-based VNFs compared to the state-of-the-art virtual machine solutions. Our results show that containerized applications can provide lower latency and delay variation, and can take advantage of high performance networking technologies previously only used for hardware virtualization.", "title": "" }, { "docid": "neg:1840058_18", "text": "This paper presents a novel approach to face recognition based on enhanced local directional patterns (ELDP), which adopts local edge gradient information to represent face images. Specifically, each pixel of every facial image sub-block gains eight edge response values by convolving the local 3×3 neighborhood with eight Kirsch masks, respectively. ELDP utilizes the directions of the most prominent edge responses, encoded into a double-digit octal number, to produce the ELDP codes. The ELDP dominant patterns (ELDP) are generated by statistical analysis according to the occurrence rates of the ELDP codes in a mass of facial images. Finally, the face descriptor is represented by using the global concatenated histogram based on ELDP or ELDP extracted from the face image which is divided into several sub-regions. The performance of several single face descriptors, rather than integrated schemes, is evaluated in face recognition under different challenges via several experiments. The experimental results demonstrate that the proposed method is more robust to non-monotonic illumination changes and slight noise without any filter.", "title": "" }, { "docid": "neg:1840058_19", "text": "The extent to which mental health consumers encounter stigma in their daily lives is a matter of substantial importance for their recovery and quality of life. 
This article summarizes the results of a nationwide survey of 1,301 mental health consumers concerning their experience of stigma and discrimination. Survey results and follow-up interviews with 100 respondents revealed experiences of stigma from a variety of sources, including communities, families, churches, coworkers, and mental health caregivers. The majority of respondents tended to try to conceal their disorders and worried a great deal that others would find out about their psychiatric status and treat them unfavorably. They reported discouragement, hurt, anger, and lowered self-esteem as results of their experiences, and they urged public education as a means for reducing stigma. Some reported that involvement in advocacy and speaking out when stigma and discrimination were encountered helped them to cope with stigma. Limitations to generalization of the results include the self-selection and relatively high functioning of participants, and respondent connections to a specific advocacy organization, the National Alliance for the Mentally Ill.", "title": "" } ]
1840059
End-to-End Training of Hybrid CNN-CRF Models for Stereo
[ { "docid": "pos:1840059_0", "text": "Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.", "title": "" }, { "docid": "pos:1840059_1", "text": "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.", "title": "" }, { "docid": "pos:1840059_2", "text": "This paper presents a data-driven matching cost for stereo matching. A novel deep visual correspondence embedding model is trained via Convolutional Neural Network on a large set of stereo images with ground truth disparities. This deep embedding model leverages appearance data to learn visual similarity relationships between corresponding image patches, and explicitly maps intensity values into an embedding feature space to measure pixel dissimilarities. Experimental results on KITTI and Middlebury data sets demonstrate the effectiveness of our model. First, we prove that the new measure of pixel dissimilarity outperforms traditional matching costs. Furthermore, when integrated with a global stereo framework, our method ranks top 3 among all two-frame algorithms on the KITTI benchmark. Finally, cross-validation results show that our model is able to make correct predictions for unseen data which are outside of its labeled training set.", "title": "" } ]
[ { "docid": "neg:1840059_0", "text": "A number of recent advances have been achieved in the study of midbrain dopaminergic neurons. Understanding these advances and how they relate to one another requires a deep understanding of the computational models that serve as an explanatory framework and guide ongoing experimental inquiry. This intertwining of theory and experiment now suggests very clearly that the phasic activity of the midbrain dopamine neurons provides a global mechanism for synaptic modification. These synaptic modifications, in turn, provide the mechanistic underpinning for a specific class of reinforcement learning mechanisms that now seem to underlie much of human and animal behavior. This review describes both the critical empirical findings that are at the root of this conclusion and the fantastic theoretical advances from which this conclusion is drawn.", "title": "" }, { "docid": "neg:1840059_1", "text": "Migraine is a debilitating neurological disorder that affects about 12% of the population. In the past decade, the role of the neuropeptide calcitonin gene-related peptide (CGRP) in migraine has been firmly established by clinical studies. CGRP administration can trigger migraines, and CGRP receptor antagonists ameliorate migraine. In this review, we will describe multifunctional activities of CGRP that could potentially contribute to migraine. These include roles in light aversion, neurogenic inflammation, peripheral and central sensitization of nociceptive pathways, cortical spreading depression, and regulation of nitric oxide production. Yet clearly there will be many other contributing genes that could act in concert with CGRP. One candidate is pituitary adenylate cyclase-activating peptide (PACAP), which shares some of the same actions as CGRP, including the ability to induce migraine in migraineurs and light aversive behavior in rodents. Interestingly, both CGRP and PACAP act on receptors that share an accessory subunit called receptor activity modifying protein-1 (RAMP1). Thus, comparisons between the actions of these two migraine-inducing neuropeptides, CGRP and PACAP, may provide new insights into migraine pathophysiology.", "title": "" }, { "docid": "neg:1840059_2", "text": "Android OS has experienced blazing popularity over the last few years. This predominant platform has established itself not only in the mobile world but also in Internet of Things (IoT) devices. This popularity, however, comes at the expense of security, as it has become a tempting target of malicious apps. Hence, there is an increasing need for sophisticated, automatic, and portable malware detection solutions. In this paper, we propose MalDozer, an automatic Android malware detection and family attribution framework that relies on sequence classification using deep learning techniques. Starting from the raw sequence of the app's API method calls, MalDozer automatically extracts and learns the malicious and the benign patterns from the actual samples to detect Android malware. MalDozer can serve as a ubiquitous malware detection system that is not only deployed on servers, but also on mobile and even IoT devices. We evaluate MalDozer on multiple Android malware datasets ranging from 1K to 33K malware apps, and 38K benign apps. The results show that MalDozer can correctly detect malware and attribute them to their actual families with an F1-Score of 96%-99% and a false positive rate of 0.06%-2%, under all tested datasets and settings. 
", "title": "" }, { "docid": "neg:1840059_3", "text": "In this paper, proactive resource allocation based on user location for point-to-point communication over fading channels is introduced, whereby the source must transmit a packet when the user requests it within a deadline of a single time slot. We introduce a prediction model in which the source predicts the request arrival $T_p$ slots ahead, where $T_p$ denotes the prediction window (PW) size. The source allocates energy to transmit some bits proactively for each time slot of the PW with the objective of reducing the transmission energy over the non-predictive case. The requests are predicted based on the user location utilizing the prior statistics about the user requests at each location. We also assume that the prediction is not perfect. We propose proactive scheduling policies to minimize the expected energy consumption required to transmit the requested packets under two different assumptions on the channel state information at the source. In the first scenario, offline scheduling, we assume the channel states are known a-priori at the source at the beginning of the PW. In the second scenario, online scheduling, it is assumed that the source has causal knowledge of the channel state. Numerical results are presented showing the gains achieved by using proactive scheduling policies compared with classical (reactive) networks. Simulation results also show that increasing the PW size leads to a significant reduction in the consumed transmission energy even with imperfect prediction.", "title": "" }, { "docid": "neg:1840059_4", "text": "The development of pharmacotherapies for cocaine addiction has been disappointingly slow. However, new neurobiological knowledge of how the brain is changed by chronic pharmacological insult with cocaine is revealing novel targets for drug development. Certain drugs currently being tested in clinical trials tap into the underlying cocaine-induced neuroplasticity, including drugs promoting GABA or inhibiting glutamate transmission. Armed with rationales derived from a neurobiological perspective that cocaine addiction is a pharmacologically induced disease of neuroplasticity in brain circuits mediating normal reward learning, one can expect novel pharmacotherapies to emerge that directly target the biological pathology of addiction.", "title": "" }, { "docid": "neg:1840059_5", "text": "From a theoretical perspective, most discussions of statistical learning (SL) have focused on the possible \"statistical\" properties that are the object of learning. Much less attention has been given to defining what \"learning\" is in the context of \"statistical learning.\" One major difficulty is that SL research has been monitoring participants' performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative-forced-choice questions, which follow a brief visual or auditory familiarization stream. Is that all there is to characterizing SL abilities? Here we adopt a novel perspective for investigating the processing of regularities in the visual modality. By tracking online performance in a self-paced SL paradigm, we focus on the trajectory of learning. 
In a set of three experiments we show that this paradigm provides a reliable and valid signature of SL performance, and it offers important insights for understanding how statistical regularities are perceived and assimilated in the visual modality. This demonstrates the promise of integrating different operational measures into our theory of SL.", "title": "" }, { "docid": "neg:1840059_6", "text": "This work characterizes electromagnetic excitation forces in interior permanent-magnet (IPM) brushless direct current (BLDC) motors and investigates their effects on noise and vibration. First, the electromagnetic excitations are classified into three sources: 1) so-called cogging torque, for which we propose an efficient technique of computation that takes into account saturation effects as a function of rotor position; 2) ripples of mutual and reluctance torque, for which we develop an equation to characterize the combination of space harmonics of inductances and flux linkages related to permanent magnets and time harmonics of current; and 3) fluctuation of attractive forces in the radial direction between the stator and rotor, for which we analyze contributions of electric currents as well as permanent magnets by the finite-element method. Then, the paper reports on an experimental investigation of influences of structural dynamic characteristics such as natural frequencies and mode shapes, as well as electromagnetic excitation forces, on noise and vibration in an IPM motor used in washing machines.", "title": "" }, { "docid": "neg:1840059_7", "text": "With the rapid growth of the internet and the spread of the information contained therein, the volume of information available on the web exceeds users' ability to manage, capture and keep the information up to date. One solution to this problem is personalization and recommender systems. Recommender systems use the comments of a group of users to help people in that group identify their favorite items more effectively from a huge set of choices. In recent years, the web has seen very strong growth in the use of blogs. Considering the high volume of information in blogs, bloggers have trouble finding the desired information and finding blogs with similar thoughts and desires. Therefore, considering the mass of information in blogs, a blog recommender system seems to be necessary. In this paper, by combining different methods of clustering and collaborative filtering, a personalized recommender system for Persian blogs is proposed.", "title": "" }, { "docid": "neg:1840059_8", "text": "As mobile instant messaging has become a major means of communication with the widespread use of smartphones, emoticons, symbols that are meant to indicate particular emotions in instant messages, have also developed into various forms. The primary purpose of this study is to classify the usage patterns of emoticons focusing on a particular variant known as \"stickers\" to observe individual and social characteristics of emoticon use and reinterpret the meaning of emoticons in instant messages. A qualitative approach with an in-depth semi-structured interview was used to uncover the motives for using emoticon stickers. The study suggests that besides using emoticon stickers for expressing emotions, users may have other motives: strategic and functional purposes.", "title": "" }, { "docid": "neg:1840059_9", "text": "MATLAB is specifically designed for simulating dynamic systems. 
This paper describes a method of modelling an impulse voltage generator using Simulink, an extension of MATLAB. The equations for modelling have been developed and a corresponding Simulink model has been constructed. It shows that the Simulink program is very useful for studying the effect of parameter changes in the design to obtain the desired impulse voltages and waveshapes from an impulse generator.", "title": "" }, { "docid": "neg:1840059_10", "text": "3D mesh segmentation has become a crucial part of many applications in 3D shape analysis. In this paper, a comprehensive survey on 3D mesh segmentation methods is presented. Analysis of the existing methodologies is addressed taking into account a new categorization along with the performance evaluation frameworks which aim to support meaningful benchmarks not only qualitatively but also in a quantitative manner. This survey aims to capture the essence of current trends in 3D mesh segmentation.", "title": "" }, { "docid": "neg:1840059_11", "text": "One of the most common tasks in medical imaging is semantic segmentation. Achieving this segmentation automatically has been an active area of research, but the task has been proven very challenging due to the large variation of anatomy across different patients. However, recent advances in deep learning have made it possible to significantly improve the performance of image recognition and semantic segmentation methods in the field of computer vision. Due to the data driven approaches of hierarchical feature learning in deep learning frameworks, these advances can be translated to medical images without much difficulty. Several variations of deep convolutional neural networks have been successfully applied to medical images. Especially fully convolutional architectures have been proven efficient for segmentation of 3D medical images. In this article, we describe how to build a 3D fully convolutional network (FCN) that can process 3D images in order to produce automatic semantic segmentations. The model is trained and evaluated on a clinical computed tomography (CT) dataset and shows state-of-the-art performance in multi-organ segmentation.", "title": "" }, { "docid": "neg:1840059_12", "text": "Today, with the increasing competitiveness of industries, markets, and working atmospheres in productive and service organizations, what is most important for retaining present clients, attracting new clients, and thereby increasing organizational success is having a suitable relationship with clients. Banks are no exception. Especially now, given the increasing rate of bank privatization, it can be argued that attracting clients is more significant for banks than ever before. The article investigates the effect of CRM on marketing performance in the banking industry. The research method is applied, descriptive, and survey-based. The statistical population of the research consists of 5 branches of Mellat Bank across Khoramabad Province and their clients. There are 45 personnel in these branches, and according to the Morgan table the sample size was 40 people. The client sample was selected according to the collected information; one questionnaire was designed for the bank organization and another was prepared for the banks' clients, and the reliability and validity of both were confirmed. 
The research results indicate that CRM is ineffective with regard to marketing performance.", "title": "" }, { "docid": "neg:1840059_13", "text": "A multi-database model of distributed information retrieval is presented, in which people are assumed to have access to many searchable text databases. In such an environment, full-text information retrieval consists of discovering database contents, ranking databases by their expected ability to satisfy the query, searching a small number of databases, and merging results returned by different databases. This paper presents algorithms for each task. It also discusses how to reorganize conventional test collections into multi-database testbeds, and evaluation methodologies for multi-database experiments. A broad and diverse group of experimental results is presented to demonstrate that the algorithms are effective, efficient, robust and scalable.", "title": "" }, { "docid": "neg:1840059_14", "text": "Tweet streams provide a variety of real-life and real-time information on social events that dynamically change over time. Although social event detection has been actively studied, how to efficiently monitor evolving events from continuous tweet streams remains open and challenging. One common approach for event detection from text streams is to use single-pass incremental clustering. However, this approach does not track the evolution of events, nor does it address the issue of efficient monitoring in the presence of a large number of events. In this paper, we capture the dynamics of events using four event operations (create, absorb, split, and merge), which can be effectively used to monitor evolving events. Moreover, we propose a novel event indexing structure, called Multi-layer Inverted List (MIL), to manage dynamic event databases for the acceleration of large-scale event search and update. We thoroughly study the problem of nearest neighbour search using MIL based on upper bound pruning, along with incremental index maintenance. Extensive experiments have been conducted on a large-scale real-life tweet dataset. The results demonstrate the promising performance of our event indexing and monitoring methods on both efficiency and effectiveness.", "title": "" }, { "docid": "neg:1840059_15", "text": "The analysis of the topology and organization of brain networks is known to greatly benefit from network measures in graph theory. However, to evaluate dynamic changes of brain functional connectivity, more sophisticated quantitative metrics characterizing the temporal evolution of brain topological features are required. Simplifying time-varying brain connectivity to a static graph representation is straightforward, but the procedure loses temporal information that could be critical in understanding brain function. To extend the understanding of functional segregation and integration to a dynamic setting, we recommend dynamic graph metrics to characterise temporal changes of topological features of brain networks. This study investigated functional segregation and integration of brain networks over time by dynamic graph metrics derived from EEG signals during an experimental protocol: performance of complex flight simulation tasks with multiple levels of difficulty. We modelled time-varying brain functional connectivity as multi-layer networks, in which each layer models brain connectivity at time window $t+\Delta t$. Dynamic graph metrics were calculated to quantify temporal and topological properties of the network. 
Results show that brain networks during the performance of complex tasks reveal a dynamic small-world architecture with a number of frequently connected nodes or hubs, which supports the balance of information segregation and integration in the brain over time. The results also show that greater cognitive workloads caused by more difficult tasks induced a more globally efficient but less clustered dynamic small-world functional network. Our study illustrates that task-related changes of functional brain network segregation and integration can be characterized by dynamic graph metrics.", "title": "" }, { "docid": "neg:1840059_16", "text": "EUCAST expert rules have been developed to assist clinical microbiologists and describe actions to be taken in response to specific antimicrobial susceptibility test results. They include recommendations on reporting, such as inferring susceptibility to other agents from results with one, suppression of results that may be inappropriate, and editing of results from susceptible to intermediate or resistant or from intermediate to resistant on the basis of an inferred resistance mechanism. They are based on current clinical and/or microbiological evidence. EUCAST expert rules also include intrinsic resistance phenotypes and exceptional resistance phenotypes, which have not yet been reported or are very rare. The applicability of EUCAST expert rules depends on the MIC breakpoints used to define the rules. Setting appropriate clinical breakpoints, based on treating patients and not on the detection of resistance mechanisms, may lead to modification of some expert rules in the future.", "title": "" }, { "docid": "neg:1840059_17", "text": "This paper reports findings from a study carried out in a remote rural area of Bangladesh during December 2000. Nineteen key informants were interviewed to collect data on domestic violence against women. Each key informant provided information about the 10 closest neighbouring ever-married women, covering a total of 190 women. The questionnaire included information about the frequency of physical violence, verbal abuse, and other relevant information, including background characteristics of the women and their husbands. 50.5% of the women were reported to be battered by their husbands and 2.1% by other family members. Beating by the husband was negatively related to the age of the husband: the odds of beating among women with husbands aged less than 30 years were six times those of women with husbands aged 50 years or more. Members of micro-credit societies also had higher odds of being beaten than non-members. The paper discusses the possibility of community-centred interventions by raising awareness about the violation of human rights issues and other legal and psychological consequences to prevent domestic violence against women.", "title": "" }, { "docid": "neg:1840059_18", "text": "Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance and interoperability. In this paper, we propose a novel clustered distributed controller architecture in the real setting of SDNs. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real world network topology running on top of an emulated SDN environment. 
The results show that the proposed distributed controller clustering mechanism is able to significantly reduce the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to a distributed controller without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers respectively. Moreover, the proposed method also shows reasonable CPU utilization results. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining a continuous network operation, even when there is a controller failure. The paper is a potential contribution stepping towards addressing the issues of reliability, scalability, fault tolerance, and interoperability.", "title": "" }, { "docid": "neg:1840059_19", "text": "In linear representation based face recognition (FR), it is expected that a discriminative dictionary can be learned from the training samples so that the query sample can be better represented for classification. On the other hand, dimensionality reduction is also an important issue for FR. It can not only significantly reduce the storage space of face images, but also enhance the discrimination of face features. Existing methods mostly perform dimensionality reduction and dictionary learning separately, which may not fully exploit the discriminative information in the training samples. In this paper, we propose to jointly learn the projection matrix for dimensionality reduction and the discriminative dictionary for face representation. The joint learning makes the learned projection and dictionary better fit with each other so that a more effective face classification can be obtained. The proposed algorithm is evaluated on benchmark face databases in comparison with existing linear representation based methods, and the results show that the joint learning improves the FR rate, particularly when the number of training samples per class is small.", "title": "" } ]
1840060
How Random Walks Can Help Tourism
[ { "docid": "pos:1840060_0", "text": "Social media such as those residing in the popular photo sharing websites is attracting increasing attention in recent years. As a type of user-generated data, wisdom of the crowd is embedded inside such social media. In particular, millions of users upload to Flickr their photos, many associated with temporal and geographical information. In this paper, we investigate how to rank the trajectory patterns mined from the uploaded photos with geotags and timestamps. The main objective is to reveal the collective wisdom recorded in the seemingly isolated photos and the individual travel sequences reflected by the geo-tagged photos. Instead of focusing on mining frequent trajectory patterns from geo-tagged social media, we put more effort into ranking the mined trajectory patterns and diversifying the ranking results. Through leveraging the relationships among users, locations and trajectories, we rank the trajectory patterns. We then use an exemplar-based algorithm to diversify the results in order to discover the representative trajectory patterns. We have evaluated the proposed framework on 12 different cities using a Flickr dataset and demonstrated its effectiveness.", "title": "" }, { "docid": "pos:1840060_1", "text": "Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods.", "title": "" } ]
[ { "docid": "neg:1840060_0", "text": "Recurrent Neural Networks (RNNs) play a major role in the field of sequential learning, and have outperformed traditional algorithms on many benchmarks. Training deep RNNs still remains a challenge, and most of the state-of-the-art models are structured with a transition depth of 2-4 layers. Recurrent Highway Networks (RHNs) were introduced in order to tackle this issue. These have achieved state-of-the-art performance on a few benchmarks using a depth of 10 layers. However, the performance of this architecture suffers from a bottleneck, and ceases to improve when an attempt is made to add more layers. In this work, we analyze the causes for this, and postulate that the main source is the way that the information flows through time. We introduce a novel and simple variation for the RHN cell, called Highway State Gating (HSG), which allows adding more layers, while continuing to improve performance. By using a gating mechanism for the state, we allow the net to ”choose” whether to pass information directly through time, or to gate it. This mechanism also allows the gradient to back-propagate directly through time and, therefore, results in a slightly faster convergence. We use the Penn Treebank (PTB) dataset as a platform for empirical proof of concept. Empirical results show that the improvement due to Highway State Gating is for all depths, and as the depth increases, the improvement also increases.", "title": "" }, { "docid": "neg:1840060_1", "text": "A significant number of promising applications for vehicular ad hoc networks (VANETs) are becoming a reality. Most of these applications require a variety of heterogenous content to be delivered to vehicles and to their on-board users. However, the task of content delivery in such dynamic and large-scale networks is easier said than done. In this article, we propose a classification of content delivery solutions applied to VANETs while highlighting their new characteristics and describing their underlying architectural design. First, the two fundamental building blocks that are part of an entire content delivery system are identified: replica allocation and content delivery. The related solutions are then classified according to their architectural definition. Within each category, solutions are described based on the techniques and strategies that have been adopted. As result, we present an in-depth discussion on the architecture, techniques, and strategies adopted by studies in the literature that tackle problems related to vehicular content delivery networks.", "title": "" }, { "docid": "neg:1840060_2", "text": "We undertook a meta-analysis of six Crohn's disease genome-wide association studies (GWAS) comprising 6,333 affected individuals (cases) and 15,056 controls and followed up the top association signals in 15,694 cases, 14,026 controls and 414 parent-offspring trios. We identified 30 new susceptibility loci meeting genome-wide significance (P < 5 × 10−8). A series of in silico analyses highlighted particular genes within these loci and, together with manual curation, implicated functionally interesting candidate genes including SMAD3, ERAP2, IL10, IL2RA, TYK2, FUT2, DNMT3A, DENND1B, BACH2 and TAGAP. 
Combined with previously confirmed loci, these results identify 71 distinct loci with genome-wide significant evidence for association with Crohn's disease.", "title": "" }, { "docid": "neg:1840060_3", "text": "We present a named entity recognition and classification system that uses only probabilistic character-level features. Classifications by multiple orthographic tries are combined in a hidden Markov model framework to incorporate both internal and contextual evidence. As part of the system, we perform a preprocessing stage in which capitalisation is restored to sentence-initial and all-caps words with high accuracy. We report f-values of 86.65 and 79.78 for English, and 50.62 and 54.43 for the German datasets.", "title": "" }, { "docid": "neg:1840060_4", "text": "Multi-choice reading comprehension is a challenging task, which involves the matching between a passage and a question-answer pair. This paper proposes a new co-matching approach to this problem, which jointly models whether a passage can match both a question and a candidate answer. Experimental results on the RACE dataset demonstrate that our approach achieves state-of-the-art performance.", "title": "" }, { "docid": "neg:1840060_5", "text": "This paper presents an implementation method for a people counting system which detects and tracks moving people using a fixed single camera. The main contribution of this paper is a novel head detection method based on the body’s geometry. A novel body descriptor, defined as the Body Feature Rectangle (BFR), is proposed for finding people’s heads. First, a vertical projection method is used to get the line which divides touching persons into individuals. Second, a special inscribed rectangle is found to locate the neck position which describes the torso area. Third, the locations of people’s heads can be obtained according to their neck positions. Last, a robust counting method named MEA is proposed to get the real counts of walking people flows. The proposed method can divide a multiple-people image into individuals whether or not people merge with each other. Moreover, passing people can be counted accurately even when they are wearing hats. Experimental results show that our proposed method can reach an accuracy of nearly 100% if the number of people in a merging pattern is less than six. Keywords: People Counting; Head Detection; BFR; People-flow Tracking", "title": "" }, { "docid": "neg:1840060_6", "text": "Vestibular migraine is a chameleon among the episodic vertigo syndromes because considerable variation characterizes its clinical manifestation. The attacks may last from seconds to days. About one-third of patients present with monosymptomatic attacks of vertigo or dizziness without headache or other migrainous symptoms. During attacks most patients show spontaneous or positional nystagmus and in the attack-free interval minor ocular motor and vestibular deficits. Women are significantly more often affected than men. Symptoms may begin at any time in life, with the highest prevalence in young adults and between the ages of 60 and 70. Over the last 10 years vestibular migraine has evolved into a medical entity in dizziness units. It is the most common cause of spontaneous recurrent episodic vertigo and accounts for approximately 10% of patients with vertigo and dizziness. Its broad spectrum poses a diagnostic problem of how to rule out Menière's disease or vestibular paroxysmia. 
Vestibular migraine should be included in the International Headache Classification of Headache Disorders (ICHD) as a subcategory of migraine. It should, however, be kept separate and distinct from basilar-type migraine and benign paroxysmal vertigo of childhood. We prefer the term \"vestibular migraine\" to \"migrainous vertigo,\" because the latter may also refer to various vestibular and non-vestibular symptoms. Antimigrainous medication to treat the single attack and to prevent recurring attacks appears to be effective, but the published evidence is weak. A randomized, double-blind, placebo-controlled study is required to evaluate medical treatment of this condition.", "title": "" }, { "docid": "neg:1840060_7", "text": "In this paper, the traditional k-modes clustering algorithm is extended by weighting attribute value matches in dissimilarity computation. The use of attribute value weighting technique makes it possible to generate clusters with stronger intra-similarities, and therefore achieve better clustering performance. Experimental results on real life datasets show that these value weighting based k-modes algorithms are superior to the standard k-modes algorithm with respect to clustering accuracy.", "title": "" }, { "docid": "neg:1840060_8", "text": "Carefully managing the presentation of self via technology is a core practice on all modern social media platforms. Recently, selfies have emerged as a new, pervasive genre of identity performance. In many ways unique, selfies bring us fullcircle to Goffman—blending the online and offline selves together. In this paper, we take an empirical, Goffman-inspired look at the phenomenon of selfies. We report a large-scale, mixed-method analysis of the categories in which selfies appear on Instagram—an online community comprising over 400M people. Applying computer vision and network analysis techniques to 2.5M selfies, we present a typology of emergent selfie categories which represent emphasized identity statements. To the best of our knowledge, this is the first large-scale, empirical research on selfies. We conclude, contrary to common portrayals in the press, that selfies are really quite ordinary: they project identity signals such as wealth, health and physical attractiveness common to many online media, and to offline life.", "title": "" }, { "docid": "neg:1840060_9", "text": "AIM\nTo identify key predictors and moderators of mental health 'help-seeking behavior' in adolescents.\n\n\nBACKGROUND\nMental illness is highly prevalent in adolescents and young adults; however, individuals in this demographic group are among the least likely to seek help for such illnesses. Very little quantitative research has examined predictors of help-seeking behaviour in this demographic group.\n\n\nDESIGN\nA cross-sectional design was used.\n\n\nMETHODS\nA group of 180 volunteers between the ages of 17-25 completed a survey designed to measure hypothesized predictors and moderators of help-seeking behaviour. Predictors included a range of health beliefs, personality traits and attitudes. Data were collected in August 2010 and were analysed using two standard and three hierarchical multiple regression analyses.\n\n\nFINDINGS\nThe standard multiple regression analyses revealed that extraversion, perceived benefits of seeking help, perceived barriers to seeking help and social support were direct predictors of help-seeking behaviour. 
Tests of moderated relationships (using hierarchical multiple regression analyses) indicated that perceived benefits were more important than barriers in predicting help-seeking behaviour. In addition, perceived susceptibility did not predict help-seeking behaviour unless individuals were health conscious to begin with or they believed that they would benefit from help.\n\n\nCONCLUSION\nA range of personality traits, attitudes and health beliefs can predict help-seeking behaviour for mental health problems in adolescents. The variable 'Perceived Benefits' is of particular importance as it is: (1) a strong and robust predictor of help-seeking behaviour; and (2) a factor that can theoretically be modified based on health promotion programmes.", "title": "" }, { "docid": "neg:1840060_10", "text": "Traditionally, visualization design assumes that the effectiveness of visualizations is based on how much, and how clearly, data are presented. We argue that visualization requires a more nuanced perspective. Data are not ends in themselves, but means to an end (such as generating knowledge or assisting in decision-making). Focusing on the presentation of data per se can result in situations where these higher goals are ignored. This is especially the case for situations where cognitive or perceptual biases make the presentation of "just" the data as misleading as willful distortion. We argue that we need to de-sanctify data, and occasionally promote designs which distort or obscure data in service of understanding. We discuss examples of beneficial embellishment, distortion, and obfuscation in visualization, and argue that these examples are representative of a wider class of techniques for going beyond simplistic presentations of data.", "title": "" }, { "docid": "neg:1840060_11", "text": "Fast and reliable face and facial feature detection are required abilities for any Human Computer Interaction approach based on Computer Vision. Since the publication of the Viola-Jones object detection framework and the more recent open source implementation, an increasing number of applications have appeared, particularly in the context of facial processing. In this respect, the OpenCV community shares a collection of public domain classifiers for this scenario. However, as far as we know these classifiers have never been evaluated and/or compared. In this paper we analyze the individual performance of all those public classifiers getting the best performance for each target. These results are valid to define a baseline for future approaches. Additionally we propose a simple hierarchical combination of those classifiers to increase the facial feature detection rate while reducing the face false detection rate.", "title": "" }, { "docid": "neg:1840060_12", "text": "The design and development of process-aware information systems is often supported by specifying requirements as business process models. Although this approach is generally accepted as an effective strategy, it remains a fundamental challenge to adequately validate these models given the diverging skill set of domain experts and system analysts. As domain experts often do not feel confident in judging the correctness and completeness of process models that system analysts create, the validation often has to regress to a discourse using natural language. In order to support such a discourse appropriately, so-called verbalization techniques have been defined for different types of conceptual models. 
However, there is currently no sophisticated technique available that is capable of generating natural-looking text from process models. In this paper, we address this research gap and propose a technique for generating natural language texts from business process models. A comparison with manually created process descriptions demonstrates that the generated texts are superior in terms of completeness, structure, and linguistic complexity. An evaluation with users further demonstrates that the texts are very understandable and effectively allow the reader to infer the process model semantics. Hence, the generated texts represent a useful input for process model validation.", "title": "" }, { "docid": "neg:1840060_13", "text": "We present results from a multi-generational study of collocated group console gaming. We examine the intergenerational gaming practices of four generations of gamers, from ages 3 to 83 and, in particular, the roles that gamers of different generations take on when playing together in groups. Our findings highlight the extent to which existing gaming technologies are amenable to interactions within collocated intergenerational groups and the broader set of roles that have emerged in these computer-mediated interactions than have previously been documented by studies of more traditional collocated, intergenerational interactions. We articulate attributes of the games that encourage intergenerational interaction.", "title": "" }, { "docid": "neg:1840060_14", "text": "Progress in signal processing continues to enable welcome advances in high-frequency (HF) radio performance and efficiency. The latest data waveforms use channels wider than 3 kHz to boost data throughput and robustness. This has driven the need for a more capable Automatic Link Establishment (ALE) system that links faster and adapts the wideband HF (WBHF) waveform to efficiently use available spectrum. In this paper, we investigate the possibility and advantages of using various non-scanning ALE techniques with the new wideband ALE (WALE) to further improve spectrum awareness and linking speed.", "title": "" }, { "docid": "neg:1840060_15", "text": "This paper proposes a comprehensive methodology for the design of a controllable electric vehicle charger capable of making the most of the interaction with an autonomous smart energy management system (EMS) in a residential setting. Autonomous EMSs aim achieving the potential benefits associated with energy exchanges between consumers and the grid, using bidirectional and power-controllable electric vehicle chargers. A suitable design for a controllable charger is presented, including the sizing of passive elements and controllers. This charger has been implemented using an experimental setup with a digital signal processor to validate its operation. The experimental results obtained foresee an adequate interaction between the proposed charger and a compatible autonomous EMS in a typical residential setting.", "title": "" }, { "docid": "neg:1840060_16", "text": "In this paper, we overview the 2009 Simulated Car Racing Championship-an event comprising three competitions held in association with the 2009 IEEE Congress on Evolutionary Computation (CEC), the 2009 ACM Genetic and Evolutionary Computation Conference (GECCO), and the 2009 IEEE Symposium on Computational Intelligence and Games (CIG). First, we describe the competition regulations and the software framework. 
Then, the five best teams describe the methods of computational intelligence they used to develop their drivers and the lessons they learned from the participation in the championship. The organizers provide short summaries of the other competitors. Finally, we summarize the championship results, followed by a discussion about what the organizers learned about 1) the development of high-performing car racing controllers and 2) the organization of scientific competitions.", "title": "" }, { "docid": "neg:1840060_17", "text": "The cross-entropy loss commonly used in deep learning is closely related to the defining properties of optimal representations, but does not enforce some of the key properties. We show that this can be solved by adding a regularization term, which is in turn related to injecting multiplicative noise in the activations of a Deep Neural Network, a special case of which is the common practice of dropout. We show that our regularized loss function can be efficiently minimized using Information Dropout, a generalization of dropout rooted in information theoretic principles that automatically adapts to the data and can better exploit architectures of limited capacity. When the task is the reconstruction of the input, we show that our loss function yields a Variational Autoencoder as a special case, thus providing a link between representation learning, information theory and variational inference. Finally, we prove that we can promote the creation of optimal disentangled representations simply by enforcing a factorized prior, a fact that has been observed empirically in recent work. Our experiments validate the theoretical intuitions behind our method, and we find that Information Dropout achieves a comparable or better generalization performance than binary dropout, especially on smaller models, since it can automatically adapt the noise to the structure of the network, as well as to the test sample.", "title": "" }, { "docid": "neg:1840060_18", "text": "Accurate quantification of gluconeogenic flux following alcohol ingestion in overnight-fasted humans has yet to be reported. [2-¹³C₁]glycerol, [U-¹³C₆]glucose, [1-²H₁]galactose, and acetaminophen were infused in normal men before and after the consumption of 48 g alcohol or a placebo to quantify gluconeogenesis, glycogenolysis, hepatic glucose production, and intrahepatic gluconeogenic precursor availability. Gluconeogenesis decreased 45% vs. the placebo (0.56 ± 0.05 to 0.44 ± 0.04 mg ⋅ kg⁻¹ ⋅ min⁻¹ vs. 0.44 ± 0.05 to 0.63 ± 0.09 mg ⋅ kg⁻¹ ⋅ min⁻¹, respectively, P < 0.05) in the 5 h after alcohol ingestion, and total gluconeogenic flux was lower after alcohol compared with placebo. Glycogenolysis fell over time after both the alcohol and placebo cocktails, from 1.46-1.47 mg ⋅ kg⁻¹ ⋅ min⁻¹ to 1.35 ± 0.17 mg ⋅ kg⁻¹ ⋅ min⁻¹ (alcohol) and 1.26 ± 0.20 mg ⋅ kg⁻¹ ⋅ min⁻¹, respectively (placebo, P < 0.05 vs. baseline). Hepatic glucose output decreased 12% after alcohol consumption, from 2.03 ± 0.21 to 1.79 ± 0.21 mg ⋅ kg⁻¹ ⋅ min⁻¹ (P < 0.05 vs. baseline), but did not change following the placebo. Estimated intrahepatic gluconeogenic precursor availability decreased 61% following alcohol consumption (P < 0.05 vs. baseline) but was unchanged after the placebo (P < 0.05 between treatments).
We conclude from these results that gluconeogenesis is inhibited after alcohol consumption in overnight-fasted men, with a somewhat larger decrease in availability of gluconeogenic precursors but a smaller effect on glucose production and no effect on plasma glucose concentrations. Thus inhibition of flux into the gluconeogenic precursor pool is compensated by changes in glycogenolysis, the fate of triose-phosphates, and peripheral tissue utilization of plasma glucose.", "title": "" } ]
1840061
Type-2 fuzzy logic systems for temperature evaluation in ladle furnace
[ { "docid": "pos:1840061_0", "text": "Type-2 fuzzy sets let us model and minimize the effects of uncertainties in rule-base fuzzy logic systems. However, they are difficult to understand for a variety of reasons which we enunciate. In this paper, we strive to overcome the difficulties by: 1) establishing a small set of terms that let us easily communicate about type-2 fuzzy sets and also let us define such sets very precisely, 2) presenting a new representation for type-2 fuzzy sets, and 3) using this new representation to derive formulas for union, intersection and complement of type-2 fuzzy sets without having to use the Extension Principle.", "title": "" }, { "docid": "pos:1840061_1", "text": "Real world environments are characterized by high levels of linguistic and numerical uncertainties. A Fuzzy Logic System (FLS) is recognized as an adequate methodology to handle the uncertainties and imprecision available in real world environments and applications. Since the invention of fuzzy logic, it has been applied with great success to numerous real world applications such as washing machines, food processors, battery chargers, electrical vehicles, and several other domestic and industrial appliances. The first generation of FLSs were type-1 FLSs in which type-1 fuzzy sets were employed. Later, it was found that using type-2 FLSs can enable the handling of higher levels of uncertainties. Recent works have shown that interval type-2 FLSs can outperform type-1 FLSs in the applications which encompass high uncertainty levels. However, the majority of interval type-2 FLSs handle the linguistic and input numerical uncertainties using singleton interval type-2 FLSs that mix the numerical and linguistic uncertainties to be handled only by the linguistic labels type-2 fuzzy sets. This ignores the fact that if input numerical uncertainties were present, they should affect the incoming inputs to the FLS. Even in the papers that employed non-singleton type-2 FLSs, the input signals were assumed to have a predefined shape (mostly Gaussian or triangular) which might not reflect the real uncertainty distribution which can vary with the associated measurement. In this paper, we will present a new approach which is based on an adaptive non-singleton interval type-2 FLS where the numerical uncertainties will be modeled and handled by non-singleton type-2 fuzzy inputs and the linguistic uncertainties will be handled by interval type-2 fuzzy sets to represent the antecedents’ linguistic labels. The non-singleton type-2 fuzzy inputs are dynamic and they are automatically generated from data and they do not assume a specific shape about the distribution associated with the given sensor. We will present several real world experiments using a real world robot which will show how the proposed type-2 non-singleton type-2 FLS will produce a superior performance to its singleton type-1 and type-2 counterparts when encountering high levels of uncertainties.", "title": "" }, { "docid": "pos:1840061_2", "text": "In this paper we propose a new approach to genetic optimization of modular neural networks with fuzzy response integration. The architecture of the modular neural network and the structure of the fuzzy system (for response integration) are designed using genetic algorithms. The proposed methodology is applied to the case of human recognition based on three biometric measures, namely iris, ear, and voice. 
Experimental results show that optimal modular neural networks can be designed with the use of genetic algorithms and as a consequence the recognition rates of such networks can be improved significantly. In the case of optimization of the fuzzy system for response integration, the genetic algorithm not only adjusts the number of membership functions and rules, but also allows the variation on the type of logic (type-1 or type-2) and the change in the inference model (switching to Mamdani model or Sugeno model). Another interesting finding of this work is that when human recognition is performed under noisy conditions, the response integrators of the modular networks constructed by the genetic algorithm are found to be optimal when using type-2 fuzzy logic. This could have been expected as there has been experimental evidence from previous works that type-2 fuzzy logic is better suited to model higher levels of uncertainty. 2012 Elsevier Inc. All rights reserved.", "title": "" } ]
[ { "docid": "neg:1840061_0", "text": "The evaluation of the effects of different media ionic strengths and pH on the release of hydrochlorothiazide, a poorly soluble drug, and diltiazem hydrochloride, a cationic and soluble drug, from a gel forming hydrophilic polymeric matrix was the objective of this study. The drug to polymer ratio of formulated tablets was 4:1. Hydrochlorothiazide or diltiazem HCl extended release (ER) matrices containing hypromellose (hydroxypropyl methylcellulose (HPMC)) were evaluated in media with a pH range of 1.2-7.5, using an automated USP type III, Bio-Dis dissolution apparatus. The ionic strength of the media was varied over a range of 0-0.4M to simulate the gastrointestinal fed and fasted states and various physiological pH conditions. Sodium chloride was used for ionic regulation due to its ability to salt out polymers in the midrange of the lyotropic series. The results showed that the ionic strength had a profound effect on the drug release from the diltiazem HCl K100LV matrices. The K4M, K15M and K100M tablets however withstood the effects of media ionic strength and showed a decrease in drug release to occur with an increase in ionic strength. For example, drug release after the 1h mark for the K100M matrices in water was 36%. Drug release in pH 1.2 after 1h was 30%. An increase of the pH 1.2 ionic strength to 0.4M saw a reduction of drug release to 26%. This was the general trend for the K4M and K15M matrices as well. The similarity factor f2 was calculated using drug release in water as a reference. Despite similarity occurring for all the diltiazem HCl matrices in the pH 1.2 media (f2=64-72), increases of ionic strength at 0.2M and 0.4M brought about dissimilarity. The hydrochlorothiazide tablet matrices showed similarity at all the ionic strength tested for all polymers (f2=56-81). The values of f2 however reduced with increasing ionic strengths. DSC hydration results explained the hydrochlorothiazide release from their HPMC matrices. There was an increase in bound water as ionic strengths increased. Texture analysis was employed to determine the gel strength and also to explain the drug release for the diltiazem hydrochloride. This methodology can be used as a valuable tool for predicting potential ionic effects related to in vivo fed and fasted states on drug release from hydrophilic ER matrices.", "title": "" }, { "docid": "neg:1840061_1", "text": "Shading is a tedious process for artists involved in 2D cartoon and manga production given the volume of contents that the artists have to prepare regularly over tight schedule. While we can automate shading production with the presence of geometry, it is impractical for artists to model the geometry for every single drawing. In this work, we aim to automate shading generation by analyzing the local shapes, connections, and spatial arrangement of wrinkle strokes in a clean line drawing. By this, artists can focus more on the design rather than the tedious manual editing work, and experiment with different shading effects under different conditions. To achieve this, we have made three key technical contributions. First, we model five perceptual cues by exploring relevant psychological principles to estimate the local depth profile around strokes. Second, we formulate stroke interpretation as a global optimization model that simultaneously balances different interpretations suggested by the perceptual cues and minimizes the interpretation discrepancy. 
Lastly, we develop a wrinkle-aware inflation method to generate a height field for the surface to support the shading region computation. In particular, we enable the generation of two commonly-used shading styles: 3D-like soft shading and manga-style flat shading.", "title": "" }, { "docid": "neg:1840061_2", "text": "A new compact two-segments dielectric resonator antenna (TSDR) for ultrawideband (UWB) application is presented and studied. The design consists of a thin monopole printed antenna loaded with two dielectric resonators with different dielectric constant. By applying a combination of U-shaped feedline and modified TSDR, proper radiation characteristics are achieved. The proposed antenna provides an ultrawide impedance bandwidth, high radiation efficiency, and compact antenna with an overall size of 18 × 36 × 11 mm . From the measurement results, it is found that the realized dielectric resonator antenna with good radiation characteristics provides an ultrawide bandwidth of about 110%, covering a range from 3.14 to 10.9 GHz, which covers UWB application.", "title": "" }, { "docid": "neg:1840061_3", "text": "This is the second of five papers in the child survival series. The first focused on continuing high rates of child mortality (over 10 million each year) from preventable causes: diarrhoea, pneumonia, measles, malaria, HIV/AIDS, the underlying cause of undernutrition, and a small group of causes leading to neonatal deaths. We review child survival interventions feasible for delivery at high coverage in low-income settings, and classify these as level 1 (sufficient evidence of effect), level 2 (limited evidence), or level 3 (inadequate evidence). Our results show that at least one level-1 intervention is available for preventing or treating each main cause of death among children younger than 5 years, apart from birth asphyxia, for which a level-2 intervention is available. There is also limited evidence for several other interventions. However, global coverage for most interventions is below 50%. If level 1 or 2 interventions were universally available, 63% of child deaths could be prevented. These findings show that the interventions needed to achieve the millennium development goal of reducing child mortality by two-thirds by 2015 are available, but that they are not being delivered to the mothers and children who need them.", "title": "" }, { "docid": "neg:1840061_4", "text": "Integer optimization problems are concerned with the efficient allocation of limited resources to meet a desired objective when some of the resources in question can only be divided into discrete parts. In such cases, the divisibility constraints on these resources, which may be people, machines, or other discrete inputs, may restrict the possible alternatives to a finite set. Nevertheless, there are usually too many alternatives to make complete enumeration a viable option for instances of realistic size. For example, an airline may need to determine crew schedules that minimize the total operating cost; an automotive manufacturer may want to determine the optimal mix of models to produce in order to maximize profit; or a flexible manufacturing facility may want to schedule production for a plant without knowing precisely what parts will be needed in future periods. 
In today’s changing and competitive industrial environment, the difference between ad hoc planning methods and those that use sophisticated mathematical models to determine an optimal course of action can determine whether or not a company survives.", "title": "" }, { "docid": "neg:1840061_5", "text": "This paper explores the problem of breast tissue classification of microscopy images. Based on the predominant cancer type the goal is to classify images into four categories of normal, benign, in situ carcinoma, and invasive carcinoma. Given a suitable training dataset, we utilize deep learning techniques to address the classification problem. Due to the large size of each image in the training dataset, we propose a patch-based technique which consists of two consecutive convolutional neural networks. The first “patch-wise” network acts as an auto-encoder that extracts the most salient features of image patches while the second “image-wise” network performs classification of the whole image. The first network is pre-trained and aimed at extracting local information while the second network obtains global information of an input image. We trained the networks using the ICIAR 2018 grand challenge on BreAst Cancer Histology (BACH) dataset. The proposed method yields 95% accuracy on the validation set compared to previously reported 77% accuracy rates in the literature. Our code is publicly available at https://github.com/ImagingLab/ICIAR2018.", "title": "" }, { "docid": "neg:1840061_6", "text": "Constrained Local Models (CLMs) are a well-established family of methods for facial landmark detection. However, they have recently fallen out of favor to cascaded regression-based approaches. This is in part due to the inability of existing CLM local detectors to model the very complex individual landmark appearance that is affected by expression, illumination, facial hair, makeup, and accessories. In our work, we present a novel local detector – Convolutional Experts Network (CEN) – that brings together the advantages of neural architectures and mixtures of experts in an end-to-end framework. We further propose a Convolutional Experts Constrained Local Model (CE-CLM) algorithm that uses CEN as a local detector. We demonstrate that our proposed CE-CLM algorithm outperforms competitive state-of-the-art baselines for facial landmark detection by a large margin, especially on challenging profile images.", "title": "" }, { "docid": "neg:1840061_7", "text": "This paper presents a low-voltage (LV) (1.0 V) and low-power (LP) (40 μW) inverter based operational transconductance amplifier (OTA) using FGMOS (Floating-Gate MOS) transistor and its application in Gm-C filters. The OTA was designed in a 0.18 μm CMOS process. The simulation results of the proposed OTA demonstrate an open loop gain of 30.2 dB and a unity gain frequency of 942 MHz. In this OTA, the relative tuning range of 50 is achieved. To demonstrate the use of the proposed OTA in practical circuits, the second-order filter was designed. The designed filter has a good tuning range from 100 kHz to 5.6 MHz which is suitable for the wireless specifications of Bluetooth (650 kHz), CDMA2000 (700 kHz) and Wideband CDMA (2.2 MHz). The active area occupied by the designed filter on the silicon is and the maximum power consumption of this filter is 160 μW.", "title": "" }, { "docid": "neg:1840061_8", "text": "We introduce instancewise feature selection as a methodology for model interpretation.
Our method is based on learning a function to extract a subset of features that are most informative for each given example. This feature selector is trained to maximize the mutual information between selected features and the response variable, where the conditional distribution of the response variable given the input is the model to be explained. We develop an efficient variational approximation to the mutual information, and show the effectiveness of our method on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.", "title": "" }, { "docid": "neg:1840061_9", "text": "The development of novel targeted therapies with acceptable safety profiles is critical to successful cancer outcomes with better survival rates. Immunotherapy offers promising opportunities with the potential to induce sustained remissions in patients with refractory disease. Recent dramatic clinical responses in trials with gene modified T cells expressing chimeric antigen receptors (CARs) in B-cell malignancies have generated great enthusiasm. This therapy might pave the way for a potential paradigm shift in the way we treat refractory or relapsed cancers. CARs are genetically engineered receptors that combine the specific binding domains from a tumor targeting antibody with T cell signaling domains to allow specifically targeted antibody redirected T cell activation. Despite current successes in hematological cancers, we are only in the beginning of exploring the powerful potential of CAR redirected T cells in the control and elimination of resistant, metastatic, or recurrent nonhematological cancers. This review discusses the application of the CAR T cell therapy, its challenges, and strategies for successful clinical and commercial translation.", "title": "" }, { "docid": "neg:1840061_10", "text": "Despite the widespread acceptance and use of pornography, much remains unknown about the heterogeneity among consumers of pornography. Using a sample of 457 college students from a midwestern university in the United States, a latent profile analysis was conducted to identify unique classifications of pornography users considering motivations of pornography use, level of pornography use, age of user, degree of pornography acceptance, and religiosity. Results indicated three classes of pornography users: Porn Abstainers (n = 285), Auto-Erotic Porn Users (n = 85), and Complex Porn Users (n = 87). These three classes of pornography use are carefully defined. The odds of membership in these three unique classes of pornography users was significantly distinguished by relationship status, self-esteem, and gender. These results expand what is known about pornography users by providing a more person-centered approach that is more nuanced in understanding pornography use. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit", "title": "" }, { "docid": "neg:1840061_11", "text": "Function recovery is a critical step in many binary analysis and instrumentation tasks. Existing approaches rely on commonly used function prologue patterns to recognize function starts, and possibly epilogues for the ends. However, this approach is not robust when dealing with different compilers, compiler versions, and compilation switches. Although machine learning techniques have been proposed, the possibility of errors still limits their adoption. In this work, we present a novel function recovery technique that is based on static analysis.
Evaluations have shown that we can produce very accurate results that are applicable to a wider set of applications.", "title": "" }, { "docid": "neg:1840061_12", "text": "IBM Watson is a cognitive computing system capable of question answering in natural languages. It is believed that IBM Watson can understand large corpora and answer relevant questions more effectively than any other question-answering system currently available. To unleash the full power of Watson, however, we need to train its instance with a large number of wellprepared question-answer pairs. Obviously, manually generating such pairs in a large quantity is prohibitively time consuming and significantly limits the efficiency of Watson’s training. Recently, a large-scale dataset of over 30 million question-answer pairs was reported. Under the assumption that using such an automatically generated dataset could relieve the burden of manual question-answer generation, we tried to use this dataset to train an instance of Watson and checked the training efficiency and accuracy. According to our experiments, using this auto-generated dataset was effective for training Watson, complementing manually crafted question-answer pairs. To the best of the authors’ knowledge, this work is the first attempt to use a largescale dataset of automatically generated questionanswer pairs for training IBM Watson. We anticipate that the insights and lessons obtained from our experiments will be useful for researchers who want to expedite Watson training leveraged by automatically generated question-answer pairs.", "title": "" }, { "docid": "neg:1840061_13", "text": "In this paper, we present a motion planning framework for a fully deployed autonomous unmanned aerial vehicle which integrates two sample-based motion planning techniques, Probabilistic Roadmaps and Rapidly Exploring Random Trees. Additionally, we incorporate dynamic reconfigurability into the framework by integrating the motion planners with the control kernel of the UAV in a novel manner with little modification to the original algorithms. The framework has been verified through simulation and in actual flight. Empirical results show that these techniques used with such a framework offer a surprisingly efficient method for dynamically reconfiguring a motion plan based on unforeseen contingencies which may arise during the execution of a plan. The framework is generic and can be used for additional platforms.", "title": "" }, { "docid": "neg:1840061_14", "text": "Internet of Things (IoT) has been given a lot of emphasis since the 90s when it was first proposed as an idea of interconnecting different electronic devices through a variety of technologies. However, during the past decade IoT has rapidly been developed without appropriate consideration of the profound security goals and challenges involved. This study explores the security aims and goals of IoT and then provides a new classification of different types of attacks and countermeasures on security and privacy. It then discusses future security directions and challenges that need to be addressed to improve security concerns over such networks and aid in the wider adoption of IoT by masses.", "title": "" }, { "docid": "neg:1840061_15", "text": "The purpose of making a “biobetter” biologic is to improve on the salient characteristics of a known biologic for which there is, minimally, clinical proof of concept or, maximally, marketed product data. 
There already are several examples in which second-generation or biobetter biologics have been generated by improving the pharmacokinetic properties of an innovative drug, including Neulasta® [a PEGylated, longer-half-life version of Neupogen® (filgrastim)] and Aranesp® [a longer-half-life version of Epogen® (epoetin-α)]. This review describes the use of protein fusion technologies such as Fc fusion proteins, fusion to human serum albumin, fusion to carboxy-terminal peptide, and other polypeptide fusion approaches to make biobetter drugs with more desirable pharmacokinetic profiles.", "title": "" }, { "docid": "neg:1840061_16", "text": "Automotive information services utilizing vehicle data are rapidly expanding. However, there is currently no data centric software architecture that takes into account the scale and complexity of data involving numerous sensors. To address this issue, the authors have developed an in-vehicle datastream management system for automotive embedded systems (eDSMS) as data centric software architecture. Providing the data stream functionalities to drivers and passengers are highly beneficial. This paper describes a vehicle embedded data stream processing platform for Android devices. The platform enables flexible query processing with a dataflow query language and extensible operator functions in the query language on the platform. The platform employs architecture independent of data stream schema in in-vehicle eDSMS to facilitate smoother Android application program development. This paper presents specifications and design of the query language and APIs of the platform, evaluate it, and discuss the results. Keywords—Android, automotive, data stream management system", "title": "" }, { "docid": "neg:1840061_17", "text": "With the explosive increase in mobile apps, more and more threats migrate from traditional PC client to mobile device. Compared with traditional Win+Intel alliance in PC, Android+ARM alliance dominates in Mobile Internet, the apps replace the PC client software as the major target of malicious usage. In this paper, to improve the security status of current mobile apps, we propose a methodology to evaluate mobile apps based on cloud computing platform and data mining. We also present a prototype system named MobSafe to identify the mobile app’s virulence or benignancy. Compared with traditional method, such as permission pattern based method, MobSafe combines the dynamic and static analysis methods to comprehensively evaluate an Android app. In the implementation, we adopt Android Security Evaluation Framework (ASEF) and Static Android Analysis Framework (SAAF), the two representative dynamic and static analysis methods, to evaluate the Android apps and estimate the total time needed to evaluate all the apps stored in one mobile app market. Based on the real trace from a commercial mobile app market called AppChina, we can collect the statistics of the number of active Android apps, the average number apps installed in one Android device, and the expanding ratio of mobile apps. As mobile app market serves as the main line of defence against mobile malwares, our evaluation results show that it is practical to use cloud computing platform and data mining to verify all stored apps routinely to filter out malware apps from mobile app markets. 
As the future work, MobSafe can extensively use machine learning to conduct automotive forensic analysis of mobile apps based on the generated multifaceted data in this stage.", "title": "" }, { "docid": "neg:1840061_18", "text": "Deep learning methods often require large annotated data sets to estimate their high numbers of parameters, which is not practical for many robotic domains. One way to mitigate this issue is to transfer features learned on large datasets to related tasks. In this work, we describe the perception system developed for the entry of team NimbRo Picking into the Amazon Picking Challenge 2016. Object detection and semantic Segmentation methods are adapted to the domain, including incorporation of depth measurements. To avoid the need for large training datasets, we make use of pretrained models whenever possible, e.g. CNNs pretrained on ImageNet, and the whole DenseCap captioning pipeline pretrained on the Visual Genome Dataset. Our system performed well at the APC 2016 and reached second and third places for the stow and pick tasks, respectively.", "title": "" }, { "docid": "neg:1840061_19", "text": "Human body exposure to radiofrequency electromagnetic waves emitted from smart meters was assessed using various exposure configurations. Specific energy absorption rate distributions were determined using three anatomically realistic human models. Each model was assigned with age- and frequency-dependent dielectric properties representing a collection of age groups. Generalized exposure conditions involving standing and sleeping postures were assessed for a home area network operating at 868 and 2,450 MHz. The smart meter antenna was fed with 1 W power input which is an overestimation of what real devices typically emit (15 mW max limit). The highest observed whole body specific energy absorption rate value was 1.87 mW kg⁻¹, within the child model at a distance of 15 cm from a 2,450 MHz device. The higher values were attributed to differences in dimension and dielectric properties within the model. Specific absorption rate (SAR) values were also estimated based on power density levels derived from electric field strength measurements made at various distances from smart meter devices. All the calculated SAR values were found to be very small in comparison to International Commission on Non-Ionizing Radiation Protection limits for public exposure. Bioelectromagnetics. 39:200-216, 2018. © 2017 Wiley Periodicals, Inc.", "title": "" } ]
1840062
Degeneration in VAE: in the Light of Fisher Information Loss
[ { "docid": "pos:1840062_0", "text": "We introduce Natural Neural Networks, a novel family of algorithms that speed up convergence by adapting their internal representation during training to improve conditioning of the Fisher matrix. In particular, we show a specific example that employs a simple and efficient reparametrization of the neural network weights by implicitly whitening the representation obtained at each layer, while preserving the feed-forward computation of the network. Such networks can be trained efficiently via the proposed Projected Natural Gradient Descent algorithm (PRONG), which amortizes the cost of these reparametrizations over many parameter updates and is closely related to the Mirror Descent online learning algorithm. We highlight the benefits of our method on both unsupervised and supervised learning tasks, and showcase its scalability by training on the large-scale ImageNet Challenge dataset.", "title": "" } ]
[ { "docid": "neg:1840062_0", "text": "In the application of lead-acid series batteries, the voltage imbalance of each battery should be considered. Therefore, additional balancer circuits must be integrated into the battery. An active battery balancing circuit with an auxiliary storage can employ a sequential battery imbalance detection algorithm by comparing the voltage of a battery and auxiliary storage. The system is being in balance if the battery voltage imbalance is less than 10mV/cell. In this paper, a new algorithm is proposed so that the battery voltage balancing time can be improved. The battery balancing system is based on the LTC3305 working principle. The simulation verifies that the proposed algorithm can achieve permitted battery voltage imbalance faster than that of the previous algorithm.", "title": "" }, { "docid": "neg:1840062_1", "text": "Biologically detailed single neuron and network models are important for understanding how ion channels, synapses and anatomical connectivity underlie the complex electrical behavior of the brain. While neuronal simulators such as NEURON, GENESIS, MOOSE, NEST, and PSICS facilitate the development of these data-driven neuronal models, the specialized languages they employ are generally not interoperable, limiting model accessibility and preventing reuse of model components and cross-simulator validation. To overcome these problems we have used an Open Source software approach to develop NeuroML, a neuronal model description language based on XML (Extensible Markup Language). This enables these detailed models and their components to be defined in a standalone form, allowing them to be used across multiple simulators and archived in a standardized format. Here we describe the structure of NeuroML and demonstrate its scope by converting into NeuroML models of a number of different voltage- and ligand-gated conductances, models of electrical coupling, synaptic transmission and short-term plasticity, together with morphologically detailed models of individual neurons. We have also used these NeuroML-based components to develop an highly detailed cortical network model. NeuroML-based model descriptions were validated by demonstrating similar model behavior across five independently developed simulators. Although our results confirm that simulations run on different simulators converge, they reveal limits to model interoperability, by showing that for some models convergence only occurs at high levels of spatial and temporal discretisation, when the computational overhead is high. Our development of NeuroML as a common description language for biophysically detailed neuronal and network models enables interoperability across multiple simulation environments, thereby improving model transparency, accessibility and reuse in computational neuroscience.", "title": "" }, { "docid": "neg:1840062_2", "text": "Web-based learning plays a vital role in the modern education system, where different technologies are being emerged to enhance this E-learning process. Therefore virtual and online laboratories are gaining popularity due to its easy implementation and accessibility worldwide. These types of virtual labs are useful where the setup of the actual laboratory is complicated due to several factors such as high machinery or hardware cost. This paper presents a very efficient method of building a model using JavaScript Web Graphics Library with HTML5 enabled and having controllable features inbuilt. 
This type of program is free from any web browser plug-ins or application and also server independent. Proprietary software has always been a bottleneck in the development of such platforms. This approach rules out this issue and can easily applicable. Here the framework has been discussed and neatly elaborated with an example of a simplified robot configuration.", "title": "" }, { "docid": "neg:1840062_3", "text": "We introduce a large dataset of narrative texts and questions about these texts, intended to be used in a machine comprehension task that requires reasoning using commonsense knowledge. Our dataset complements similar datasets in that we focus on stories about everyday activities, such as going to the movies or working in the garden, and that the questions require commonsense knowledge, or more specifically, script knowledge, to be answered. We show that our mode of data collection via crowdsourcing results in a substantial amount of such inference questions. The dataset forms the basis of a shared task on commonsense and script knowledge organized at SemEval 2018 and provides challenging test cases for the broader natural language understanding community.", "title": "" }, { "docid": "neg:1840062_4", "text": "In 2163 personally interviewed female twins from a population-based registry, the pattern of age at onset and comorbidity of the simple phobias (animal and situational)--early onset and low rates of comorbidity--differed significantly from that of agoraphobia--later onset and high rates of comorbidity. Consistent with an inherited \"phobia proneness\" but not a \"social learning\" model of phobias, the familial aggregation of any phobia, agoraphobia, social phobia, and animal phobia appeared to result from genetic and not from familial-environmental factors, with estimates of heritability of liability ranging from 30% to 40%. The best-fitting multivariate genetic model indicated the existence of genetic and individual-specific environmental etiologic factors common to all four phobia subtypes and others specific for each of the individual subtypes. This model suggested that (1) environmental experiences that predisposed to all phobias were most important for agoraphobia and social phobia and relatively unimportant for the simple phobias, (2) environmental experiences that uniquely predisposed to only one phobia subtype had a major impact on simple phobias, had a modest impact on social phobia, and were unimportant for agoraphobia, and (3) genetic factors that predisposed to all phobias were most important for animal phobia and least important for agoraphobia. Simple phobias appear to arise from the joint effect of a modest genetic vulnerability and phobia-specific traumatic events in childhood, while agoraphobia and, to a somewhat lesser extent, social phobia result from the combined effect of a slightly stronger genetic influence and nonspecific environmental experiences.", "title": "" }, { "docid": "neg:1840062_5", "text": "Recently, several methods for single image super-resolution(SISR) based on deep neural networks have obtained high performance with regard to reconstruction accuracy and computational performance. This paper details the methodology and results of the New Trends in Image Restoration and Enhancement (NTIRE) challenge. The task of this challenge is to restore rich details (high frequencies) in a high resolution image for a single low resolution input image based on a set of prior examples with low and corresponding high resolution images. 
The challenge has two tracks. We present a super-resolution (SR) method, which uses three losses assigned with different weights to be regarded as optimization target. Meanwhile, the residual blocks are also used for obtaining significant improvement in the evaluation. The final model consists of 9 weight layers with four residual blocks and reconstructs the low resolution image with three color channels simultaneously, which shows better performance on these two tracks and benchmark datasets.", "title": "" }, { "docid": "neg:1840062_6", "text": "Colon cancer is one of the most prevalent diseases across the world. Numerous epidemiological studies indicate that diets rich in fruit, such as berries, provide significant health benefits against several types of cancer, including colon cancer. The anticancer activities of berries are attributed to their high content of phytochemicals and to their relevant antioxidant properties. In vitro and in vivo studies have demonstrated that berries and their bioactive components exert therapeutic and preventive effects against colon cancer by the suppression of inflammation, oxidative stress, proliferation and angiogenesis, through the modulation of multiple signaling pathways such as NF-κB, Wnt/β-catenin, PI3K/AKT/PKB/mTOR, and ERK/MAPK. Based on the exciting outcomes of preclinical studies, a few berries have advanced to the clinical phase. A limited number of human studies have shown that consumption of berries can prevent colorectal cancer, especially in patients at high risk (familial adenopolyposis or aberrant crypt foci, and inflammatory bowel diseases). In this review, we aim to highlight the findings of berries and their bioactive compounds in colon cancer from in vitro and in vivo studies, both on animals and humans. Thus, this review could be a useful step towards the next phase of berry research in colon cancer.", "title": "" }, { "docid": "neg:1840062_7", "text": "PROBLEM\nHow can human contributions to accidents be reconstructed? Investigators can easily take the position a of retrospective outsider, looking back on a sequence of events that seems to lead to an inevitable outcome, and pointing out where people went wrong. This does not explain much, however, and may not help prevent recurrence.\n\n\nMETHOD AND RESULTS\nThis paper examines how investigators can reconstruct the role that people contribute to accidents in light of what has recently become known as the new view of human error. The commitment of the new view is to move controversial human assessments and actions back into the flow of events of which they were part and which helped bring them forth, to see why assessments and actions made sense to people at the time. The second half of the paper addresses one way in which investigators can begin to reconstruct people's unfolding mindsets.\n\n\nIMPACT ON INDUSTRY\nIn an era where a large portion of accidents are attributed to human error, it is critical to understand why people did what they did, rather than judging them for not doing what we now know they should have done. This paper helps investigators avoid the traps of hindsight by presenting a method with which investigators can begin to see how people's actions and assessments actually made sense at the time.", "title": "" }, { "docid": "neg:1840062_8", "text": "A novel algorithm for the detection of underwater man-made objects in forwardlooking sonar imagery is proposed. 
The algorithm takes advantage of the integral-image representation to quickly compute features, and progressively reduces the computational load by working on smaller portions of the image along the detection process phases. By adhering to the proposed scheme, real-time detection on sonar data onboard an autonomous vehicle is made possible. The proposed method does not require training data, as it dynamically takes into account environmental characteristics of the sensed sonar data. The proposed approach has been implemented and integrated into the software system of the Gemellina autonomous surface vehicle, and is able to run in real time. The validity of the proposed approach is demonstrated on real experiments carried out at sea with the Gemellina autonomous surface vehicle.", "title": "" }, { "docid": "neg:1840062_9", "text": "Depth-first search, as developed by Tarjan and coauthors, is a fundamental technique of efficient algorithm design for graphs [23]. This note presents depth-first search algorithms for two basic problems, strong and biconnected components. Previous algorithms either compute auxiliary quantities based on the depth-first search tree (e.g., LOWPOINT values) or require two passes. We present one-pass algorithms that only maintain a representation of the depth-first search path. This gives a simplified view of depth-first search without sacrificing efficiency. In greater detail, most depth-first search algorithms (e.g., [23,10,11]) compute so-called LOWPOINT values that are defined in terms of the depth-first search tree. Because of the success of this method LOWPOINT values have become almost synonymous with depth-first search. LOWPOINT values are regarded as crucial in the strong and biconnected component algorithms, e.g., [14, pp. 94, 514]. Tarjan’s LOWPOINT method for strong components is presented in texts [1, 7,14,16,17,21]. The strong component algorithm of Kosaraju and Sharir [22] is often viewed as conceptu-", "title": "" }, { "docid": "neg:1840062_10", "text": "The concept of network slicing opens the possibilities to address the complex requirements of multi-tenancy in 5G. To this end, SDN/NFV can act as technology enabler. This paper presents a centralised and dynamic approach for creating and provisioning network slices for virtual network operators' consumption to offer services to their end customers, focusing on an SDN wireless backhaul use case. We demonstrate our approach for dynamic end-to-end slice and service provisioning in a testbed.", "title": "" }, { "docid": "neg:1840062_11", "text": "In many application areas like e-science and data-warehousing detailed information about the origin of data is required. This kind of information is often referred to as data provenance or data lineage. The provenance of a data item includes information about the processes and source data items that lead to its creation and current representation. The diversity of data representation models and application domains has lead to a number of more or less formal definitions of provenance. Most of them are limited to a special application domain, data representation model or data processing facility. Not surprisingly, the associated implementations are also restricted to some application domain and depend on a special data model. In this paper we give a survey of data provenance models and prototypes, present a general categorization scheme for provenance models and use this categorization scheme to study the properties of the existing approaches. 
This categorization enables us to distinguish between different kinds of provenance information and could lead to a better understanding of provenance in general. Besides the categorization of provenance types, it is important to include the storage, transformation and query requirements for the different kinds of provenance information and application domains in our considerations. The analysis of existing approaches will assist us in revealing open research problems in the area of data provenance.", "title": "" }, { "docid": "neg:1840062_12", "text": "Abstraction in imagery results from the strategic simplification and elimination of detail to clarify the visual structure of the depicted shape. It is a mainstay of artistic practice and an important ingredient of effective visual communication. We develop a computational method for the abstract depiction of 2D shapes. Our approach works by organizing the shape into parts using a new synthesis of holistic features of the part shape, local features of the shape boundary, and global aspects of shape organization. Our abstractions are new shapes with fewer and clearer parts.", "title": "" }, { "docid": "neg:1840062_13", "text": "In this paper the effect of spoilers on aerodynamic characteristics of an airfoil were observed by CFD. As the experimental airfoil NACA 2415 was chosen and spoiler was extended from five different positions based on the chord length C. Airfoil section is designed with a spoiler extended at an angle of 7 degree with the horizontal. The spoiler extends to 0.15C. The geometry of 2-D airfoil without spoiler and with spoiler was designed in GAMBIT. The numerical simulation was performed by ANSYS Fluent to observe the effect of spoiler position on the aerodynamic characteristics of this particular airfoil. The results obtained from the computational process were plotted on graph and the conceptual assumptions were verified as the lift is reduced and the drag is increased that obeys the basic function of a spoiler. Coefficient of drag. I. INTRODUCTION An airplane wing has a special shape called an airfoil. As a wing moves through air, the air is split and passes above and below the wing. The wing's upper surface is shaped so the air rushing over the top speeds up and stretches out. This decreases the air pressure above the wing. The air flowing below the wing moves in a straighter line, so its speed and air pressure remains the same. Since high air pressure always moves toward low air pressure, the air below the wing pushes upward toward the air above the wing. The wing is in the middle, and the whole wing is “lifted”. The faster an airplane moves, the more lift there is and when the force of lift is greater than the force of gravity, the airplane is able to fly. [1] A spoiler, sometimes called a lift dumper is a device intended to reduce lift in an aircraft. Spoilers are plates on the top surface of a wing which can be extended upward into the airflow and spoil it. By doing so, the spoiler creates a carefully controlled stall over the portion of the wing behind it, greatly reducing the lift of that wing section. Spoilers are designed to reduce lift also making considerable increase in drag. Spoilers increase drag and reduce lift on the wing. If raised on only one wing, they aid roll control, causing that wing to drop. If the spoilers rise symmetrically in flight, the aircraft can either be slowed in level flight or can descend rapidly without an increase in airspeed.
When the …", "title": "" }, { "docid": "neg:1840062_14", "text": "Anthropometric quantities are widely used in epidemiologic research as possible confounders, risk factors, or outcomes. 3D laser-based body scans (BS) allow evaluation of dozens of quantities in short time with minimal physical contact between observers and probands. The aim of this study was to compare BS with classical manual anthropometric (CA) assessments with respect to feasibility, reliability, and validity. We performed a study on 108 individuals with multiple measurements of BS and CA to estimate intra- and inter-rater reliabilities for both. We suggested BS equivalents of CA measurements and determined validity of BS considering CA the gold standard. Throughout the study, the overall concordance correlation coefficient (OCCC) was chosen as indicator of agreement. BS was slightly more time consuming but better accepted than CA. For CA, OCCCs for intra- and inter-rater reliability were greater than 0.8 for all nine quantities studied. For BS, 9 of 154 quantities showed reliabilities below 0.7. BS proxies for CA measurements showed good agreement (minimum OCCC > 0.77) after offset correction. Thigh length showed higher reliability in BS while upper arm length showed higher reliability in CA. Except for these issues, reliabilities of CA measurements and their BS equivalents were comparable.", "title": "" }, { "docid": "neg:1840062_15", "text": "As the Internet of Things (IoT) is emerging as an attractive paradigm, a typical IoT architecture that U2IoT (Unit IoT and Ubiquitous IoT) model has been presented for the future IoT. Based on the U2IoT model, this paper proposes a cyber-physical-social based security architecture (IPM) to deal with Information, Physical, and Management security perspectives, and presents how the architectural abstractions support U2IoT model. In particular, 1) an information security model is established to describe the mapping relations among U2IoT, security layer, and security requirement, in which social layer and additional intelligence and compatibility properties are infused into IPM; 2) physical security referring to the external context and inherent infrastructure are inspired by artificial immune algorithms; 3) recommended security strategies are suggested for social management control. The proposed IPM combining the cyber world, physical world and human social provides constructive proposal towards the future IoT security and privacy protection.", "title": "" }, { "docid": "neg:1840062_16", "text": "The Bitcoin cryptocurrency introduced a novel distributed consensus mechanism relying on economic incentives. While a coalition controlling a majority of computational power may undermine the system, for example by double-spending funds, it is often assumed it would be incentivized not to attack to protect its long-term stake in the health of the currency. We show how an attacker might purchase mining power (perhaps at a cost premium) for a short duration via bribery. Indeed, bribery can even be performed in-band with the system itself enforcing the bribe. A bribing attacker would not have the same concerns about the long-term health of the system, as their majority control is inherently short-lived. New modeling assumptions are needed to explain why such attacks have not been observed in practice. 
The need for all miners to avoid short-term profits by accepting bribes further suggests a potential tragedy of the commons which has not yet been analyzed.", "title": "" }, { "docid": "neg:1840062_17", "text": "State monitoring is widely used for detecting critical events and abnormalities of distributed systems. As the scale of such systems grows and the degree of workload consolidation increases in Cloud data centers, node failures and performance interferences, especially transient ones, become the norm rather than the exception. Hence, distributed state monitoring tasks are often exposed to impaired communication caused by such dynamics on different nodes. Unfortunately, existing distributed state monitoring approaches are often designed under the assumption of always-online distributed monitoring nodes and reliable inter-node communication. As a result, these approaches often produce misleading results which in turn introduce various problems to Cloud users who rely on state monitoring results to perform automatic management tasks such as auto-scaling. This paper introduces a new state monitoring approach that tackles this challenge by exposing and handling communication dynamics such as message delay and loss in Cloud monitoring environments. Our approach delivers two distinct features. First, it quantitatively estimates the accuracy of monitoring results to capture uncertainties introduced by messaging dynamics. This feature helps users to distinguish trustworthy monitoring results from ones heavily deviated from the truth, yet significantly improves monitoring utility compared with simple techniques that invalidate all monitoring results generated with the presence of messaging dynamics. Second, our approach also adapts to non-transient messaging issues by reconfiguring distributed monitoring algorithms to minimize monitoring errors. Our experimental results show that, even under severe message loss and delay, our approach consistently improves monitoring accuracy, and when applied to Cloud application auto-scaling, outperforms existing state monitoring techniques in terms of the ability to correctly trigger dynamic provisioning.", "title": "" }, { "docid": "neg:1840062_18", "text": "In this paper, we present an algorithm that generates high dynamic range (HDR) images from multi-exposed low dynamic range (LDR) stereo images. The vast majority of cameras in the market only capture a limited dynamic range of a scene. Our algorithm first computes the disparity map between the stereo images. The disparity map is used to compute the camera response function which in turn results in the scene radiance maps. A refinement step for the disparity map is then applied to eliminate edge artifacts in the final HDR image. Existing methods generate HDR images of good quality for still or slow motion scenes, but give defects when the motion is fast. Our algorithm can deal with images taken during fast motion scenes and tolerate saturation and radiometric changes better than other stereo matching algorithms.", "title": "" }, { "docid": "neg:1840062_19", "text": "Skeleton-based action recognition has made great progress recently, but many problems still remain unsolved. For example, the representations of skeleton sequences captured by most of the previous methods lack spatial structure information and detailed temporal dynamics features. 
In this paper, we propose a novel model with spatial reasoning and temporal stack learning (SR-TSL) for skeleton-based action recognition, which consists of a spatial reasoning network (SRN) and a temporal stack learning network (TSLN). The SRN can capture the high-level spatial structural information within each frame by a residual graph neural network, while the TSLN can model the detailed temporal dynamics of skeleton sequences by a composition of multiple skip-clip LSTMs. During training, we propose a clip-based incremental loss to optimize the model. We perform extensive experiments on the SYSU 3D Human-Object Interaction dataset and NTU RGB+D dataset and verify the effectiveness of each network of our model. The comparison results illustrate that our approach achieves much better results than the state-of-the-art methods.", "title": "" } ]
1840063
The particle swarm optimization algorithm: convergence analysis and parameter selection
[ { "docid": "pos:1840063_0", "text": "The performance of particle swarm optimization using an inertia weight is compared with performance using a constriction factor. Five benchmark functions are used for the comparison. It is concluded that the best approach is to use the constriction factor while limiting the maximum velocity Vmax to the dynamic range of the variable Xmax on each dimension. This approach provides performance on the benchmark functions superior to any other published results known by the authors. '", "title": "" } ]
[ { "docid": "neg:1840063_0", "text": "The majority of the human genome consists of non-coding regions that have been called junk DNA. However, recent studies have unveiled that these regions contain cis-regulatory elements, such as promoters, enhancers, silencers, insulators, etc. These regulatory elements can play crucial roles in controlling gene expressions in specific cell types, conditions, and developmental stages. Disruption to these regions could contribute to phenotype changes. Precisely identifying regulatory elements is key to deciphering the mechanisms underlying transcriptional regulation. Cis-regulatory events are complex processes that involve chromatin accessibility, transcription factor binding, DNA methylation, histone modifications, and the interactions between them. The development of next-generation sequencing techniques has allowed us to capture these genomic features in depth. Applied analysis of genome sequences for clinical genetics has increased the urgency for detecting these regions. However, the complexity of cis-regulatory events and the deluge of sequencing data require accurate and efficient computational approaches, in particular, machine learning techniques. In this review, we describe machine learning approaches for predicting transcription factor binding sites, enhancers, and promoters, primarily driven by next-generation sequencing data. Data sources are provided in order to facilitate testing of novel methods. The purpose of this review is to attract computational experts and data scientists to advance this field.", "title": "" }, { "docid": "neg:1840063_1", "text": "A six-phase six-step voltage-fed induction motor is presented. The inverter is a transistorized six-step voltage source inverter, while the motor is a modified standard three-phase squirrel-cage motor. The stator is rewound with two three-phase winding sets displaced from each other by 30 electrical degrees. A model for the system is developed to simulate the drive and predict its performance. The simulation results for steady-state conditions and experimental measurements show very good correlation. It is shown that this winding configuration results in the elimination of all air-gap flux time harmonics of the order (6v ±1, v = 1,3,5,...). Consequently, all rotor copper losses produced by these harmonics as well as all torque harmonics of the order (6v, v = 1,3,5,...) are eliminated. A comparison between-the measured instantaneous torque of both three-phase and six-phase six-step voltage-fed induction machines shows the advantage of the six-phase system over the three-phase system in eliminating the sixth harmonic dominant torque ripple.", "title": "" }, { "docid": "neg:1840063_2", "text": "As one of the most important mid-level features of music, chord contains rich information of harmonic structure that is useful for music information retrieval. In this paper, we present a chord recognition system based on the N-gram model. The system is time-efficient, and its accuracy is comparable to existing systems. We further propose a new method to construct chord features for music emotion classification and evaluate its performance on commercial song recordings. 
Experimental results demonstrate the advantage of using chord features for music classification and retrieval.", "title": "" }, { "docid": "neg:1840063_3", "text": "Facial expression recognition has been investigated for many years, and there are two popular models: Action Units (AUs) and the Valence-Arousal space (V-A space) that have been widely used. However, most of the databases for estimating V-A intensity are captured in laboratory settings, and the benchmarks \"in-the-wild\" do not exist. Thus, the First Affect-In-The-Wild Challenge released a database for V-A estimation while the videos were captured in wild condition. In this paper, we propose an integrated deep learning framework for facial attribute recognition, AU detection, and V-A estimation. The key idea is to apply AUs to estimate the V-A intensity since both AUs and V-A space could be utilized to recognize some emotion categories. Besides, the AU detector is trained based on the convolutional neural network (CNN) for facial attribute recognition. In experiments, we will show the results of the above three tasks to verify the performances of our proposed network framework.", "title": "" }, { "docid": "neg:1840063_4", "text": "The capability of transcribing music audio into music notation is a fascinating example of human intelligence. It involves perception (analyzing complex auditory scenes), cognition (recognizing musical objects), knowledge representation (forming musical structures), and inference (testing alternative hypotheses). Automatic music transcription (AMT), i.e., the design of computational algorithms to convert acoustic music signals into some form of music notation, is a challenging task in signal processing and artificial intelligence. It comprises several subtasks, including multipitch estimation (MPE), onset and offset detection, instrument recognition, beat and rhythm tracking, interpretation of expressive timing and dynamics, and score typesetting.", "title": "" }, { "docid": "neg:1840063_5", "text": "Make more knowledge even in less time every day. You may not always spend your time and money to go abroad and get the experience and knowledge by yourself. Reading is a good alternative to do in getting this desirable knowledge and experience. You may gain many things from experiencing directly, but of course it will spend much money. So here, by reading social network data analytics social network data analytics, you can take more advantages with limited budget.", "title": "" }, { "docid": "neg:1840063_6", "text": "Space vector modulation (SVM) is the best modulation technique to drive 3-phase load such as 3-phase induction motor. In this paper, the pulse width modulation strategy with SVM is analyzed in detail. The modulation strategy uses switching time calculator to calculate the timing of voltage vector applied to the three-phase balanced-load. The principle of the space vector modulation strategy is performed using Matlab/Simulink. The simulation result indicates that this algorithm is flexible and suitable to use for advance vector control. The strategy of the switching minimizes the distortion of load current as well as loss due to minimize number of commutations in the inverter.", "title": "" }, { "docid": "neg:1840063_7", "text": "INTRODUCTION\nThe faculty of Medicine, (FOM) Makerere University Kampala was started in 1924 and has been running a traditional curriculum for 79 years. 
A few years back it embarked on changing its curriculum from traditional to Problem Based Learning (PBL) and Community Based Education and Service (COBES) as well as early clinical exposure. This curriculum has been implemented since the academic year 2003/2004. The study was done to describe the steps taken to change and implement the curriculum at the Faculty of Medicine, Makerere University Kampala.\n\n\nOBJECTIVE\nTo describe the steps taken to change and implement the new curriculum at the Faculty of Medicine.\n\n\nMETHODS\nThe stages taken during the process were described and analysed.\n\n\nRESULTS\nThe following stages were recognized characterization of Uganda's health status, analysis of government policy, analysis of old curriculum, needs assessment, adoption of new model (SPICES), workshop/retreats for faculty sensitization, incremental development of programs by faculty, implementation of new curriculum.\n\n\nCONCLUSION\nThe FOM has successfully embarked on curriculum change. This has not been without challenges. However, challenges have been taken on and handled as they arose and this has led to the implementation of new curriculum. Problem based learning can be adopted even in a low resourced country like Uganda.", "title": "" }, { "docid": "neg:1840063_8", "text": "The popularity of wireless networks has increased in recent years and is becoming a common addition to LANs. In this paper we investigate a novel use for a wireless network based on the IEEE 802.11 standard: inferring the location of a wireless client from signal quality measures. Similar work has been limited to prototype systems that rely on nearest-neighbor techniques to infer location. In this paper, we describe Nibble, a Wi-Fi location service that uses Bayesian networks to infer the location of a device. We explain the general theory behind the system and how to use the system, along with describing our experiences at a university campus building and at a research lab. We also discuss how probabilistic modeling can be applied to a diverse range of applications that use sensor data.", "title": "" }, { "docid": "neg:1840063_9", "text": "We propose a method for discovering the dependency relationships between the topics of documents shared in social networks using the latent social interactions, attempting to answer the question: given a seemingly new topic, from where does this topic evolve? In particular, we seek to discover the pair-wise probabilistic dependency in topics of documents which associate social actors from a latent social network, where these documents are being shared. By viewing the evolution of topics as a Markov chain, we estimate a Markov transition matrix of topics by leveraging social interactions and topic semantics. Metastable states in a Markov chain are applied to the clustering of topics. Applied to the CiteSeer dataset, a collection of documents in academia, we show the trends of research topics, how research topics are related and which are stable. We also show how certain social actors, authors, impact these topics and propose new ways for evaluating author impact.", "title": "" }, { "docid": "neg:1840063_10", "text": "Luck (2009) argues that gamers face a dilemma when it comes to performing certain virtual acts. Most gamers regularly commit acts of virtual murder, and take these acts to be morally permissible. 
They are permissible because unlike real murder, no one is harmed in performing them; their only victims are computer-controlled characters, and such characters are not moral patients. What Luck points out is that this justification equally applies to virtual pedophilia, but gamers intuitively think that such acts are not morally permissible. The result is a dilemma: either gamers must reject the intuition that virtual pedophilic acts are impermissible and so accept partaking in such acts, or they must reject the intuition that virtual murder acts are permissible, and so abstain from many (if not most) extant games. While the prevailing solution to this dilemma has been to try to find a morally relevant feature to distinguish the two cases, I argue that a different route should be pursued. It is neither the case that all acts of virtual murder are morally permissible, nor are all acts of virtual pedophilia impermissible. Our intuitions falter and produce this dilemma because they are not sensitive to the different contexts in which games present virtual acts.", "title": "" }, { "docid": "neg:1840063_11", "text": "Video-based person re-identification matches video clips of people across non-overlapping cameras. Most existing methods tackle this problem by encoding each video frame in its entirety and computing an aggregate representation across all frames. In practice, people are often partially occluded, which can corrupt the extracted features. Instead, we propose a new spatiotemporal attention model that automatically discovers a diverse set of distinctive body parts. This allows useful information to be extracted from all frames without succumbing to occlusions and misalignments. The network learns multiple spatial attention models and employs a diversity regularization term to ensure multiple models do not discover the same body part. Features extracted from local image regions are organized by spatial attention model and are combined using temporal attention. As a result, the network learns latent representations of the face, torso and other body parts using the best available image patches from the entire video sequence. Extensive evaluations on three datasets show that our framework outperforms the state-of-the-art approaches by large margins on multiple metrics.", "title": "" }, { "docid": "neg:1840063_12", "text": "1. Introduction. Reasoning about knowledge and belief has long been an issue of concern in philosophy and artificial intelligence (cf. [Hil],[MH],[Mo]). Recently we have argued that reasoning about knowledge is also crucial in understanding and reasoning about protocols in distributed systems, since messages can be viewed as changing the state of knowledge of a system [HM]; knowledge also seems to be of vital importance in cryptography theory [Me] and database theory. In order to formally reason about knowledge, we need a good semantic model. Part of the difficulty in providing such a model is that there is no agreement on exactly what the properties of knowledge are or should be. (This author's work was supported in part by DARPA contract N00039-82-C-0250.) For example, is it the case that you know what facts you know? Do you know what you don't know? Do you know only true things, or can something you \"know\" actually be false? Possible-worlds semantics provide a good formal tool for \"customizing\" a logic so that, by making minor changes in the semantics, we can capture different sets of axioms.
The idea, first formalized by Hintikka [Hil], is that in each state of the world, an agent (or knower or player: we use all these words interchangeably) has other states or worlds that he considers possible. An agent knows p exactly if p is true in all the worlds that he considers possible. As Kripke pointed out [Kr], by imposing various conditions on this possibility relation, we can capture a number of interesting axioms. For example, if we require that the real world always be one of the possible worlds (which amounts to saying that the possibility relation is reflexive), then it follows that you can't know anything false. Similarly, we can show that if the relation is transitive, then you know what you know. If the relation is transitive and symmetric, then you also know what you don't know. (The one-knower models where the possibility relation is reflexive correspond to the classical modal logic T, while the reflexive and transitive case corresponds to S4, and the reflexive, symmetric and transitive case corresponds to S5.) Once we have a general framework for modelling knowledge, a reasonable question to ask is how hard it is to reason about knowledge. In particular, how hard is it to decide if a given formula is valid or satisfiable? The answer to this question depends crucially on the choice of axioms. For example, in the one-knower case, Ladner [La] has shown that for T and S4 the problem of deciding satisfiability is complete in polynomial space, while for S5 it is NP-complete, and thus no harder than the satisfiability problem for propositional logic. Our aim in this paper is to reexamine the possible-worlds framework for knowledge and belief with four particular points of emphasis: (1) we show how general techniques for finding decision procedures and complete axiomatizations apply to models for knowledge and belief, (2) we show how sensitive the difficulty of the decision procedure is to such issues as the choice of modal operators and the axiom system, (3) we discuss how notions of common knowledge and implicit knowledge among a group of agents fit into the possible-worlds framework, and, finally, (4) we consider to what extent the possible-worlds approach is a viable one for modelling knowledge and belief. We begin in Section 2 by reviewing possible-world semantics in detail, and proving that the many-knower versions of T, S4, and S5 do indeed capture some of the more common axiomatizations of knowledge. In Section 3 we turn to complexity-theoretic issues. We review some standard notions from complexity theory, and then reprove and extend Ladner's results to show that the decision procedures for the many-knower versions of T, S4, and S5 are all complete in polynomial space. (A problem is said to be complete with respect to a complexity class if, roughly speaking, it is the hardest problem in that class; see Section 3 for more details.) This suggests that for S5, reasoning about many agents' knowledge is qualitatively harder than just reasoning about one agent's knowledge of the real world and of his own knowledge. In Section 4 we turn our attention to modifying the model so that it can deal with belief rather than knowledge, where one can believe something that is false. This turns out to be somewhat more complicated than dropping the assumption of reflexivity, but it can still be done in the possible-worlds framework. Results about decision procedures and complete axiomatizations for belief parallel those for knowledge. 
In Section 5 we consider what happens when operators for common knowledge and implicit knowledge are added to the language. A group has common knowledge of a fact p exactly when everyone knows that everyone knows that everyone knows ... that p is true. (Common knowledge is essentially what McCarthy's \"fool\" knows; cf. [MSHI].) A group has implicit knowledge of p if, roughly speaking, when the agents pool their knowledge together they can deduce p. (Note our usage of the notion of \"implicit knowledge\" here differs slightly from the way it is used in [Lev2] and [FH].) As shown in [HM1], common knowledge is an essential state for reaching agreements and coordinating action. For very similar reasons, common knowledge also seems to play an important role in human understanding of speech acts (cf. [CM]). The notion of implicit knowledge arises when reasoning about what states of knowledge a group can attain through communication, and thus is also crucial when reasoning about the efficacy of speech acts and about communication protocols in distributed systems. It turns out that adding an implicit knowledge operator to the language does not substantially change the complexity of deciding the satisfiability of formulas in the language, but this is not the case for common knowledge. Using standard techniques from PDL (Propositional Dynamic Logic; cf. [FL],[Pr]), we can show that when we add common knowledge to the language, the satisfiability problem for the resulting logic (whether it is based on T, S4, or S5) is complete in deterministic exponential time, as long as there are at least two knowers. Thus, adding a common knowledge operator renders the decision procedure qualitatively more complex. (Common knowledge does not seem to be of much interest in the case of one knower. In fact, in the case of S4 and S5, if there is only one knower, knowledge and common knowledge are identical.) We conclude in Section 6 with some discussion of the appropriateness of the possible-worlds approach for capturing knowledge and belief, particularly in light of our results on computational complexity. Detailed proofs of the theorems stated here, as well as further discussion of these results, can be found in the full paper ([HM2]). 2.2 Possible-worlds semantics: Following Hintikka [Hil], Sato [Sa], Moore [Mo], and others, we use a possible-worlds semantics to model knowledge. This provides us with a general framework for our semantical investigations of knowledge and belief. (Everything we say about \"knowledge\" in this subsection applies equally well to belief.) The essential idea behind possible-worlds semantics is that an agent's state of knowledge corresponds to the extent to which he can determine what world he is in. In a given world, we can associate with each agent the set of worlds that, according to the agent's knowledge, could possibly be the real world. An agent is then said to know a fact p exactly if p is true in all the worlds in this set; he does not know p if there is at least one world that he considers possible where p does not hold. (We discuss the ramifications of this point in Section 6. The name K(m) is inspired by the fact that for one knower, the system reduces to the well-known modal logic K.)
... that can be said is that we are modelling a rather idealised reasoner, who knows all tautologies and all the logical consequences of his knowledge. If we take the classical interpretation of knowledge as true, justified belief, then an axiom such as A3 seems to be necessary. On the other hand, philosophers have shown that axiom A5 does not hold with respect to this interpretation ([Len]). However, the S5 axioms do capture an interesting interpretation of knowledge appropriate for reasoning about distributed systems (see [HM1] and Section 6). We continue here with our investigation of all these logics, deferring further comments on their appropriateness to Section 6. Theorem 3 implies that the provable formulas of K(m) correspond precisely to the formulas that are valid for Kripke worlds. As Kripke showed [Kr], there are simple conditions that we can impose on the possibility relations Pi so that the valid formulas of the resulting worlds are exactly the provable formulas of T(m), S4(m), and S5(m) respectively. We will try to motivate these conditions, but first we need a few definitions. (Since Lemma 4(b) says that a relation that is both reflexive and Euclidean must also be transitive, the reader may suspect that axiom A4 is redundant in S5. This is indeed the case.)", "title": "" }, { "docid": "neg:1840063_13", "text": "In recent years, many data mining methods have been proposed for finding useful and structured information from market basket data. The association rule model was recently proposed in order to discover useful patterns and dependencies in such data. This paper discusses a method for indexing market basket data efficiently for similarity search. The technique is likely to be very useful in applications which utilize the similarity in customer buying behavior in order to make peer recommendations. We propose an index called the signature table, which is very flexible in supporting a wide range of similarity functions. The construction of the index structure is independent of the similarity function, which can be specified at query time. The resulting similarity search algorithm shows excellent scalability with increasing memory availability and database size.", "title": "" }, { "docid": "neg:1840063_14", "text": "The traditional diet in Okinawa is anchored by root vegetables (principally sweet potatoes), green and yellow vegetables, soybean-based foods, and medicinal plants. Marine foods, lean meats, fruit, medicinal garnishes and spices, tea, and alcohol are also moderately consumed. Many characteristics of the traditional Okinawan diet are shared with other healthy dietary patterns, including the traditional Mediterranean diet, DASH diet, and Portfolio diet. All these dietary patterns are associated with reduced risk for cardiovascular disease, among other age-associated diseases. Overall, the important shared features of these healthy dietary patterns include: high intake of unrefined carbohydrates, moderate protein intake with emphasis on vegetables/legumes, fish, and lean meats as sources, and a healthy fat profile (higher in mono/polyunsaturated fats, lower in saturated fat; rich in omega-3). The healthy fat intake is likely one mechanism for reducing inflammation, optimizing cholesterol, and other risk factors. Additionally, the lower caloric density of plant-rich diets results in lower caloric intake with concomitant high intake of phytonutrients and antioxidants.
Other shared features include low glycemic load, less inflammation and oxidative stress, and potential modulation of aging-related biological pathways. This may reduce risk for chronic age-associated diseases and promote healthy aging and longevity.", "title": "" }, { "docid": "neg:1840063_15", "text": "s since January 1975, a full-text search capacity, and a personal archive for saving articles and search results of interest. All articles can be printed in a format that is virtually identical to that of the typeset pages. Beginning six months after publication, the full text of all Original Articles and Special Articles is available free to nonsubscribers who have completed a brief registration. Copyright © 2003 Massachusetts Medical Society. All rights reserved.", "title": "" }, { "docid": "neg:1840063_16", "text": "Ticket annotation and search have become an essential research subject for the successful delivery of IT operational analytics. Millions of tickets are created yearly to address business users' IT related problems. In IT service desk management, it is critical first to capture the pain points for a group of tickets to determine root cause and, second, to obtain the respective distributions in order to lay out the priority of addressing these pain points. An advanced ticket analytics system utilizes a combination of topic modeling, clustering and Information Retrieval (IR) technologies to address the above issues, and the corresponding architecture which integrates these features will allow for a wider distribution of this technology and progress to a significant financial benefit for the system owner. Topic modeling has been used to extract topics from given documents; in general, each topic is represented by a unigram language model. However, it has not been clear until now how to interpret the results in an easily readable and understandable way. Because existing techniques render top concepts inefficiently, in this paper we propose a probabilistic framework, which consists of language modeling (especially topic models), Part-Of-Speech (POS) tags, query expansion, retrieval modeling and so on, for this practical challenge. Rigorous empirical experiments demonstrate the consistent performance and utility of the proposed method on real datasets.", "title": "" }, { "docid": "neg:1840063_17", "text": "This paper considers innovative marketing within the context of a micro firm, exploring how such a firm's marketing practices can take advantage of digital media. Factors that influence a micro firm's innovative activities are examined, and the development and implementation of digital media in the firm's marketing practice is explored. Despite the significance of marketing and innovation to SMEs, a lack of literature and theory on innovation in marketing theory exists. Research suggests that small firms' marketing practitioners and entrepreneurs have identified their marketing focus on the 4Is. This paper builds on knowledge in innovation and marketing and examines the process in a micro firm. A qualitative approach is applied using action research and a case study approach. The relevant literature is reviewed as the starting point to diagnose problems and issues anticipated by business practitioners. A longitudinal study is used to illustrate the process of actions taken, with evaluations and reflections presented. 
The exploration illustrates that in practice much of the marketing activities within micro firms are driven by incremental innovation. This research emphasises that integrating Information Communication Technologies (ICTs) successfully in marketing requires marketers to take an active managerial role far beyond their traditional areas of competence and authority.", "title": "" }, { "docid": "neg:1840063_18", "text": "Blockchain technologies are gaining massive momentum in the last few years. Blockchains are distributed ledgers that enable parties who do not fully trust each other to maintain a set of global states. The parties agree on the existence, values, and histories of the states. As the technology landscape is expanding rapidly, it is both important and challenging to have a firm grasp of what the core technologies have to offer, especially with respect to their data processing capabilities. In this paper, we first survey the state of the art, focusing on private blockchains (in which parties are authenticated). We analyze both in-production and research systems in four dimensions: distributed ledger, cryptography, consensus protocol, and smart contract. We then present BLOCKBENCH, a benchmarking framework for understanding performance of private blockchains against data processing workloads. We conduct a comprehensive evaluation of three major blockchain systems based on BLOCKBENCH, namely Ethereum, Parity, and Hyperledger Fabric. The results demonstrate several trade-offs in the design space, as well as big performance gaps between blockchain and database systems. Drawing from design principles of database systems, we discuss several research directions for bringing blockchain performance closer to the realm of databases.", "title": "" } ]
1840064
Facial Expression Recognition using Convolutional Neural Networks: State of the Art
[ { "docid": "pos:1840064_0", "text": "“Frontalization” is the process of synthesizing frontal facing views of faces appearing in single unconstrained photos. Recent reports have suggested that this process may substantially boost the performance of face recognition systems. This, by transforming the challenging problem of recognizing faces viewed from unconstrained viewpoints to the easier problem of recognizing faces in constrained, forward facing poses. Previous frontalization methods did this by attempting to approximate 3D facial shapes for each query image. We observe that 3D face shape estimation from unconstrained photos may be a harder problem than frontalization and can potentially introduce facial misalignments. Instead, we explore the simpler approach of using a single, unmodified, 3D surface as an approximation to the shape of all input faces. We show that this leads to a straightforward, efficient and easy to implement method for frontalization. More importantly, it produces aesthetic new frontal views and is surprisingly effective when used for face recognition and gender estimation.", "title": "" } ]
[ { "docid": "neg:1840064_0", "text": "Owing to the popularity of the PDF format and the continued exploitation of Adobe Reader, the detection of malicious PDFs remains a concern. All existing detection techniques rely on the PDF parser to a certain extent, while the complexity of the PDF format leaves an abundant space for parser confusion. To quantify the difference between these parsers and Adobe Reader, we create a reference JavaScript extractor by directly tapping into Adobe Reader at locations identified through a mostly automatic binary analysis technique. By comparing the output of this reference extractor against that of several opensource JavaScript extractors on a large data set obtained from VirusTotal, we are able to identify hundreds of samples which existing extractors fail to extract JavaScript from. By analyzing these samples we are able to identify several weaknesses in each of these extractors. Based on these lessons, we apply several obfuscations on a malicious PDF sample, which can successfully evade all the malware detectors tested. We call this evasion technique a PDF parser confusion attack. Lastly, we demonstrate that the reference JavaScript extractor improves the accuracy of existing JavaScript-based classifiers and how it can be used to mitigate these parser limitations in a real-world setting.", "title": "" }, { "docid": "neg:1840064_1", "text": "With the developments in information technology and improvements in communication channels, fraud is spreading all over the world, resulting in huge financial losses. Though fraud prevention mechanisms such as CHIP&PIN are developed, these mechanisms do not prevent the most common fraud types such as fraudulent credit card usages over virtual POS terminals through Internet or mail orders. As a result, fraud detection is the essential tool and probably the best way to stop such fraud types. In this study, classification models based on Artificial Neural Networks (ANN) and Logistic Regression (LR) are developed and applied on credit card fraud detection problem. This study is one of the firsts to compare the performance of ANN and LR methods in credit card fraud detection with a real data set.", "title": "" }, { "docid": "neg:1840064_2", "text": "We develop and apply statistical topic models to software as a means of extracting concepts from source code. The effectiveness of the technique is demonstrated on 1,555 projects from SourceForge and Apache consisting of 113,000 files and 19 million lines of code. In addition to providing an automated, unsupervised, solution to the problem of summarizing program functionality, the approach provides a probabilistic framework with which to analyze and visualize source file similarity. Finally, we introduce an information-theoretic approach for computing tangling and scattering of extracted concepts, and present preliminary results", "title": "" }, { "docid": "neg:1840064_3", "text": "Despite being a new term, ‘fake news’ has evolved rapidly. This paper argues that it should be reserved for cases of deliberate presentation of (typically) false or misleading claims as news, where these are misleading by design. The phrase ‘by design’ here refers to systemic features of the design of the sources and channels by which fake news propagates and, thereby, manipulates the audience’s cognitive processes. 
This prospective definition is then tested: first, by contrasting fake news with other forms of public disinformation; second, by considering whether it helps pinpoint conditions for the (recent) proliferation of fake news. Résumé: En dépit de son utilisation récente, l’expression «fausses nouvelles» a évolué rapidement. Cet article soutient qu'elle devrait être réservée aux présentations intentionnelles d’allégations (typiquement) fausses ou trompeuses comme si elles étaient des nouvelles véridiques et où elles sont faussées à dessein. L'expression «à dessein» fait ici référence à des caractéristiques systémiques de la conception des sources et des canaux par lesquels les fausses nouvelles se propagent et par conséquent, manipulent les processus cognitifs du public. Cette définition prospective est ensuite mise à l’épreuve: d'abord, en opposant les fausses nouvelles à d'autres formes de désinformation publique; deuxièmement, en examinant si elle aide à cerner les conditions de la prolifération (récente) de fausses nou-", "title": "" }, { "docid": "neg:1840064_4", "text": "Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the selfattention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.", "title": "" }, { "docid": "neg:1840064_5", "text": "A self-assessment of time management is developed for middle-school students. A sample of entering seventh-graders (N = 814) from five states across the USA completed this instrument, with 340 students retested 6 months later. Exploratory and confirmatory factor analysis suggested two factors (i.e., Meeting Deadlines and Planning) that adequately explain the variance in time management for this age group. Scales show evidence of reliability and validity; with high internal consistency, reasonable consistency of factor structure over time, moderate to high correlations with Conscientiousness, low correlations with the remaining four personality dimensions of the Big Five, and reasonable prediction of students’ grades. Females score significantly higher on both factors of time management, with gender differences in Meeting Deadlines (but not Planning) mediated by Conscientiousness. Potential applications of the instrument for evaluation, diagnosis, and remediation in educational settings are discussed. 2009 Elsevier Ltd. All rights reserved. 1. 
The assessment of time management in middle-school students In our technologically enriched society, individuals are constantly required to multitask, prioritize, and work against deadlines in a timely fashion (Orlikowsky & Yates, 2002). Time management has caught the attention of educational researchers, industrial organizational psychologists, and entrepreneurs, for its possible impact on academic achievement, job performance, and quality of life (Macan, 1994). However, research on time management has not kept pace with this enthusiasm, with extant investigations suffering from a number of problems. Claessens, Van Eerde, Rutte, and Roe’s (2007) review of the literature suggest that there are three major limitations to research on time management. First, many measures of time management have limited validity evidence. Second, many studies rely solely on one-shot self-report assessment, such that evidence for a scale’s generalizability over time cannot be collected. Third, school (i.e., K-12) populations have largely been ignored. For example, all studies in the Claessens et al. (2007) review focus on adult workplace samples (e.g., teachers, engineers) or university students, rather than students in K-12. The current study involves the development of a time management assessment tailored specifically to middle-school students (i.e., adolescents in the sixth to eighth grade of schooling). Time management may be particularly important at the onset of adolescence for three reasons. First, the possibility of early identification and remediation of poor time management practices. Second, the transition into secondary education, from a learning environment involving one teacher to one of time-tabled classes for different subjects with different teachers setting assignments and tests that may occur contiguously. Successfully navigating this new learning environment requires the development of time management skills. Third, adolescents use large amounts of their discretionary time on television, computer gaming, internet use, and sports: Average estimates are 3=4 and 2=4 h per day for seventh-grade boys and girls, respectively (Van den Bulck, 2004). With less time left to do more administratively complex schoolwork, adolescents clearly require time management skills to succeed academically. 1.1. Definitions and assessments of time management Time management has been defined and operationalized in several different ways: As a means for monitoring and controlling time, as setting goals in life and keeping track of time use, as prioritizing goals and generating tasks from the goals, and as the perception of a more structured and purposive life (e.g., Bond & Feather, 1988; Britton & Tesser, 1991; Burt & Kemp, 1994; Eilam & Aharon, 2003). The various definitions all converge on the same essential element: The completion of tasks within an expected timeframe while maintaining outcome quality, through mechanisms such as planning, organizing, prioritizing, or multitasking. To the same effect, Claessens et al. (2007) defined time management as ‘‘behaviors that aim at achieving an effective use of time while performing certain goal-directed activities” (p. 36). Four instruments have been used to assess time management in adults: The Time Management Behavior Scale (TMBS; 0191-8869/$ see front matter 2009 Elsevier Ltd. All rights reserved. doi:10.1016/j.paid.2009.02.018 * Corresponding author. Tel.: +1 609 734 1049. E-mail address: lliu@ets.org (O.L. Liu). 
", "title": "" }, { "docid": "neg:1840064_6", "text": "Segmentation of the liver from abdominal computed tomography (CT) images is an essential step in some computer-assisted clinical interventions, such as surgery planning for living donor liver transplant, radiotherapy and volume measurement. In this work, we develop a deep learning algorithm with graph cut refinement to automatically segment the liver in CT scans. The proposed method consists of two main steps: (i) simultaneous liver detection and probabilistic segmentation using a 3D convolutional neural network; (ii) accuracy refinement of the initial segmentation with graph cut and the previously learned probability map. The proposed approach was validated on forty CT volumes taken from two public databases, MICCAI-Sliver07 and 3Dircadb1. For the MICCAI-Sliver07 test dataset, the calculated mean ratios of volumetric overlap error (VOE), relative volume difference (RVD), average symmetric surface distance (ASD), root-mean-square symmetric surface distance (RMSD) and maximum symmetric surface distance (MSD) are 5.9, 2.7 %, 0.91, 1.88 and 18.94 mm, respectively. For the 3Dircadb1 dataset, the calculated mean ratios of VOE, RVD, ASD, RMSD and MSD are 9.36, 0.97 %, 1.89, 4.15 and 33.14 mm, respectively. The proposed method is fully automatic without any user interaction. Quantitative results reveal that the proposed approach is efficient and accurate for hepatic volume estimation in a clinical setup. The high correlation between the automatic and manual references shows that the proposed method can be good enough to replace the time-consuming and nonreproducible manual segmentation method.", "title": "" }, { "docid": "neg:1840064_7", "text": "Much of capital market research in accounting over the past 20 years has assumed that the price adjustment process to information is instantaneous and/or trivial. This assumption has had an enormous influence on the way we select research topics, design empirical tests, and interpret research findings. In this discussion, I argue that price discovery is a complex process, deserving of more attention. I highlight significant problems associated with a naïve view of market efficiency, and advocate a more general model involving noise traders. Finally, I discuss the implications of recent evidence against market efficiency for future research. © 2001 Elsevier Science B.V. All rights reserved. JEL classification: M4; G0; B2; D8", "title": "" }, { "docid": "neg:1840064_8", "text": "Previous attempts at data augmentation are designed manually, and the augmentation policies are dataset-specific. Recently, an automatic data augmentation approach, named AutoAugment, was proposed using reinforcement learning. AutoAugment searches for the augmentation policies in a discrete search space, which may lead to a sub-optimal solution. In this paper, we employ the Augmented Random Search method (ARS) to improve the performance of AutoAugment. Our key contribution is to change the discrete search space to a continuous space, which improves the search performance and maintains the diversity between sub-policies. With the proposed method, state-of-the-art accuracies are achieved on CIFAR-10, CIFAR-100, and ImageNet (without additional data). 
Our code is available at https://github.com/gmy2013/ARS-Aug.", "title": "" }, { "docid": "neg:1840064_9", "text": "Snapchat is a social media platform that allows users to send images, videos, and text with a specified amount of time for the receiver(s) to view the content before it becomes permanently inaccessible to the receiver. Using focus group methodology and in-depth interviews, the current study sought to understand young adult (18e23 years old; n 1⁄4 34) perceptions of how Snapchat behaviors influenced their interpersonal relationships (family, friends, and romantic). Young adults indicated that Snapchat served as a double-edged swordda communication modality that could lead to relational challenges, but also facilitate more congruent communication within young adult interpersonal relationships. © 2016 Elsevier Ltd. All rights reserved. Technology is now a regular part of contemporary young adult (18e25 years old) life (Coyne, Padilla-Walker, & Howard, 2013; Vaterlaus, Jones, Patten, & Cook, 2015). With technological convergence (i.e. accessibility of multiple media on one device; Brown & Bobkowski, 2011) young adults can access both entertainment media (e.g., television, music) and social media (e.g., social networking, text messaging) on a single device. Among adults, smartphone ownership is highest among young adults (85% of 18e29 year olds; Smith, 2015). Perrin (2015) reported that 90% of young adults (ages 18e29) use social media. Facebook remains the most popular social networking platform, but several new social media apps (i.e., applications) have begun to gain popularity among young adults (e.g., Twitter, Instagram, Pinterest; Duggan, Ellison, Lampe, Lenhart, & Madden, 2015). Considering the high frequency of social media use, Subrahmanyam and Greenfield (2008) have advocated for more research on how these technologies influence interpersonal relationships. The current exploratory study aterlaus), Kathryn_barnett@ (C. Roche), youngja2@unk. was designed to understand the perceived role of Snapchat (see www.snapchat.com) in young adults' interpersonal relationships (i.e. family, social, and romantic). 1. Theoretical framework Uses and Gratifications Theory (U&G) purports that media and technology users are active, self-aware, and goal directed (Katz, Blumler, & Gurevitch, 1973). Technology consumers link their need gratification with specific technology options, which puts different technology sources in competition with one another to satisfy a consumer's needs. Since the emergence of U&G nearly 80 years ago, there have been significant advances in media and technology, which have resulted in many more media and technology options for consumers (Ruggiero, 2000). Writing about the internet and U&G in 2000, Roggiero forecasted: “If the internet is a technology that many predict will be genuinely transformative, it will lead to profound changes in media users' personal and social habits and roles” (p.28). Advances in accessibility to the internet and the development of social media, including Snapchat, provide support for the validity of this prediction. Despite the advances in technology, the needs users seek to gratify are likely more consistent over time. Supporting this point Katz, Gurevitch, and Haas J.M. Vaterlaus et al. 
/ Computers in Human Behavior 62 (2016) 594e601 595", "title": "" }, { "docid": "neg:1840064_10", "text": "This paper extends the traditional pinhole camera projection geometry used in computer graphics to a more realistic camera model which approximates the effects of a lens and an aperture function of an actual camera. This model allows the generation of synthetic images which have a depth of field and can be focused on an arbitrary plane; it also permits selective modeling of certain optical characteristics of a lens. The model can be expanded to include motion blur and special-effect filters. These capabilities provide additional tools for highlighting important areas of a scene and for portraying certain physical characteristics of an object in an image.", "title": "" }, { "docid": "neg:1840064_11", "text": "Efficient provisioning of resources is a challenging problem in cloud computing environments due to its dynamic nature and the need for supporting heterogeneous applications with different performance requirements. Currently, cloud datacenter providers either do not offer any performance guarantee or prefer static VM allocation over dynamic, which lead to inefficient utilization of resources. Earlier solutions, concentrating on a single type of SLAs (Service Level Agreements) or resource usage patterns of applications, are not suitable for cloud computing environments. In this paper, we tackle the resource allocation problem within a datacenter that runs different type of application workloads, particularly non-interactive and transactional applications. We propose admission control and scheduling mechanism which not only maximizes the resource utilization and profit, but also ensures the SLA requirements of users. In our experimental study, the proposed mechanism has shown to provide substantial improvement over static server consolidation and reduces SLA Violations.", "title": "" }, { "docid": "neg:1840064_12", "text": "This paper summaries the state-of-the-art of image quality assessment (IQA) and human visual system (HVS). IQA provides an objective index or real value to measure the quality of the specified image. Since human beings are the ultimate receivers of visual information in practical applications, the most reliable IQA is to build a computational model to mimic the HVS. According to the properties and cognitive mechanism of the HVS, the available HVS-based IQA methods can be divided into two categories, i.e., bionics methods and engineering methods. This paper briefly introduces the basic theories and development histories of the above two kinds of HVS-based IQA methods. Finally, some promising research issues are pointed out in the end of the paper.", "title": "" }, { "docid": "neg:1840064_13", "text": "We present CROSSGRAD, a method to use multi-domain training data to learn a classifier that generalizes to new domains. CROSSGRAD does not need an adaptation phase via labeled or unlabeled data, or domain features in the new domain. Most existing domain adaptation methods attempt to erase domain signals using techniques like domain adversarial training. In contrast, CROSSGRAD is free to use domain signals for predicting labels, if it can prevent overfitting on training domains. We conceptualize the task in a Bayesian setting, in which a sampling step is implemented as data augmentation, based on domain-guided perturbations of input instances. 
CROSSGRAD parallelly trains a label and a domain classifier on examples perturbed by loss gradients of each other’s objectives. This enables us to directly perturb inputs, without separating and re-mixing domain signals while making various distributional assumptions. Empirical evaluation on three different applications where this setting is natural establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains, compared to generic instance perturbation methods, and that (2) data augmentation is a more stable and accurate method than domain adversarial training.", "title": "" }, { "docid": "neg:1840064_14", "text": "Ligament reconstruction is the current standard of care for active patients with an anterior cruciate ligament (ACL) rupture. Although the majority of ACL reconstruction (ACLR) surgeries successfully restore the mechanical stability of the injured knee, postsurgical outcomes remain widely varied. Less than half of athletes who undergo ACLR return to sport within the first year after surgery, and it is estimated that approximately 1 in 4 to 1 in 5 young, active athletes who undergo ACLR will go on to a second knee injury. The outcomes after a second knee injury and surgery are significantly less favorable than outcomes after primary injuries. As advances in graft reconstruction and fixation techniques have improved to consistently restore passive joint stability to the preinjury level, successful return to sport after ACLR appears to be predicated on numerous postsurgical factors. Importantly, a secondary ACL injury is most strongly related to modifiable postsurgical risk factors. Biomechanical abnormalities and movement asymmetries, which are more prevalent in this cohort than previously hypothesized, can persist despite high levels of functional performance, and also represent biomechanical and neuromuscular control deficits and imbalances that are strongly associated with secondary injury incidence. Decreased neuromuscular control and high-risk movement biomechanics, which appear to be heavily influenced by abnormal trunk and lower extremity movement patterns, not only predict first knee injury risk but also reinjury risk. These seminal findings indicate that abnormal movement biomechanics and neuromuscular control profiles are likely both residual to, and exacerbated by, the initial injury. Evidence-based medicine (EBM) strategies should be used to develop effective, efficacious interventions targeted to these impairments to optimize the safe return to high-risk activity. In this Current Concepts article, the authors present the latest evidence related to risk factors associated with ligament failure or a secondary (contralateral) injury in athletes who return to sport after ACLR. From these data, they propose an EBM paradigm shift in postoperative rehabilitation and return-to-sport training after ACLR that is focused on the resolution of neuromuscular deficits that commonly persist after surgical reconstruction and standard rehabilitation of athletes.", "title": "" }, { "docid": "neg:1840064_15", "text": "The science of ecology was born from the expansive curiosity of the biologists of the late 19th century, who wished to understand the distribution, abundance and interactions of the earth's organisms. Why do we have so many species, and why not more, they asked--and what causes them to be distributed as they are? 
What are the characteristics of a biological community that cause it to recover in a particular way after a disturbance?", "title": "" }, { "docid": "neg:1840064_16", "text": "The resilience perspective is increasingly used as an approach for understanding the dynamics of social–ecological systems. This article presents the origin of the resilience perspective and provides an overview of its development to date. With roots in one branch of ecology and the discovery of multiple basins of attraction in ecosystems in the 1960–1970s, it inspired social and environmental scientists to challenge the dominant stable equilibrium view. The resilience approach emphasizes non-linear dynamics, thresholds, uncertainty and surprise, how periods of gradual change interplay with periods of rapid change and how such dynamics interact across temporal and spatial scales. The history was dominated by empirical observations of ecosystem dynamics interpreted in mathematical models, developing into the adaptive management approach for responding to ecosystem change. Serious attempts to integrate the social dimension is currently taking place in resilience work reflected in the large numbers of sciences involved in explorative studies and new discoveries of linked social–ecological systems. Recent advances include understanding of social processes like, social learning and social memory, mental models and knowledge–system integration, visioning and scenario building, leadership, agents and actor groups, social networks, institutional and organizational inertia and change, adaptive capacity, transformability and systems of adaptive governance that allow for management of essential ecosystem services. r 2006 Published by Elsevier Ltd.", "title": "" }, { "docid": "neg:1840064_17", "text": "Primary task of a recommender system is to improve user’s experience by recommending relevant and interesting items to the users. To this effect, diversity in item suggestion is as important as the accuracy of recommendations. Existing literature aimed at improving diversity primarily suggests a 2-stage mechanism – an existing CF scheme for rating prediction, followed by a modified ranking strategy. This approach requires heuristic selection of parameters and ranking strategies. Also most works focus on diversity from either the user or system’s perspective. In this work, we propose a single stage optimization based solution to achieve high diversity while maintaining requisite levels of accuracy. We propose to incorporate additional diversity enhancing constraints, in the matrix factorization model for collaborative filtering. However, unlike traditional MF scheme generating dense user and item latent factor matrices, our base MF model recovers a dense user and a sparse item latent factor matrix; based on a recent work. The idea is motivated by the fact that although a user will demonstrate some affinity towards all latent factors, an item will never possess all features; thereby yielding a sparse structure. We also propose an algorithm for our formulation. The superiority of our model over existing state of the art techniques is demonstrated by the results of experiments conducted on real world movie database. © 2016 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "neg:1840064_18", "text": "Magnetically-driven micrometer to millimeter-scale robotic devices have recently shown great capabilities for remote applications in medical procedures, in microfluidic tools and in microfactories. 
Significant effort recently has been on the creation of mobile or stationary devices with multiple independently-controllable degrees of freedom (DOF) for multiagent or complex mechanism motions. In most applications of magnetic microrobots, however, the relatively large distance from the field generation source and the microscale devices results in controlling magnetic field signals which are applied homogeneously over all agents. While some progress has been made in this area allowing up to six independent DOF to be individually commanded, there has been no rigorous effort in determining the maximum achievable number of DOF for systems with homogeneous magnetic field input. In this work, we show that this maximum is eight and we introduce the theoretical basis for this conclusion, relying on the number of independent usable components in a magnetic field at a point. In order to verify the claim experimentally, we develop a simple demonstration mechanism with 8 DOF designed specifically to show independent actuation. Using this mechanism with $500 \\mu \\mathrm{m}$ magnetic elements, we demonstrate eight independent motions of 0.6 mm with 8.6 % coupling using an eight coil system. These results will enable the creation of richer outputs in future microrobotic devices.", "title": "" }, { "docid": "neg:1840064_19", "text": "Protein catalysis requires the atomic-level orchestration of side chains, substrates and cofactors, and yet the ability to design a small-molecule-binding protein entirely from first principles with a precisely predetermined structure has not been demonstrated. Here we report the design of a novel protein, PS1, that binds a highly electron-deficient non-natural porphyrin at temperatures up to 100 °C. The high-resolution structure of holo-PS1 is in sub-Å agreement with the design. The structure of apo-PS1 retains the remote core packing of the holoprotein, with a flexible binding region that is predisposed to ligand binding with the desired geometry. Our results illustrate the unification of core packing and binding-site definition as a central principle of ligand-binding protein design.", "title": "" } ]
1840065
Understanding Generation Y and their use of social media: a review and research agenda
[ { "docid": "pos:1840065_0", "text": "The authors study the effect of word-of-mouth (WOM) marketing on member growth at an Internet social networking site and compare it with traditional marketing vehicles. Because social network sites record the electronic invitations sent out by existing members, outbound WOM may be precisely tracked. WOM, along with traditional marketing, can then be linked to the number of new members subsequently joining the site (signups). Due to the endogeneity among WOM, new signups, and traditional marketing activity, the authors employ a Vector Autoregression (VAR) modeling approach. Estimates from the VAR model show that word-of-mouth referrals have substantially longer carryover effects than traditional marketing actions. The long-run elasticity of signups with respect to WOM is estimated to be 0.53 (substantially larger than the average advertising elasticities reported in the literature) and the WOM elasticity is about 20 times higher than the elasticity for marketing events, and 30 times that of media appearances. Based on revenue from advertising impressions served to a new member, the monetary value of a WOM referral can be calculated; this yields an upper bound estimate for the financial incentives the firm might offer to stimulate word-of-mouth.", "title": "" } ]
[ { "docid": "neg:1840065_0", "text": "A new frequency-reconfigurable quasi-Yagi dipole antenna is presented. It consists of a driven dipole element with two varactors in two arms, a director with an additional varactor, a truncated ground plane reflector, a microstrip-to-coplanar-stripline (CPS) transition, and a novel biasing circuit. The effective electrical length of the director element and that of the driven arms are adjusted together by changing the biasing voltages. A 35% continuously frequency-tuning bandwidth, from 1.80 to 2.45 GHz, is achieved. This covers a number of wireless communication systems, including 3G UMTS, US WCS, and WLAN. The length-adjustable director allows the endfire pattern with relatively high gain to be maintained over the entire tuning bandwidth. Measured results show that the gain varies from 5.6 to 7.6 dBi and the front-to-back ratio is better than 10 dB. The H-plane cross polarization is below -15 dB, and that in the E-plane is below -20 dB.", "title": "" }, { "docid": "neg:1840065_1", "text": "This article addresses the concept of quality risk in outsourcing. Recent trends in outsourcing extend a contract manufacturer’s (CM’s) responsibility to several functional areas, such as research and development and design in addition to manufacturing. This trend enables an original equipment manufacturer (OEM) to focus on sales and pricing of its product. However, increasing CM responsibilities also suggest that the OEM’s product quality is mainly determined by its CM. We identify two factors that cause quality risk in this outsourcing relationship. First, the CM and the OEM may not be able to contract on quality; second, the OEM may not know the cost of quality to the CM. We characterize the effects of these two quality risk factors on the firms’ profits and on the resulting product quality. We determine how the OEM’s pricing strategy affects quality risk. We show, for example, that the effect of noncontractible quality is higher than the effect of private quality cost information when the OEM sets the sales price after observing the product’s quality. We also show that committing to a sales price mitigates the adverse effect of quality risk. To obtain these results, we develop and analyze a three-stage decision model. This model is also used to understand the impact of recent information technologies on profits and product quality. For example, we provide a decision tree that an OEM can use in deciding whether to invest in an enterprise-wide quality management system that enables accounting of quality-related activities across the supply chain. © 2009 Wiley Periodicals, Inc. Naval Research Logistics 56: 669–685, 2009", "title": "" }, { "docid": "neg:1840065_2", "text": "Due to the explosive increase of online images, content-based image retrieval has gained a lot of attention. The success of deep learning techniques such as convolutional neural networks have motivated us to explore its applications in our context. The main contribution of our work is a novel end-to-end supervised learning framework that learns probability-based semantic-level similarity and feature-level similarity simultaneously. The main advantage of our novel hashing scheme that it is able to reduce the computational cost of retrieval significantly at the state-of-the-art efficiency level. 
We report on comprehensive experiments using public available datasets such as Oxford, Holidays and ImageNet 2012 retrieval datasets.", "title": "" }, { "docid": "neg:1840065_3", "text": "In this position paper, we address the problems of automated road congestion detection and alerting systems and their security properties. We review different theoretical adaptive road traffic control approaches, and three widely deployed adaptive traffic control systems (ATCSs), namely, SCATS, SCOOT and InSync. We then discuss some related research questions, and the corresponding possible approaches, as well as the adversary model and potential attack scenarios. Two theoretical concepts of automated road congestion alarm systems (including system architecture, communication protocol, and algorithms) are proposed on top of ATCSs, such as SCATS, SCOOT and InSync, by incorporating secure wireless vehicle-to-infrastructure (V2I) communications. Finally, the security properties of the proposed system have been discussed and analysed using the ProVerif protocol verification tool.", "title": "" }, { "docid": "neg:1840065_4", "text": "The term ‘resource use efficiency in agriculture’ may be broadly defined to include the concepts of technical efficiency, allocative efficiency and environmental efficiency. An efficient farmer allocates his land, labour, water and other resources in an optimal manner, so as to maximise his income, at least cost, on sustainable basis. However, there are countless studies showing that farmers often use their resources sub-optimally. While some farmers may attain maximum physical yield per unit of land at a high cost, some others achieve maximum profit per unit of inputs used. Also in the process of achieving maximum yield and returns, some farmers may ignore the environmentally adverse consequences, if any, of their resource use intensity. Logically all enterprising farmers would try to maximise their farm returns by allocating resources in an efficient manner. But as resources (both qualitatively and quantitatively) and managerial efficiency of different farmers vary widely, the net returns per unit of inputs used also vary significantly from farm to farm. Also a farmer’s access to technology, credit, market and other infrastructure and policy support, coupled with risk perception and risk management capacity under erratic weather and price situations would determine his farm efficiency. Moreover, a farmer knowingly or unknowingly may over-exploit his land and water resources for maximising farm income in the short run, thereby resulting in soil and water degradation and rapid depletion of ground water, and also posing a problem of sustainability of agriculture in the long run. In fact, soil degradation, depletion of groundwater and water pollution due to farmers’ managerial inefficiency or otherwise, have a social cost, while farmers who forego certain agricultural practices which cause any such sustainability problem may have a high opportunity cost. Furthermore, a farmer may not be often either fully aware or properly guided and aided for alternative, albeit best possible uses of his scarce resources like land and water. Thus, there are economic as well as environmental aspects of resource use efficiency. 
In addition, from the point of view of public exchequer, the resource use efficiency would mean that public investment, subsidies and credit for agriculture are", "title": "" }, { "docid": "neg:1840065_5", "text": "We study the problem of single-image depth estimation for images in the wild. We collect human annotated surface normals and use them to help train a neural network that directly predicts pixel-wise depth. We propose two novel loss functions for training with surface normal annotations. Experiments on NYU Depth, KITTI, and our own dataset demonstrate that our approach can significantly improve the quality of depth estimation in the wild.", "title": "" }, { "docid": "neg:1840065_6", "text": "Prediction of potential fraudulent activities may prevent both the stakeholders and the appropriate regulatory authorities of national or international level from being deceived. The objective difficulties on collecting adequate data that are obsessed by completeness affects the reliability of the most supervised Machine Learning methods. This work examines the effectiveness of forecasting fraudulent financial statements using semi-supervised classification techniques (SSC) that require just a few labeled examples for achieving robust learning behaviors mining useful data patterns from a larger pool of unlabeled examples. Based on data extracted from Greek firms, a number of comparisons between supervised and semi-supervised algorithms has been conducted. According to the produced results, the later algorithms are favored being examined over several scenarios of different Labeled Ratio (R) values.", "title": "" }, { "docid": "neg:1840065_7", "text": "We present LEAR (Lexical Entailment AttractRepel), a novel post-processing method that transforms any input word vector space to emphasise the asymmetric relation of lexical entailment (LE), also known as the IS-A or hyponymy-hypernymy relation. By injecting external linguistic constraints (e.g., WordNet links) into the initial vector space, the LE specialisation procedure brings true hyponymyhypernymy pairs closer together in the transformed Euclidean space. The proposed asymmetric distance measure adjusts the norms of word vectors to reflect the actual WordNetstyle hierarchy of concepts. Simultaneously, a joint objective enforces semantic similarity using the symmetric cosine distance, yielding a vector space specialised for both lexical relations at once. LEAR specialisation achieves state-of-the-art performance in the tasks of hypernymy directionality, hypernymy detection, and graded lexical entailment, demonstrating the effectiveness and robustness of the proposed asymmetric specialisation model.", "title": "" }, { "docid": "neg:1840065_8", "text": "The author describes five separate projects he has undertaken in the intersection of computer science and Canadian income tax law. 
They are:A computer-assisted instruction (CAI) course for teaching income tax, programmed using conventional CAI techniques;\nA “document modeling” computer program for generating the documentation for a tax-based transaction and advising the lawyer-user as to what decisions should be made and what the tax effects will be, programmed in a conventional language;\nA prototype expert system for determining the income tax effects of transactions and tax-defined relationships, based on a PROLOG representation of the rules of the Income Tax Act;\nAn intelligent CAI (ICAI) system for generating infinite numbers of randomized quiz questions for students, computing the answers, and matching wrong answers to particular student errors, based on a PROLOG representation of the rules of the Income Tax Act; and\nA Hypercard stack for providing information about income tax, enabling both education and practical research to follow the user's needs path.\n\nThe author shows that non-AI approaches are a way to produce packages quickly and efficiently. Their primary disadvantage is the massive rewriting required when the tax law changes. AI approaches based on PROLOG, on the other hand, are harder to develop to a practical level but will be easier to audit and maintain. The relationship between expert systems and CAI is discussed.", "title": "" }, { "docid": "neg:1840065_9", "text": "We present a new approach to the geometric alignment of a point cloud to a surface and to related registration problems. The standard algorithm is the familiar ICP algorithm. Here we provide an alternative concept which relies on instantaneous kinematics and on the geometry of the squared distance function of a surface. The proposed algorithm exhibits faster convergence than ICP; this is supported both by results of a local convergence analysis and by experiments.", "title": "" }, { "docid": "neg:1840065_10", "text": "Affect intensity (AI) may reconcile 2 seemingly paradoxical findings: Women report more negative affect than men but equal happiness as men. AI describes people's varying response intensity to identical emotional stimuli. A college sample of 66 women and 34 men was assessed on both positive and negative affect using 4 measurement methods: self-report, peer report, daily report, and memory performance. A principal-components analysis revealed an affect balance component and an AI component. Multimeasure affect balance and AI scores were created, and t tests were computed that showed women to be as happy as and more intense than men. Gender accounted for less than 1% of the variance in happiness but over 13% in AI. Thus, depression findings of more negative affect in women do not conflict with well-being findings of equal happiness across gender. Generally, women's more intense positive emotions balance their higher negative affect.", "title": "" }, { "docid": "neg:1840065_11", "text": "We present a hybrid algorithm to compute the convex hull of points in three or higher dimensional spaces. Our formulation uses a GPU-based interior point filter to cull away many of the points that do not lie on the boundary. The convex hull of remaining points is computed on a CPU. The GPU-based filter proceeds in an incremental manner and computes a pseudo-hull that is contained inside the convex hull of the original points. The pseudo-hull computation involves only localized operations and maps well to GPU architectures. Furthermore, the underlying approach extends to high dimensional point sets and deforming points. 
In practice, our culling filter can reduce the number of candidate points by two orders of magnitude. We have implemented the hybrid algorithm on commodity GPUs, and evaluated its performance on several large point sets. In practice, the GPU-based filtering algorithm can cull up to 85M interior points per second on an NVIDIA GeForce GTX 580 and the hybrid algorithm improves the overall performance of convex hull computation by 10 − 27 times (for static point sets) and 22 − 46 times (for deforming point sets).", "title": "" }, { "docid": "neg:1840065_12", "text": "In 1999, ISPOR formed the Quality of Life Special Interest group (QoL-SIG)--Translation and Cultural Adaptation group (TCA group) to stimulate discussion on and create guidelines and standards for the translation and cultural adaptation of patient-reported outcome (PRO) measures. After identifying a general lack of consistency in current methods and published guidelines, the TCA group saw a need to develop a holistic perspective that synthesized the full spectrum of published methods. This process resulted in the development of Translation and Cultural Adaptation of Patient Reported Outcomes Measures--Principles of Good Practice (PGP), a report on current methods, and an appraisal of their strengths and weaknesses. The TCA Group undertook a review of evidence from current practice, a review of the literature and existing guidelines, and consideration of the issues facing the pharmaceutical industry, regulators, and the broader outcomes research community. Each approach to translation and cultural adaptation was considered systematically in terms of rationale, components, key actors, and the potential benefits and risks associated with each approach and step. The results of this review were subjected to discussion and challenge within the TCA group, as well as consultation with the outcomes research community at large. Through this review, a consensus emerged on a broad approach, along with a detailed critique of the strengths and weaknesses of the differing methodologies. The results of this review are set out as \"Translation and Cultural Adaptation of Patient Reported Outcomes Measures--Principles of Good Practice\" and are reported in this document.", "title": "" }, { "docid": "neg:1840065_13", "text": "We investigate a family of bugs in blockchain-based smart contracts, which we call event-ordering (or EO) bugs. These bugs are intimately related to the dynamic ordering of contract events, i.e., calls of its functions on the blockchain, and enable potential exploits of millions of USD worth of Ether. Known examples of such bugs and prior techniques to detect them have been restricted to a small number of event orderings, typicall 1 or 2. Our work provides a new formulation of this general class of EO bugs as finding concurrency properties arising in long permutations of such events. The technical challenge in detecting our formulation of EO bugs is the inherent combinatorial blowup in path and state space analysis, even for simple contracts. We propose the first use of partial-order reduction techniques, using happen-before relations extracted automatically for contracts, along with several other optimizations built on a dynamic symbolic execution technique. We build an automatic tool called ETHRACER that requires no hints from users and runs directly on Ethereum bytecode. 
It flags 7-11% of over ten thousand contracts analyzed in roughly 18.5 minutes per contract, providing compact event traces that human analysts can run as witnesses. These witnesses are so compact that confirmations require only a few minutes of human effort. Half of the flagged contracts have subtle EO bugs, including in ERC-20 contracts that carry hundreds of millions of dollars worth of Ether. Thus, ETHRACER is effective at detecting a subtle yet dangerous class of bugs which existing tools miss.", "title": "" }, { "docid": "neg:1840065_14", "text": "In this paper, we describe a positioning control for a SCARA robot using a recurrent neural network. The simultaneous perturbation optimization method is used for the learning rule of the recurrent neural network. Then the recurrent neural network learns inverse dynamics of the SCARA robot. We present details of the control scheme using the simultaneous perturbation. Moreover, we consider an example for two target positions using an actual SCARA robot. The result is shown.", "title": "" }, { "docid": "neg:1840065_15", "text": "Evaluation of retrieval performance is a crucial problem in content-based image retrieval (CBIR). Many different methods for measuring the performance of a system have been created and used by researchers. This article discusses the advantages and shortcomings of the performance measures currently used. Problems such as a common image database for performance comparisons and a means of getting relevance judgments (or ground truth) for queries are explained. The relationship between CBIR and information retrieval (IR) is made clear, since IR researchers have decades of experience with the evaluation problem. Many of their solutions can be used for CBIR, despite the differences between the fields. Several methods used in text retrieval are explained. Proposals for performance measures and means of developing a standard test suite for CBIR, similar to that used in IR at the annual Text REtrieval Conference (TREC), are presented. MULLER, Henning, et al. Performance Evaluation in Content-Based Image Retrieval: Overview and Proposals. Genève : 1999", "title": "" }, { "docid": "neg:1840065_16", "text": "We report the first two Malaysian children with partial deletion 9p syndrome, a well delineated but rare clinical entity. Both patients had trigonocephaly, arching eyebrows, anteverted nares, long philtrum, abnormal ear lobules, congenital heart lesions and digital anomalies. In addition, the first patient had underdeveloped female genitalia and anterior anus. The second patient had hypocalcaemia and high arched palate and was initially diagnosed with DiGeorge syndrome. Chromosomal analysis revealed a partial deletion at the short arm of chromosome 9. Karyotyping should be performed in patients with craniostenosis and multiple abnormalities as an early syndromic diagnosis confers prognostic, counselling and management implications.", "title": "" }, { "docid": "neg:1840065_17", "text": "Understanding wound healing today involves much more than simply stating that there are three phases: \"inflammation, proliferation, and maturation.\" Wound healing is a complex series of reactions and interactions among cells and \"mediators.\" Each year, new mediators are discovered and our understanding of inflammatory mediators and cellular interactions grows. 
This article will attempt to provide a concise report of the current literature on wound healing by first reviewing the phases of wound healing followed by \"the players\" of wound healing: inflammatory mediators (cytokines, growth factors, proteases, eicosanoids, kinins, and more), nitric oxide, and the cellular elements. The discussion will end with a pictorial essay summarizing the wound-healing process.", "title": "" }, { "docid": "neg:1840065_18", "text": "Recent work has demonstrated the value of social media monitoring for health surveillance (e.g., tracking influenza or depression rates). It is an open question whether such data can be used to make causal inferences (e.g., determining which activities lead to increased depression rates). Even in traditional, restricted domains, estimating causal effects from observational data is highly susceptible to confounding bias. In this work, we estimate the effect of exercise on mental health from Twitter, relying on statistical matching methods to reduce confounding bias. We train a text classifier to estimate the volume of a user’s tweets expressing anxiety, depression, or anger, then compare two groups: those who exercise regularly (identified by their use of physical activity trackers like Nike+), and a matched control group. We find that those who exercise regularly have significantly fewer tweets expressing depression or anxiety; there is no significant difference in rates of tweets expressing anger. We additionally perform a sensitivity analysis to investigate how the many experimental design choices in such a study impact the final conclusions, including the quality of the classifier and the construction of the control group.", "title": "" }, { "docid": "neg:1840065_19", "text": "In a previous paper we reported the successful use of graph coloring techniques for doing global register allocation in an experimental PL/I optimizing compiler. When the compiler cannot color the register conflict graph with a number of colors equal to the number of available machine registers, it must add code to spill and reload registers to and from storage. Previously the compiler produced spill code whose quality sometimes left much to be desired, and the ad hoc techniques used took considerable amounts of compile time. We have now discovered how to extend the graph coloring approach so that it naturally solves the spilling problem. Spill decisions are now made on the basis of the register conflict graph and cost estimates of the value of keeping the result of a computation in a register rather than in storage. This new approach produces better object code and takes much less compile time.", "title": "" } ]
1840066
A survey of data mining techniques for analyzing crime patterns
[ { "docid": "pos:1840066_0", "text": "Data mining is the extraction of knowledge from large databases. One of the popular data mining techniques is Classification in which different objects are classified into different classes depending on the common properties among them. Decision Trees are widely used in Classification. This paper proposes a tool which applies an enhanced Decision Tree Algorithm to detect the suspicious e-mails about the criminal activities. An improved ID3 Algorithm with enhanced feature selection method and attribute- importance factor is applied to generate a better and faster Decision Tree. The objective is to detect the suspicious criminal activities and minimize them. That's why the tool is named as “Z-Crime” depicting the “Zero Crime” in the society. This paper aims at highlighting the importance of data mining technology to design proactive application to detect the suspicious criminal activities.", "title": "" }, { "docid": "pos:1840066_1", "text": "Fast and high-quality document clustering algorithms play an important role in providing intuitive navigation and browsing mechanisms by organizing large amounts of information into a small number of meaningful clusters. In particular, hierarchical clustering solutions provide a view of the data at different levels of granularity, making them ideal for people to visualize and interactively explore large document collections.In this paper we evaluate different partitional and agglomerative approaches for hierarchical clustering. Our experimental evaluation showed that partitional algorithms always lead to better clustering solutions than agglomerative algorithms, which suggests that partitional clustering algorithms are well-suited for clustering large document datasets due to not only their relatively low computational requirements, but also comparable or even better clustering performance. We present a new class of clustering algorithms called constrained agglomerative algorithms that combine the features of both partitional and agglomerative algorithms. Our experimental results showed that they consistently lead to better hierarchical solutions than agglomerative or partitional algorithms alone.", "title": "" } ]
[ { "docid": "neg:1840066_0", "text": "computational method for its solution. 
A Psychological Description of LSA as a Theory of Learning, Memory, and Knowledge 
We give a more complete description of LSA as a mathematical model later when we use it to simulate lexical acquisition. However, an overall outline is necessary to understand a roughly equivalent psychological theory we wish to present first. The input to LSA is a matrix consisting of rows representing unitary event types by columns representing contexts in which instances of the event types appear. One example is a matrix of unique word types by many individual paragraphs in which the words are encountered, where a cell contains the number of times that a particular word type, say model, appears in a particular paragraph, say this one. After an initial transformation of the cell entries, this matrix is analyzed by a statistical technique called singular value decomposition (SVD) closely akin to factor analysis, which allows event types and individual contexts to be re-represented as points or vectors in a high dimensional abstract space (Golub, Luk, & Overton, 1981). The final output is a representation from which one can calculate similarity measures between all pairs consisting of either event types or contexts (e.g., word-word, word-paragraph, or paragraph-paragraph similarities). Psychologically, the data that the model starts with are raw, first-order co-occurrence relations between stimuli and the local contexts or episodes in which they occur. The stimuli or event types may be thought of as unitary chunks of perception or memory. The first-order process by which initial pairwise associations are entered and transformed in LSA resembles classical conditioning in that it depends on contiguity or co-occurrence, but weights the result first nonlinearly with local occurrence frequency, then inversely with a function of the number of different contexts in which the particular component is encountered overall and the extent to which its occurrences are spread evenly over contexts. However, there are possibly important differences in the details as currently implemented; in particular, LSA associations are symmetrical; a context is associated with the individual events it contains by the same cell entry as the events are associated with the context. This would not be a necessary feature of the model; it would be possible to make the initial matrix asymmetrical, with a cell indicating the co-occurrence relation, for example, between a word and closely following words. Indeed, Lund and Burgess (in press; Lund, Burgess, & Atchley, 1995), and Schütze (1992a, 1992b), have explored related models in which such data are the input. The first step of the LSA analysis is to transform each cell entry from the number of times that a word appeared in a particular context to the log of that frequency. This approximates the standard empirical growth functions of simple learning. The fact that this compressive function begins anew with each context also yields a kind of spacing effect; the association of A and B is greater if both appear in two different contexts than if they each appear twice in one context. In a second transformation, all cell entries for a given word are divided by the entropy for that word, -Σ p log p over all its contexts. 

Roughly speaking, this step accomplishes much the same thing as conditioning rules such as those described by Rescorla & Wagner (1972), in that it makes the primary association better represent the informative relation between the entities rather than the mere fact that they occurred together. Somewhat more formally, the inverse entropy measure estimates the degree to which observing the occurrence of a component specifies what context it is in; the larger the entropy of, say, a word, the less information its observation transmits about the places it has occurred, so the less usage-defined meaning it acquires, and conversely, the less the meaning of a particular context is determined by containing the word. It is interesting to note that automatic information retrieval methods (including LSA when used for the purpose) are greatly improved by transformations of this general form, the present one usually appearing to be the best (Harman, 1986). It does not seem far-fetched to believe that the necessary transform for good information retrieval, retrieval that brings back text corresponding to what a person has in mind when the person offers one or more query words, corresponds to the functional relations in basic associative processes. Anderson (1990) has drawn attention to the analogy between information retrieval in external systems and those in the human mind. It is not clear which way the relationship goes. Does information retrieval in automatic systems work best when it mimics the circumstances that make people think two things are related, or is there a general logic that tends to make them have similar forms? In automatic information retrieval the logic is usually assumed to be that idealized searchers have in mind exactly the same text as they would like the system to find and draw the words in their queries from that text (see Bookstein & Swanson, 1974). [Footnote 2: Although this exploratory process takes some advantage of chance, there is no reason why any number of dimensions should be much better than any other unless some mechanism like the one proposed is at work. In all cases, the model's remaining parameters were fitted only to its input (training) data and not to the criterion (generalization) test.] Then the system's challenge is to estimate the probability that each text in its store is the one that the searcher was thinking about. This characterization, then, comes full circle to the kind of communicative agreement model we outlined above: The sender issues a word chosen to express a meaning he or she has in mind, and the receiver tries to estimate the probability of each of the sender's possible messages. Gallistel (1990) has argued persuasively for the need to separate local conditioning or associative processes from global representation of knowledge. The LSA model expresses such a separation in a very clear and precise way. The initial matrix after transformation to log frequency divided by entropy represents the product of the local or pairwise processes. The subsequent analysis and dimensionality reduction takes all of the previously acquired local information and turns it into a unified representation of knowledge. Thus, the first processing step of the model, modulo its associational symmetry, is a rough approximation to conditioning or associative processes. 

However, the model's next steps, the singular value decomposition and dimensionality optimization, are not contained as such in any extant psychological theory of learning, although something of the kind may be hinted at in some modern discussions of conditioning and, on a smaller scale and differently interpreted, is often implicit and sometimes explicit in many neural net and spreading-activation architectures. This step converts the transformed associative data into a condensed representation. The condensed representation can be seen as achieving several things, although they are at heart the result of only one mechanism. First, the re-representation captures indirect, higher-order associations. That is, if a particular stimulus, X, (e.g., a word) has been associated with some other stimulus, Y, by being frequently found in joint context (i.e., contiguity), and Y is associated with Z, then the condensation can cause X and Z to have similar representations. However, the strength of the indirect XZ association depends on much more than a combination of the strengths of XY and YZ. This is because the relation between X and Z also depends, in a well-specified manner, on the relation of each of the stimuli, X, Y, and Z, to every other entity in the space. In the past, attempts to predict indirect associations by stepwise chaining rules have not been notably successful (see, e.g., Pollio, 1968; Young, 1968). If associations correspond to distances in space, as supposed by LSA, stepwise chaining rules would not be expected to work well; if X is two units from Y and Y is two units from Z, all we know about the distance from X to Z is that it must be between zero and four. But with data about the distances between X, Y, Z, and other points, the estimate of XZ may be greatly improved by also knowing XY and YZ. An alternative view of LSA's effects is the one given earlier, the induction of a latent higher order similarity structure (thus its name) among representations of a large collection of events. Imagine, for example, that every time a stimulus (e.g., a word) is encountered, the distance between its representation and that of every other stimulus that occurs in close proximity to it is adjusted to be slightly smaller. The adjustment is then allowed to percolate through the whole previously constructed structure of relations, each point pulling on its neighbors until all settle into a compromise configuration (physical objects, weather systems, and Hopfield nets do this too; Hopfield, 1982). It is easy to see that the resulting relation between any two representations depends not only on direct experience with them but with everything else ever experienced. Although the current mathematical implementation of LSA does not work in this incremental way, its effects are much the same. The question, then, is whether such a mechanism, when combined with the statistics of experience, produces a faithful reflection of human knowledge. Finally, to anticipate what is developed later, the computational scheme used by LSA for combining and condensing local information into a common", "title": "" }, { "docid": "neg:1840066_1", "text": "Chassis cavities have recently been proposed as a new mounting position for vehicular antennas. Cavities can be concealed and potentially offer more space for antennas than shark-fin modules mounted on top of the roof. An antenna cavity for the front or rear edge of the vehicle roof is designed, manufactured and measured for 5.9 GHz. 

The cavity offers increased radiation in the horizontal plane and to angles below horizon, compared to cavities located in the roof center.", "title": "" }, { "docid": "neg:1840066_2", "text": "In this study, we report a multimodal energy harvesting device that combines electromagnetic and piezoelectric energy harvesting mechanism. The device consists of piezoelectric crystals bonded to a cantilever beam. The tip of the cantilever beam has an attached permanent magnet which, oscillates within a stationary coil fixed to the top of the package. The permanent magnet serves two purpose (i) acts as a tip mass for the cantilever beam and lowers the resonance frequency, and (ii) acts as a core which oscillates between the inductive coils resulting in electric current generation through Faraday’s effect. Thus, this design combines the energy harvesting from two different mechanisms, piezoelectric and electromagnetic, on the same platform. The prototype system was optimized using the finite element software, ANSYS, to find the resonance frequency and stress distribution. The power generated from the fabricated prototype was found to be 0.25W using the electromagnetic mechanism and 0.25mW using the piezoelectric mechanism at 35 g acceleration and 20Hz frequency.", "title": "" }, { "docid": "neg:1840066_3", "text": "When a software system starts behaving abnormally during normal operations, system administrators resort to the use of logs, execution traces, and system scanners (e.g., anti-malwares, intrusion detectors, etc.) to diagnose the cause of the anomaly. However, the unpredictable context in which the system runs and daily emergence of new software threats makes it extremely challenging to diagnose anomalies using current tools. Host-based anomaly detection techniques can facilitate the diagnosis of unknown anomalies but there is no common platform with the implementation of such techniques. In this paper, we propose an automated anomaly detection framework (Total ADS) that automatically trains different anomaly detection techniques on a normal trace stream from a software system, raise anomalous alarms on suspicious behaviour in streams of trace data, and uses visualization to facilitate the analysis of the cause of the anomalies. Total ADS is an extensible Eclipse-based open source framework that employs a common trace format to use different types of traces, a common interface to adapt to a variety of anomaly detection techniques (e.g., HMM, sequence matching, etc.). Our case study on a modern Linux server shows that Total ADS automatically detects attacks on the server, shows anomalous paths in traces, and provides forensic insights.", "title": "" }, { "docid": "neg:1840066_4", "text": "We introduce a novel method to learn a policy from unsupervised demonstrations of a process. Given a model of the system and a set of sequences of outputs, we find a policy that has a comparable performance to the original policy, without requiring access to the inputs of these demonstrations. We do so by first estimating the inputs of the system from observed unsupervised demonstrations. Then, we learn a policy by applying vanilla supervised learning algorithms to the (estimated)input-output pairs. For the input estimation, we present a new adaptive linear estimator (AdaL-IE) that explicitly trades-off variance and bias in the estimation. As we show empirically, AdaL-IE produces estimates with lower error compared to the state-of-the-art input estimation method, (UMV-IE) [Gillijns and De Moor, 2007]. 
Using AdaL-IE in conjunction with imitation learning enables us to successfully learn control policies that consistently outperform those using UMV-IE.", "title": "" }, { "docid": "neg:1840066_5", "text": "Time-parameterized queries (TP queries for short) retrieve (i) the actual result at the time that the query is issued, (ii) the validity period of the result given the current motion of the query and the database objects, and (iii) the change that causes the expiration of the result. Due to the highly dynamic nature of several spatio-temporal applications, TP queries are important both as standalone methods, as well as building blocks of more complex operations. However, little work has been done towards their efficient processing. In this paper, we propose a general framework that covers time-parameterized variations of the most common spatial queries, namely window queries, k-nearest neighbors and spatial joins. In particular, each of these TP queries is reduced to nearest neighbor search where the distance functions are defined according to the query type. This reduction allows the application and extension of well-known branch and bound techniques to the current problem. The proposed methods can be applied with mobile queries, mobile objects or both, given a suitable indexing method. Our experimental evaluation is based on R-trees and their extensions for dynamic objects.", "title": "" }, { "docid": "neg:1840066_6", "text": "In the previous NTCIR8-GeoTime task, ABRIR (Appropriate Boolean query Reformulation for Information Retrieval) proved to be one of the most effective systems for retrieving documents with Geographic and Temporal constraints. However, failure analysis showed that the identification of named entities and relationships between these entities and the query is important in improving the quality of the system. In this paper, we propose to use Wikipedia and GeoNames as resources for extracting knowledge about named entities. We also modify our system to use such information.", "title": "" }, { "docid": "neg:1840066_7", "text": "Social engineering is a method of attack involving the exploitation of human weakness, gullibility and ignorance. Although related techniques have existed for some time, current awareness of social engineering and its many guises is relatively low and efforts are therefore required to improve the protection of the user community. This paper begins by examining the problems posed by social engineering, and outlining some of the previous efforts that have been made to address the threat. This leads toward the discussion of a new awareness-raising website that has been specifically designed to aid users in understanding and avoiding the risks. Findings from an experimental trial involving 46 participants are used to illustrate that the system served to increase users’ understanding of threat concepts, as well as providing an engaging environment in which they would be likely to persevere with their learning.", "title": "" }, { "docid": "neg:1840066_8", "text": "The increasing popularity of Facebook among adolescents has stimulated research to investigate the relationship between Facebook use and loneliness, which is particularly prevalent in adolescence. The aim of the present study was to improve our understanding of the relationship between Facebook use and loneliness. Specifically, we examined how Facebook motives and two relationship-specific forms of adolescent loneliness are associated longitudinally. 
Cross-lagged analysis based on data from 256 adolescents (64% girls, M(age) = 15.88 years) revealed that peer-related loneliness was related over time to using Facebook for social skills compensation, reducing feelings of loneliness, and having interpersonal contact. Facebook use for making new friends reduced peer-related loneliness over time, whereas Facebook use for social skills compensation increased peer-related loneliness over time. Hence, depending on adolescents' Facebook motives, either the displacement or the stimulation hypothesis is supported. Implications and suggestions for future research are discussed.", "title": "" }, { "docid": "neg:1840066_9", "text": "Employees’ failure to comply with IS security procedures is a key concern for organizations today. A number of socio-cognitive theories have been used to explain this. However, prior studies have not examined the influence of past and automatic behavior on employee decisions to comply. This is an important omission because past behavior has been assumed to strongly affect decision-making. To address this gap, we integrated habit (a routinized form of past behavior) with Protection Motivation Theory (PMT), to explain compliance. An empirical test showed that habitual IS security compliance strongly reinforced the cognitive processes theorized by PMT, as well as employee intention for future compliance. We also found that nearly all components of PMT significantly impacted employee intention to comply with IS security policies. Together, these results highlighted the importance of addressing employees’ past and automatic behavior in order to improve compliance. © 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840066_10", "text": "The nine degrees-of-freedom (DOF) inertial measurement units (IMU) are generally composed of three kinds of sensor: accelerometer, gyroscope and magnetometer. The calibration of these sensor suites not only requires turn-table or purpose-built fixture, but also entails a complex and laborious procedure in data sampling. In this paper, we propose a method to calibrate a 9-DOF IMU by using a set of casually sampled raw sensor measurement. Our sampling procedure allows the sensor suite to move by hand and only requires about six minutes of fast and slow arbitrary rotations with intermittent pauses. It requires neither the specially-designed fixture and equipment, nor the strict sequences of sampling steps. At the core of our method are the techniques of data filtering and a hierarchical scheme for calibration. All the raw sensor measurements are preprocessed by a series of band-pass filters before use. And our calibration scheme makes use of the gravity and the ambient magnetic field as references, and hierarchically calibrates the sensor model parameters towards the minimization of the mis-alignment, scaling and bias errors. Moreover, the calibration steps are formulated as a series of function optimization problems and are solved by an evolutionary algorithm. Finally, the performance of our method is experimentally evaluated. 

The results show that our method can effectively calibrate the sensor model parameters from one set of raw sensor measurement, and yield consistent calibration results.", "title": "" }, { "docid": "neg:1840066_11", "text": "Learning-to-Rank models based on additive ensembles of regression trees have been proven to be very effective for scoring query results returned by large-scale Web search engines. Unfortunately, the computational cost of scoring thousands of candidate documents by traversing large ensembles of trees is high. Thus, several works have investigated solutions aimed at improving the efficiency of document scoring by exploiting advanced features of modern CPUs and memory hierarchies. In this article, we present QuickScorer, a new algorithm that adopts a novel cache-efficient representation of a given tree ensemble, performs an interleaved traversal by means of fast bitwise operations, and supports ensembles of oblivious trees. An extensive and detailed test assessment is conducted on two standard Learning-to-Rank datasets and on a novel very large dataset we made publicly available for conducting significant efficiency tests. The experiments show unprecedented speedups over the best state-of-the-art baselines ranging from 1.9 × to 6.6 × . The analysis of low-level profiling traces shows that QuickScorer efficiency is due to its cache-aware approach in terms of both data layout and access patterns and to a control flow that entails very low branch mis-prediction rates.", "title": "" }, { "docid": "neg:1840066_12", "text": "This paper presents a fast and energy-efficient current mirror based level shifter with wide shifting range from sub-threshold voltage up to I/O voltage. Small delay and low power consumption are achieved by addressing the non-full output swing and charge sharing issues in the level shifter from [4]. The measurement results show that the proposed level shifter can convert from 0.21V up to 3.3V with significantly improved delay and power consumption over the existing level shifters. Compared with [4], the maximum reduction of delay, switching energy and leakage power are 3X, 19X, 29X respectively when converting 0.3V to a higher voltage between 0.6V and 3.3V.", "title": "" }, { "docid": "neg:1840066_13", "text": "BACKGROUND\nSurvey research including multiple health indicators requires brief indices for use in cross-cultural studies, which have, however, rarely been tested in terms of their psychometric quality. Recently, the EUROHIS-QOL 8-item index was developed as an adaptation of the WHOQOL-100 and the WHOQOL-BREF. The aim of the current study was to test the psychometric properties of the EUROHIS-QOL 8-item index.\n\n\nMETHODS\nIn a survey on 4849 European adults, the EUROHIS-QOL 8-item index was assessed across 10 countries, with equal samples adjusted for selected sociodemographic data. Participants were also investigated with a chronic condition checklist, measures on general health perception, mental health, health-care utilization and social support.\n\n\nRESULTS\nFindings indicated good internal consistencies across a range of countries, showing acceptable convergent validity with physical and mental health measures, and the measure discriminates well between individuals that report having a longstanding condition and healthy individuals across all countries. Differential item functioning was less frequently observed in those countries that were geographically and culturally closer to the UK, but acceptable across all countries. 
A universal one-factor structure with a good fit in structural equation modelling analyses (SEM) was identified with, however, limitations in model fit for specific countires.\n\n\nCONCLUSIONS\nThe short EUROHIS-QOL 8-item index showed good cross-cultural field study performance and a satisfactory convergent and discriminant validity, and can therefore be recommended for use in public health research. In future studies the measure should also be tested in multinational clinical studies, particularly in order to test its sensitivity.", "title": "" }, { "docid": "neg:1840066_14", "text": "The degree to which perceptual awareness of threat stimuli and bodily states of arousal modulates neural activity associated with fear conditioning is unknown. We used functional magnetic neuroimaging (fMRI) to study healthy subjects and patients with peripheral autonomic denervation to examine how the expression of conditioning-related activity is modulated by stimulus awareness and autonomic arousal. In controls, enhanced amygdala activity was evident during conditioning to both \"seen\" (unmasked) and \"unseen\" (backward masked) stimuli, whereas insula activity was modulated by perceptual awareness of a threat stimulus. Absent peripheral autonomic arousal, in patients with autonomic denervation, was associated with decreased conditioning-related activity in insula and amygdala. The findings indicate that the expression of conditioning-related neural activity is modulated by both awareness and representations of bodily states of autonomic arousal.", "title": "" }, { "docid": "neg:1840066_15", "text": "We propose a novel recursive partitioning method for identifying subgroups of subjects with enhanced treatment effects based on a differential effect search algorithm. The idea is to build a collection of subgroups by recursively partitioning a database into two subgroups at each parent group, such that the treatment effect within one of the two subgroups is maximized compared with the other subgroup. The process of data splitting continues until a predefined stopping condition has been satisfied. The method is similar to 'interaction tree' approaches that allow incorporation of a treatment-by-split interaction in the splitting criterion. However, unlike other tree-based methods, this method searches only within specific regions of the covariate space and generates multiple subgroups of potential interest. We develop this method and provide guidance on key topics of interest that include generating multiple promising subgroups using different splitting criteria, choosing optimal values of complexity parameters via cross-validation, and addressing Type I error rate inflation inherent in data mining applications using a resampling-based method. We evaluate the operating characteristics of the procedure using a simulation study and illustrate the method with a clinical trial example.", "title": "" }, { "docid": "neg:1840066_16", "text": "Emotional adaptation increases pro-social behavior of humans towards robotic interaction partners. Social cues are an important factor in this context. This work investigates, if emotional adaptation still works under absence of human-like facial Action Units. A human-robot dialog scenario is chosen using NAO pretending to work for a supermarket and involving humans providing object names to the robot for training purposes. In a user study, two conditions are implemented with or without explicit emotional adaptation of NAO to the human user in a between-subjects design. 
Evaluations of user experience and acceptance are conducted based on evaluated measures of human-robot interaction (HRI). The results of the user study reveal a significant increase of helpfulness (number of named objects), anthropomorphism, and empathy in the explicit emotional adaptation condition even without social cues of facial Action Units, but only in case of prior robot contact of the test persons. Otherwise, an opposite effect is found. These findings suggest, that reduction of these social cues can be overcome by robot experience prior to the interaction task, e.g. realizable by an additional bonding phase, confirming the importance of such from previous work. Additionally, an interaction with academic background of the participants is found.", "title": "" }, { "docid": "neg:1840066_17", "text": "Activity prediction is an essential task in practical human-centered robotics applications, such as security, assisted living, etc., which targets at inferring ongoing human activities based on incomplete observations. To address this challenging problem, we introduce a novel bio-inspired predictive orientation decomposition (BIPOD) approach to construct representations of people from 3D skeleton trajectories. Our approach is inspired by biological research in human anatomy. In order to capture spatio-temporal information of human motions, we spatially decompose 3D human skeleton trajectories and project them onto three anatomical planes (i.e., coronal, transverse and sagittal planes); then, we describe short-term time information of joint motions and encode high-order temporal dependencies. By estimating future skeleton trajectories that are not currently observed, we endow our BIPOD representation with the critical predictive capability. Empirical studies validate that our BIPOD approach obtains promising performance, in terms of accuracy and efficiency, using a physical TurtleBot2 robotic platform to recognize ongoing human activities. Experiments on benchmark datasets further demonstrate that our new BIPOD representation significantly outperforms previous approaches for real-time activity classification and prediction from 3D human skeleton trajectories.", "title": "" }, { "docid": "neg:1840066_18", "text": "We investigated the reliability of a test assessing quadriceps strength, endurance and fatigability in a single session. We used femoral nerve magnetic stimulation (FMNS) to distinguish central and peripheral factors of neuromuscular fatigue. We used a progressive incremental loading with multiple assessments to limit the influence of subject's cooperation and motivation. Twenty healthy subjects (10 men and 10 women) performed the test on two different days. Maximal voluntary strength and evoked quadriceps responses via FMNS were measured before, after each set of 10 submaximal isometric contractions (5-s on/5-s off; starting at 10% of maximal voluntary strength with 10% increments), immediately and 30min after task failure. The test induced progressive peripheral (41±13% reduction in single twitch at task failure) and central fatigue (3±7% reduction in voluntary activation at task failure). Good inter-day reliability was found for the total number of submaximal contractions achieved (i.e. endurance index: ICC=0.83), for reductions in maximal voluntary strength (ICC>0.81) and evoked muscular responses (i.e. fatigue index: ICC>0.85). Significant sex-differences were also detected. This test shows good reliability for strength, endurance and fatigability assessments. 
Further studies should be conducted to evaluate its feasibility and reliability in patients.", "title": "" } ]
1840067
NewsCube: delivering multiple aspects of news to mitigate media bias
[ { "docid": "pos:1840067_0", "text": "An overwhelming number of news articles are available every day via the internet. Unfortunately, it is impossible for us to peruse more than a handful; furthermore it is difficult to ascertain an article’s social context, i.e., is it popular, what sorts of people are reading it, etc. In this paper, we develop a system to address this problem in the restricted domain of political news by harnessing implicit and explicit contextual information from the blogosphere. Specifically, we track thousands of blogs and the news articles they cite, collapsing news articles that have highly overlapping content. We then tag each article with the number of blogs citing it, the political orientation of those blogs, and the level of emotional charge expressed in the blog posts that link to the news article. We summarize and present the results to the user via a novel visualization which displays this contextual information; the user can then find the most popular articles, the articles most cited by liberals, the articles most emotionally discussed in the political blogosphere, etc.", "title": "" }, { "docid": "pos:1840067_1", "text": "Mobile devices have already been widely used to access the Web. However, because most available web pages are designed for desktop PC in mind, it is inconvenient to browse these large web pages on a mobile device with a small screen. In this paper, we propose a new browsing convention to facilitate navigation and reading on a small-form-factor device. A web page is organized into a two level hierarchy with a thumbnail representation at the top level for providing a global view and index to a set of sub-pages at the bottom level for detail information. A page adaptation technique is also developed to analyze the structure of an existing web page and split it into small and logically related units that fit into the screen of a mobile device. For a web page not suitable for splitting, auto-positioning or scrolling-by-block is used to assist the browsing as an alterative. Our experimental results show that our proposed browsing convention and developed page adaptation scheme greatly improve the user's browsing experiences on a device with a small display.", "title": "" } ]
[ { "docid": "neg:1840067_0", "text": "Conceptual blending has been proposed as a creative cognitive process, but most theories focus on the analysis of existing blends rather than mechanisms for the efficient construction of novel blends. While conceptual blending is a powerful model for creativity, there are many challenges related to the computational application of blending. Inspired by recent theoretical research, we argue that contexts and context-induced goals provide insights into algorithm design for creative systems using conceptual blending. We present two case studies of creative systems that use goals and contexts to efficiently produce novel, creative artifacts in the domains of story generation and virtual characters engaged in pretend play respectively.", "title": "" }, { "docid": "neg:1840067_1", "text": "Long Short-Term Memory (LSTM) has achieved state-of-the-art performances on a wide range of tasks. Its outstanding performance is guaranteed by the long-term memory ability which matches the sequential data perfectly and the gating structure controlling the information flow. However, LSTMs are prone to be memory-bandwidth limited in realistic applications and need an unbearable period of training and inference time as the model size is ever-increasing. To tackle this problem, various efficient model compression methods have been proposed. Most of them need a big and expensive pre-trained model which is a nightmare for resource-limited devices where the memory budget is strictly limited. To remedy this situation, in this paper, we incorporate the Sparse Evolutionary Training (SET) procedure into LSTM, proposing a novel model dubbed SET-LSTM. Rather than starting with a fully-connected architecture, SET-LSTM has a sparse topology and dramatically fewer parameters in both phases, training and inference. Considering the specific architecture of LSTMs, we replace the LSTM cells and embedding layers with sparse structures and further on, use an evolutionary strategy to adapt the sparse connectivity to the data. Additionally, we find that SET-LSTM can provide many different good combinations of sparse connectivity to substitute the overparameterized optimization problem of dense neural networks. Evaluated on four sentiment analysis classification datasets, the results demonstrate that our proposed model is able to achieve usually better performance than its fully connected counterpart while having less than 4% of its parameters. Department of Mathematics and Computer Science, Eindhoven University of Technology, Netherlands. Correspondence to: Shiwei Liu <s.liu3@tue.nl>.", "title": "" }, { "docid": "neg:1840067_2", "text": "Pakistan is a developing country with more than half of its population located in rural areas. These areas neither have sufficient health care facilities nor a strong infrastructure that can address the health needs of the people. The expansion of Information and Communication Technology (ICT) around the globe has set up an unprecedented opportunity for delivery of healthcare facilities and infrastructure in these rural areas of Pakistan as well as in other developing countries. Mobile Health (mHealth)—the provision of health care services through mobile telephony—will revolutionize the way health care is delivered. From messaging campaigns to remote monitoring, mobile technology will impact every aspect of health systems. 
This paper highlights the growth of ICT sector and status of health care facilities in the developing countries, and explores prospects of mHealth as a transformer for health systems and service delivery especially in the remote rural areas.", "title": "" }, { "docid": "neg:1840067_3", "text": "Hierarchies have long been used for organization, summarization, and access to information. In this paper we define summarization in terms of a probabilistic language model and use the definition to explore a new technique for automatically generating topic hierarchies by applying a graph-theoretic algorithm, which is an approximation of the Dominating Set Problem. The algorithm efficiently chooses terms according to a language model. We compare the new technique to previous methods proposed for constructing topic hierarchies including subsumption and lexical hierarchies, as well as the top TF.IDF terms. Our results show that the new technique consistently performs as well as or better than these other techniques. They also show the usefulness of hierarchies compared with a list of terms.", "title": "" }, { "docid": "neg:1840067_4", "text": "This study explores one of the contributors to group composition-the basis on which people choose others with whom they want to work. We use a combined model to explore individual attributes, relational attributes, and previous structural ties as determinants of work partner choice. Four years of data from participants in 33 small project groups were collected, some of which reflects individual participant characteristics and some of which is social network data measuring the previous relationship between two participants. Our results suggest that when selecting future group members people are biased toward others of the same race, others who have a reputation for being competent and hard working, and others with whom they have developed strong working relationships in the past. These results suggest that people strive for predictability when choosing future work group members. Copyright 2000 Academic Press.", "title": "" }, { "docid": "neg:1840067_5", "text": "SUMO is an open source traffic simulation package including net import and demand modeling components. We describe the current state of the package as well as future developments and extensions. SUMO helps to investigate several research topics e.g. route choice and traffic light algorithm or simulating vehicular communication. Therefore the framework is used in different projects to simulate automatic driving or traffic management strategies. Keywordsmicroscopic traffic simulation, software, open", "title": "" }, { "docid": "neg:1840067_6", "text": "This paper proposes three design concepts for developing a crawling robot inspired by an inchworm, called the Omegabot. First, for locomotion, the robot strides by bending its body into an omega shape; anisotropic friction pads enable the robot to move forward using this simple motion. Second, the robot body is made of a single part but has two four-bar mechanisms and one spherical six-bar mechanism; the mechanisms are 2-D patterned into a single piece of composite and folded to become a robot body that weighs less than 1 g and that can crawl and steer. This design does not require the assembly of various mechanisms of the body structure, thereby simplifying the fabrication process. 
Third, a new concept for using a shape-memory alloy (SMA) coil-spring actuator is proposed; the coil spring is designed to have a large spring index and to work over a large pitch-angle range. This large-index-and-pitch SMA spring actuator cools faster and requires less energy, without compromising the amount of force and displacement that it can produce. Therefore, the frequency and the efficiency of the actuator are improved. A prototype was used to demonstrate that the inchworm-inspired, novel, small-scale, lightweight robot manufactured on a single piece of composite can crawl and steer.", "title": "" }, { "docid": "neg:1840067_7", "text": "We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN). Recently, researchers have attempted to synthesize new motion by using autoregressive techniques, but existing methods tend to freeze or diverge after a couple of seconds due to an accumulation of errors that are fed back into the network. Furthermore, such methods have only been shown to be reliable for relatively simple human motions, such as walking or running. In contrast, our approach can synthesize arbitrary motions with highly complex styles, including dances or martial arts in addition to locomotion. The acRNN is able to accomplish this by explicitly accommodating for autoregressive noise accumulation during training. Our work is the first to our knowledge that demonstrates the ability to generate over 18,000 continuous frames (300 seconds) of new complex human motion w.r.t. different styles.", "title": "" }, { "docid": "neg:1840067_8", "text": "The purpose of this study is to develop a fuzzy-AHP multi-criteria decision making model for procurement process. It aims to measure the procurement performance in the automotive industry. As such measurement of procurement will enable competitive advantage and provide a model for continuous improvement. The rapid growth in the market and the level of competition in the global economy transformed procurement as a strategic issue; which is broader in scope and responsibilities as compared to purchasing. This study reviews the existing literature in procurement performance measurement to identify the key areas of measurement and a hierarchical model is developed with a set of generic measures. In addition, a questionnaire is developed for pair-wise comparison and to collect opinion from practitioners, researchers, managers etc. The relative importance of the measurement criteria are assessed using Analytical Hierarchy Process (AHP) and fuzzy-AHP. The validity of the model is c onfirmed with the results obtained.", "title": "" }, { "docid": "neg:1840067_9", "text": "The n-gram language model, which has its roots in statistical natural language processing, has been shown to successfully capture the repetitive and predictable regularities (“naturalness\") of source code, and help with tasks such as code suggestion, porting, and designing assistive coding devices. However, we show in this paper that this natural-language-based model fails to exploit a special property of source code: localness. We find that human-written programs are localized: they have useful local regularities that can be captured and exploited. We introduce a novel cache language model that consists of both an n-gram and an added “cache\" component to exploit localness. 
We show empirically that the additional cache component greatly improves the n-gram approach by capturing the localness of software, as measured by both cross-entropy and suggestion accuracy. Our model’s suggestion accuracy is actually comparable to a state-of-the-art, semantically augmented language model; but it is simpler and easier to implement. Our cache language model requires nothing beyond lexicalization, and thus is applicable to all programming languages.", "title": "" }, { "docid": "neg:1840067_10", "text": "This paper examines important factors for link prediction in networks and provides a general, high-performance framework for the prediction task. Link prediction in sparse networks presents a significant challenge due to the inherent disproportion of links that can form to links that do form. Previous research has typically approached this as an unsupervised problem. While this is not the first work to explore supervised learning, many factors significant in influencing and guiding classification remain unexplored. In this paper, we consider these factors by first motivating the use of a supervised framework through a careful investigation of issues such as network observational period, generality of existing methods, variance reduction, topological causes and degrees of imbalance, and sampling approaches. We also present an effective flow-based predicting algorithm, offer formal bounds on imbalance in sparse network link prediction, and employ an evaluation method appropriate for the observed imbalance. Our careful consideration of the above issues ultimately leads to a completely general framework that outperforms unsupervised link prediction methods by more than 30% AUC.", "title": "" }, { "docid": "neg:1840067_11", "text": "Efficient actuation control of flapping-wing microrobots requires a low-power frequency reference with good absolute accuracy. To meet this requirement, we designed a fully-integrated 10MHz relaxation oscillator in a 40nm CMOS process. By adaptively biasing the continuous-time comparator, we are able to achieve a power consumption of 20μW, a 68% reduction to the conventional fixed bias design. A built-in self-calibration controller enables fast post-fabrication calibration of the clock frequency. Measurements show a frequency drift of 1.2% as the battery voltage changes from 3V to 4.1V.", "title": "" }, { "docid": "neg:1840067_12", "text": "Early diagnosis, playing an important role in preventing progress and treating the Alzheimer's disease (AD), is based on classification of features extracted from brain images. The features have to accurately capture main AD-related variations of anatomical brain structures, such as, e.g., ventricles size, hippocampus shape, cortical thickness, and brain volume. This paper proposed to predict the AD with a deep 3D convolutional neural network (3D-CNN), which can learn generic features capturing AD biomarkers and adapt to different domain datasets. The 3D-CNN is built upon a 3D convolutional autoencoder, which is pre-trained to capture anatomical shape variations in structural brain MRI scans. Fully connected upper layers of the 3D-CNN are then fine-tuned for each task-specific AD classification. Experiments on the CADDementia MRI dataset with no skull-stripping preprocessing have shown our 3D-CNN outperforms several conventional classifiers by accuracy. 
Abilities of the 3D-CNN to generalize the features learnt and adapt to other domains have been validated on the ADNI dataset.", "title": "" }, { "docid": "neg:1840067_13", "text": "It is common practice for developers of user-facing software to transform a mock-up of a graphical user interface (GUI) into code. This process takes place both at an application’s inception and in an evolutionary context as GUI changes keep pace with evolving features. Unfortunately, this practice is challenging and time-consuming. In this paper, we present an approach that automates this process by enabling accurate prototyping of GUIs via three tasks: detection, classification, and assembly. First, logical components of a GUI are detected from a mock-up artifact using either computer vision techniques or mock-up metadata. Then, software repository mining, automated dynamic analysis, and deep convolutional neural networks are utilized to accurately classify GUI-components into domain-specific types (e.g., toggle-button). Finally, a data-driven, K-nearest-neighbors algorithm generates a suitable hierarchical GUI structure from which a prototype application can be automatically assembled. We implemented this approach for Android in a system called REDRAW. Our evaluation illustrates that REDRAW achieves an average GUI-component classification accuracy of 91% and assembles prototype applications that closely mirror target mock-ups in terms of visual affinity while exhibiting reasonable code structure. Interviews with industrial practitioners illustrate ReDraw’s potential to improve real development workflows.", "title": "" }, { "docid": "neg:1840067_14", "text": "Technology is increasingly shaping our social structures and is becoming a driving force in altering human biology. Besides, human activities already proved to have a significant impact on the Earth system which in turn generates complex feedback loops between social and ecological systems. Furthermore, since our species evolved relatively fast from small groups of hunter-gatherers to large and technology-intensive urban agglomerations, it is not a surprise that the major institutions of human society are no longer fit to cope with the present complexity. In this note we draw foundational parallelisms between neurophysiological systems and ICT-enabled social systems, discussing how frameworks rooted in biology and physics could provide heuristic value in the design of evolutionary systems relevant to politics and economics. In this regard we highlight how the governance of emerging technology (i.e. nanotechnology, biotechnology, information technology, and cognitive science), and the one of climate change both presently confront us with a number of connected challenges. In particular: historically high level of inequality; the co-existence of growing multipolar cultural systems in an unprecedentedly connected world; the unlikely reaching of the institutional agreements required to deviate abnormal trajectories of development. We argue that wise general solutions to such interrelated issues should embed the deep understanding of how to elicit mutual incentives in the socio-economic subsystems of Earth system in order to jointly concur to a global utility function (e.g. avoiding the reach of planetary boundaries and widespread social unrest). 
We leave some open questions on how techno-social systems can effectively learn and adapt with respect to our understanding of geopolitical", "title": "" }, { "docid": "neg:1840067_15", "text": "This paper presents a system for calibrating the extrinsic parameters and timing offsets of an array of cameras, 3-D lidars, and global positioning system/inertial navigation system sensors, without the requirement of any markers or other calibration aids. The aim of the approach is to achieve calibration accuracies comparable with state-of-the-art methods, while requiring less initial information about the system being calibrated and thus being more suitable for use by end users. The method operates by utilizing the motion of the system being calibrated. By estimating the motion each individual sensor observes, an estimate of the extrinsic calibration of the sensors is obtained. Our approach extends standard techniques for motion-based calibration by incorporating estimates of the accuracy of each sensor's readings. This yields a probabilistic approach that calibrates all sensors simultaneously and facilitates the estimation of the uncertainty in the final calibration. In addition, we combine this motion-based approach with appearance information. This gives an approach that requires no initial calibration estimate and takes advantage of all available alignment information to provide an accurate and robust calibration for the system. The new framework is validated with datasets collected with different platforms and different sensors' configurations, and compared with state-of-the-art approaches.", "title": "" }, { "docid": "neg:1840067_16", "text": "Eleven cases of sudden death of men restrained in a prone position by police officers are reported. Nine of the men were hogtied, one was tied to a hospital gurney, and one was manually held prone. All subjects were in an excited delirious state when restrained. Three were psychotic, whereas the others were acutely delirious from drugs (six from cocaine, one from methamphetamine, and one from LSD). Two were shocked with stun guns shortly before death. The literature is reviewed and mechanisms of death are discussed.", "title": "" }, { "docid": "neg:1840067_17", "text": "Kriging or Gaussian Process Regression is applied in many fields as a non-linear regression model as well as a surrogate model in the field of evolutionary computation. However, the computational and space complexity of Kriging, that is cubic and quadratic in the number of data points respectively, becomes a major bottleneck with more and more data available nowadays. In this paper, we propose a general methodology for the complexity reduction, called cluster Kriging, where the whole data set is partitioned into smaller clusters and multiple Kriging models are built on top of them. In addition, four Kriging approximation algorithms are proposed as candidate algorithms within the new framework. Each of these algorithms can be applied to much larger data sets while maintaining the advantages and power of Kriging. The proposed algorithms are explained in detail and compared empirically against a broad set of existing state-of-the-art Kriging approximation methods on a welldefined testing framework. According to the empirical study, the proposed algorithms consistently outperform the existing algorithms. 
Moreover, some practical suggestions are provided for using the proposed algorithms.", "title": "" }, { "docid": "neg:1840067_18", "text": "Item recommendation is a personalized ranking task. To this end, many recommender systems optimize models with pairwise ranking objectives, such as the Bayesian Personalized Ranking (BPR). Using matrix Factorization (MF) - the most widely used model in recommendation - as a demonstration, we show that optimizing it with BPR leads to a recommender model that is not robust. In particular, we find that the resultant model is highly vulnerable to adversarial perturbations on its model parameters, which implies the possibly large error in generalization. To enhance the robustness of a recommender model and thus improve its generalization performance, we propose a new optimization framework, namely Adversarial Personalized Ranking (APR). In short, our APR enhances the pairwise ranking method BPR by performing adversarial training. It can be interpreted as playing a minimax game, where the minimization of the BPR objective function meanwhile defends an adversary, which adds adversarial perturbations on model parameters to maximize the BPR objective function. To illustrate how it works, we implement APR on MF by adding adversarial perturbations on the embedding vectors of users and items. Extensive experiments on three public real-world datasets demonstrate the effectiveness of APR - by optimizing MF with APR, it outperforms BPR with a relative improvement of 11.2% on average and achieves state-of-the-art performance for item recommendation. Our implementation is available at: \\urlhttps://github.com/hexiangnan/adversarial_personalized_ranking.", "title": "" }, { "docid": "neg:1840067_19", "text": "To investigate fast human reaching movements in 3D, we asked 11 right-handed persons to catch a tennis ball while we tracked the movements of their arms. To ensure consistent trajectories of the ball, we used a catapult to throw the ball from three different positions. Tangential velocity profiles of the hand were in general bell-shaped and hand movements in 3D coincided with well known results for 2D point-to-point movements such as minimum jerk theory or the 2/3rd power law. Furthermore, two phases, consisting of fast reaching and slower fine movements at the end of hand placement could clearly be seen. The aim of this study was to find a way to generate human-like (catching) trajectories for a humanoid robot.", "title": "" } ]
1840068
Empirical evidence for resource-rational anchoring and adjustment.
[ { "docid": "pos:1840068_0", "text": "In spite of its familiar phenomenology, the mechanistic basis for mental effort remains poorly understood. Although most researchers agree that mental effort is aversive and stems from limitations in our capacity to exercise cognitive control, it is unclear what gives rise to those limitations and why they result in an experience of control as costly. The presence of these control costs also raises further questions regarding how best to allocate mental effort to minimize those costs and maximize the attendant benefits. This review explores recent advances in computational modeling and empirical research aimed at addressing these questions at the level of psychological process and neural mechanism, examining both the limitations to mental effort exertion and how we manage those limited cognitive resources. We conclude by identifying remaining challenges for theoretical accounts of mental effort as well as possible applications of the available findings to understanding the causes of and potential solutions for apparent failures to exert the mental effort required of us.", "title": "" }, { "docid": "pos:1840068_1", "text": "This article reviews a diverse set of proposals for dual processing in higher cognition within largely disconnected literatures in cognitive and social psychology. All these theories have in common the distinction between cognitive processes that are fast, automatic, and unconscious and those that are slow, deliberative, and conscious. A number of authors have recently suggested that there may be two architecturally (and evolutionarily) distinct cognitive systems underlying these dual-process accounts. However, it emerges that (a) there are multiple kinds of implicit processes described by different theorists and (b) not all of the proposed attributes of the two kinds of processing can be sensibly mapped on to two systems as currently conceived. It is suggested that while some dual-process theories are concerned with parallel competing processes involving explicit and implicit knowledge systems, others are concerned with the influence of preconscious processes that contextualize and shape deliberative reasoning and decision-making.", "title": "" } ]
[ { "docid": "neg:1840068_0", "text": "Blended learning involves the combination of two fields of concern: education and educational technology. To gain the scholarly recognition from educationists, it is necessary to revisit its models and educational theory underpinned. This paper respond to this issue by reviewing models related to blended learning based on two prominent educational theorists, Maslow’s and Vygotsky’s view. Four models were chosen due to their holistic ideas or vast citations related to blended learning: (1) E-Moderation Model emerging from Open University of UK; (2) Learning Ecology Model by Sun Microsoft System; (3) Blended Learning Continuum in University of Glamorgan; and (4) Inquirybased Framework by Garrison and Vaughan. The discussion of each model concerning pedagogical impact to learning and teaching are made. Critical review of the models in accordance to Maslow or Vygotsky is argued. Such review is concluded with several key principles for the design and practice in", "title": "" }, { "docid": "neg:1840068_1", "text": "Saliency methods aim to explain the predictions of deep neural networks. These methods lack reliability when the explanation is sensitive to factors that do not contribute to the model prediction. We use a simple and common pre-processing step —adding a constant shift to the input data— to show that a transformation with no effect on the model can cause numerous methods to incorrectly attribute. In order to guarantee reliability, we posit that methods should fulfill input invariance, the requirement that a saliency method mirror the sensitivity of the model with respect to transformations of the input. We show, through several examples, that saliency methods that do not satisfy input invariance result in misleading attribution.", "title": "" }, { "docid": "neg:1840068_2", "text": "Colorectal cancer (CRC) shows variable underlying molecular changes with two major mechanisms of genetic instability: chromosomal instability and microsatellite instability. This review aims to delineate the different pathways of colorectal carcinogenesis and provide an overview of the most recent advances in molecular pathological classification systems for colorectal cancer. Two molecular pathological classification systems for CRC have recently been proposed. Integrated molecular analysis by The Cancer Genome Atlas project is based on a wide-ranging genomic and transcriptomic characterisation study of CRC using array-based and sequencing technologies. This approach classified CRC into two major groups consistent with previous classification systems: (1) ∼16 % hypermutated cancers with either microsatellite instability (MSI) due to defective mismatch repair (∼13 %) or ultramutated cancers with DNA polymerase epsilon proofreading mutations (∼3 %); and (2) ∼84 % non-hypermutated, microsatellite stable (MSS) cancers with a high frequency of DNA somatic copy number alterations, which showed common mutations in APC, TP53, KRAS, SMAD4, and PIK3CA. The recent Consensus Molecular Subtypes (CMS) Consortium analysing CRC expression profiling data from multiple studies described four CMS groups: almost all hypermutated MSI cancers fell into the first category CMS1 (MSI-immune, 14 %) with the remaining MSS cancers subcategorised into three groups of CMS2 (canonical, 37 %), CMS3 (metabolic, 13 %) and CMS4 (mesenchymal, 23 %), with a residual unclassified group (mixed features, 13 %). 
Although further research is required to validate these two systems, they may be useful for clinical trial designs and future post-surgical adjuvant treatment decisions, particularly for tumours with aggressive features or predicted responsiveness to immune checkpoint blockade.", "title": "" }, { "docid": "neg:1840068_3", "text": "Facial landmark localization is important to many facial recognition and analysis tasks, such as face attributes analysis, head pose estimation, 3D face modelling, and facial expression analysis. In this paper, we propose a new approach to localizing landmarks in facial image by deep convolutional neural network (DCNN). We make two enhancements on the CNN to adapt it to the feature localization task as follows. Firstly, we replace the commonly used max pooling by depth-wise convolution to obtain better localization performance. Secondly, we define a response map for each facial points as a 2D probability map indicating the presence likelihood, and train our model with a KL divergence loss. To obtain robust localization results, our approach first takes the expectations of the response maps of Enhanced CNN and then applies auto-encoder model to the global shape vector, which is effective to rectify the outlier points by the prior global landmark configurations. The proposed ECNN method achieves 5.32% mean error on the experiments on the 300-W dataset, which is comparable to the state-of-the-art performance on this standard benchmark, showing the effectiveness of our methods.", "title": "" }, { "docid": "neg:1840068_4", "text": "In this paper we consider several new versions of approximate string matching with gaps. The main characteristic of these new versions is the existence of gaps in the matching of a given pattern in a text. Algorithms are devised for each version and their time and space complexities are stated. These specific versions of approximate string matching have various applications in computerized music analysis. CR Classification: F.2.2", "title": "" }, { "docid": "neg:1840068_5", "text": "As news is increasingly accessed on smartphones and tablets, the need for personalising news app interactions is apparent. We report a series of three studies addressing key issues in the development of adaptive news app interfaces. We first surveyed users' news reading preferences and behaviours; analysis revealed three primary types of reader. We then implemented and deployed an Android news app that logs users' interactions with the app. We used the logs to train a classifier and showed that it is able to reliably recognise a user according to their reader type. Finally we evaluated alternative, adaptive user interfaces for each reader type. The evaluation demonstrates the differential benefit of the adaptation for different users of the news app and the feasibility of adaptive interfaces for news apps.", "title": "" }, { "docid": "neg:1840068_6", "text": "Objective: The challenging task of heart rate (HR) estimation from the photoplethysmographic (PPG) signal, during intensive physical exercises, is tackled in this paper. Methods: The study presents a detailed analysis of a novel algorithm (WFPV) that exploits a Wiener filter to attenuate the motion artifacts, a phase vocoder to refine the HR estimate and user-adaptive post-processing to track the subject physiology. Additionally, an offline version of the HR estimation algorithm that uses Viterbi decoding is designed for scenarios that do not require online HR monitoring (WFPV+VD). 
The performance of the HR estimation systems is rigorously compared with existing algorithms on the publically available database of 23 PPG recordings. Results: On the whole dataset of 23 PPG recordings, the algorithms result in average absolute errors of 1.97 and 1.37 BPM in the online and offline modes, respectively. On the test dataset of 10 PPG recordings which were most corrupted with motion artifacts, WFPV has an error of 2.95 BPM on its own and 2.32 BPM in an ensemble with two existing algorithms. Conclusion: The error rate is significantly reduced when compared with the state-of-the art PPG-based HR estimation methods. Significance: The proposed system is shown to be accurate in the presence of strong motion artifacts and in contrast to existing alternatives has very few free parameters to tune. The algorithm has a low computational cost and can be used for fitness tracking and health monitoring in wearable devices. The MATLAB implementation of the algorithm is provided online.", "title": "" }, { "docid": "neg:1840068_7", "text": "Vector modulators are a key component in phased array antennas and communications systems. The paper describes a novel design methodology for a bi-directional, reflection-type balanced vector modulator using metal-oxide-semiconductor field-effect (MOS) transistors as active loads, which provides an improved constellation quality. The fabricated IC occupies 787 × 1325 μm2 and exhibits a minimum transmission loss of 9 dB and return losses better than 14 dB. As an application example, its use in a 16-QAM modulator is verified.", "title": "" }, { "docid": "neg:1840068_8", "text": "Thermoelectric generators (TEGs) provide a unique way for harvesting thermal energy. These devices are compact, durable, inexpensive, and scalable. Unfortunately, the conversion efficiency of TEGs is low. This requires careful design of energy harvesting systems including the interface circuitry between the TEG module and the load, with the purpose of minimizing power losses. In this paper, it is analytically shown that the traditional approach for estimating the internal resistance of TEGs may result in a significant loss of harvested power. This drawback comes from ignoring the dependence of the electrical behavior of TEGs on their thermal behavior. Accordingly, a systematic method for accurately determining the TEG input resistance is presented. Next, through a case study on automotive TEGs, it is shown that compared to prior art, more than 11% of power losses in the interface circuitry that lies between the TEG and the electrical load can be saved by the proposed modeling technique. In addition, it is demonstrated that the traditional approach would have resulted in a deviation from the target regulated voltage by as much as 59%.", "title": "" }, { "docid": "neg:1840068_9", "text": "The heterogeneous cloud radio access network (H-CRAN) is a promising paradigm that incorporates cloud computing into heterogeneous networks (HetNets), thereby taking full advantage of cloud radio access networks (C-RANs) and HetNets. Characterizing cooperative beamforming with fronthaul capacity and queue stability constraints is critical for multimedia applications to improve the energy efficiency (EE) in H-CRANs. An energy-efficient optimization objective function with individual fronthaul capacity and intertier interference constraints is presented in this paper for queue-aware multimedia H-CRANs. 
To solve this nonconvex objective function, a stochastic optimization problem is reformulated by introducing the general Lyapunov optimization framework. Under the Lyapunov framework, this optimization problem is equivalent to an optimal network-wide cooperative beamformer design algorithm with instantaneous power, average power, and intertier interference constraints, which can be regarded as a weighted sum EE maximization problem and solved by a generalized weighted minimum mean-square error approach. The mathematical analysis and simulation results demonstrate that a tradeoff between EE and queuing delay can be achieved, and this tradeoff strictly depends on the fronthaul constraint.", "title": "" }, { "docid": "neg:1840068_10", "text": "CIPO is the very “tip of the iceberg” of functional gastrointestinal disorders, being a rare and frequently misdiagnosed condition characterized by an overall poor outcome. Diagnosis should be based on clinical features, natural history and radiologic findings. There is no cure for CIPO and management strategies include a wide array of nutritional, pharmacologic, and surgical options which are directed to minimize malnutrition, promote gut motility and reduce complications of stasis (ie, bacterial overgrowth). Pain may become so severe to necessitate major analgesic drugs. Underlying causes of secondary CIPO should be thoroughly investigated and, if detected, treated accordingly. Surgery should be indicated only in a highly selected, well characterized subset of patients, while isolated intestinal or multivisceral transplantation is a rescue therapy only in those patients with intestinal failure unsuitable for or unable to continue with TPN/HPN. Future perspectives in CIPO will be directed toward an accurate genomic/proteomic phenotyping of these rare, challenging patients. Unveiling causative mechanisms of neuro-ICC-muscular abnormalities will pave the way for targeted therapeutic options for patients with CIPO.", "title": "" }, { "docid": "neg:1840068_11", "text": "Malicious software in form of Internet worms, computer viruses, and Trojan horses poses a major threat to the security of networked systems. The diversity and amount of its variants severely undermine the effectiveness of classical signature-based detection. Yet variants of malware families share typical behavioral patterns reflecting its origin and purpose. We aim to exploit these shared patterns for classification of malware and propose a method for learning and discrimination of malware behavior. Our method proceeds in three stages: (a) behavior of collected malware is monitored in a sandbox environment, (b) based on a corpus of malware labeled by an anti-virus scanner a malware behavior classifier is trained using learning techniques and (c) discriminative features of the behavior models are ranked for explanation of classification decisions. Experiments with different heterogeneous test data collected over several months using honeypots demonstrate the effectiveness of our method, especially in detecting novel instances of malware families previously not recognized by commercial anti-virus software.", "title": "" }, { "docid": "neg:1840068_12", "text": "In this paper, we present a new image forgery detection method based on deep learning technique, which utilizes a convolutional neural network (CNN) to automatically learn hierarchical representations from the input RGB color images. The proposed CNN is specifically designed for image splicing and copy-move detection applications. 
Rather than a random strategy, the weights at the first layer of our network are initialized with the basic high-pass filter set used in calculation of residual maps in spatial rich model (SRM), which serves as a regularizer to efficiently suppress the effect of image contents and capture the subtle artifacts introduced by the tampering operations. The pre-trained CNN is used as patch descriptor to extract dense features from the test images, and a feature fusion technique is then explored to obtain the final discriminative features for SVM classification. The experimental results on several public datasets show that the proposed CNN based model outperforms some state-of-the-art methods.", "title": "" }, { "docid": "neg:1840068_13", "text": "Context: Offshore software development outsourcing is a modern business strategy for developing high quality software at low cost. Objective: The objective of this research paper is to identify and analyse factors that are important in terms of the competitiveness of vendor organisations in attracting outsourcing projects. Method: We performed a systematic literature review (SLR) by applying our customised search strings which were derived from our research questions. We performed all the SLR steps, such as the protocol development, initial selection, final selection, quality assessment, data extraction and data synthesis. Results: We have identified factors such as cost-saving, skilled human resource, appropriate infrastructure, quality of product and services, efficient outsourcing relationships management, and an organisation’s track record of successful projects which are generally considered important by the outsourcing clients. Our results indicate that appropriate infrastructure, cost-saving, and skilled human resource are common in three continents, namely Asia, North America and Europe. We identified appropriate infrastructure, cost-saving, and quality of products and services as being common in three types of organisations (small, medium and large). We have also identified four factors-appropriate infrastructure, cost-saving, quality of products and services, and skilled human resource as being common in the two decades (1990–1999 and 2000–mid 2008). Conclusions: Cost-saving should not be considered as the driving factor in the selection process of software development outsourcing vendors. Vendors should rather address other factors in order to compete in the OSDO business, such as skilled human resource and quality of products and services.", "title": "" }, { "docid": "neg:1840068_14", "text": "This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber&Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li&Henry, 1993] where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber&Kemerer’s OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. 
We also showed that they are, on our data set, better predictors than “traditional” code metrics, which can only be collected at a later phase of the software development processes. Key-words: Object-Oriented Design Metrics; Error Prediction Model; Object-Oriented Software Development; C++ Programming Language.", "title": "" }, { "docid": "neg:1840068_15", "text": "Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning, or for enhancing museum visits, has been less well considered. The state-of-the-art in serious game technology is identical to that of the state-of-the-art in entertainment games technology. As a result the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as being in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we will focus on the state-of-the-art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods and point out unsolved problems and challenges. In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented.", "title": "" }, { "docid": "neg:1840068_16", "text": "The Internet of Things (IoT) is intended for ubiquitous connectivity among different entities or “things”. While its purpose is to provide effective and efficient solutions, security of the devices and network is a challenging issue. The number of devices connected along with the ad-hoc nature of the system further exacerbates the situation. Therefore, security and privacy has emerged as a significant challenge for the IoT. In this paper, we aim to provide a thorough survey related to the privacy and security challenges of the IoT. This document addresses these challenges from the perspective of technologies and architecture used. This work focuses also in IoT intrinsic vulnerabilities as well as the security challenges of various layers based on the security principles of data confidentiality, integrity and availability. This survey analyzes articles published for the IoT at the time and relates it to the security conjuncture of the field and its projection to the future.", "title": "" }, { "docid": "neg:1840068_17", "text": "The interest in action and gesture recognition has grown considerably in the last years. In this paper, we present a survey on current deep learning methodologies for action and gesture recognition in image sequences. We introduce a taxonomy that summarizes important aspects of deep learning for approaching both tasks. 
We review the details of the proposed architectures, fusion strategies, main datasets, and competitions. We summarize and discuss the main works proposed so far with particular interest on how they treat the temporal dimension of data, discussing their main features and identify opportunities and challenges for future research.", "title": "" }, { "docid": "neg:1840068_18", "text": "Research and development (R&D) project selection is an important task for organizations with R&D project management. It is a complicated multi-stage decision-making process, which involves groups of decision makers. Current research on R&D project selection mainly focuses on mathematical decision models and their applications, but ignores the organizational aspect of the decision-making process. This paper proposes an organizational decision support system (ODSS) for R&D project selection. Object-oriented method is used to design the architecture of the ODSS. An organizational decision support system has also been developed and used to facilitate the selection of project proposals in the National Natural Science Foundation of China (NSFC). The proposed system supports the R&D project selection process at the organizational level. It provides useful information for decision-making tasks in the R&D project selection process. D 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840068_19", "text": "The Suricata intrusion-detection system for computer-network monitoring has been advanced as an open-source improvement on the popular Snort system that has been available for over a decade. Suricata includes multi-threading to improve processing speed beyond Snort. Previous work comparing the two products has not used a real-world setting. We did this and evaluated the speed, memory requirements, and accuracy of the detection engines in three kinds of experiments: (1) on the full traffic of our school as observed on its \" backbone\" in real time, (2) on a supercomputer with packets recorded from the backbone, and (3) in response to malicious packets sent by a red-teaming product. We used the same set of rules for both products with a few small exceptions where capabilities were missing. We conclude that Suricata can handle larger volumes of traffic than Snort with similar accuracy, and that its performance scaled roughly linearly with the number of processors up to 48. We observed no significant speed or accuracy advantage of Suricata over Snort in its current state, but it is still being developed. Our methodology should be useful for comparing other intrusion-detection products.", "title": "" } ]
1840069
Energy-Efficient Power Control: A Look at 5G Wireless Technologies
[ { "docid": "pos:1840069_0", "text": "We consider data transmissions in a full duplex (FD) multiuser multiple-input multiple-output (MU-MIMO) system, where a base station (BS) bidirectionally communicates with multiple users in the downlink (DL) and uplink (UL) channels on the same system resources. The system model of consideration has been thought to be impractical due to the self-interference (SI) between transmit and receive antennas at the BS. Interestingly, recent advanced techniques in hardware design have demonstrated that the SI can be suppressed to a degree that possibly allows for FD transmission. This paper goes one step further in exploring the potential gains in terms of the spectral efficiency (SE) and energy efficiency (EE) that can be brought by the FD MU-MIMO model. Toward this end, we propose low-complexity designs for maximizing the SE and EE, and evaluate their performance numerically. For the SE maximization problem, we present an iterative design that obtains a locally optimal solution based on a sequential convex approximation method. In this way, the nonconvex precoder design problem is approximated by a convex program at each iteration. Then, we propose a numerical algorithm to solve the resulting convex program based on the alternating and dual decomposition approaches, where analytical expressions for precoders are derived. For the EE maximization problem, using the same method, we first transform it into a concave-convex fractional program, which then can be reformulated as a convex program using the parametric approach. We will show that the resulting problem can be solved similarly to the SE maximization problem. Numerical results demonstrate that, compared to a half duplex system, the FD system of interest with the proposed designs achieves a better SE and a slightly smaller EE when the SI is small.", "title": "" } ]
[ { "docid": "neg:1840069_0", "text": "As Computer curricula have developed, Human-Computer Interaction has gradually become part of many of those curricula and the recent ACM/IEEE report on the core of Computing Science and Engineering, includes HumanComputer Interaction as one of the fundamental sub-areas that should be addressed by any such curricula. However, both technology and Human-Computer Interaction are evolving rapidly, thus a continuous effort is needed to maintain a program, bibliography and a set of practical assignments up to date and adapted to the current technology. This paper briefly presents an introductory course on Human-Computer Interaction offered to Electrical and Computer Engineering students at the University of Aveiro.", "title": "" }, { "docid": "neg:1840069_1", "text": "The previous chapters gave an insightful introduction into the various facets of Business Process Management. We now share a rich understanding of the essential ideas behind designing and managing processes for organizational purposes. We have also learned about the various streams of research and development that have influenced contemporary BPM. As a matter of fact, BPM has become a holistic management discipline. As such, it requires that a plethora of facets needs to be addressed for its successful und sustainable application. This chapter provides a framework that consolidates and structures the essential factors that constitute BPM as a whole. Drawing from research in the field of maturity models, we suggest six core elements of BPM: strategic alignment, governance, methods, information technology, people, and culture. These six elements serve as the structure for this BPM Handbook. 1 Why Looking for BPM Core Elements? A recent global study by Gartner confirmed the significance of BPM with the top issue for CIOs identified for the sixth year in a row being the improvement of business processes (Gartner 2010). While such an interest in BPM is beneficial for professionals in this field, it also increases the expectations and the pressure to deliver on the promises of the process-centered organization. This context demands a sound understanding of how to approach BPM and a framework that decomposes the complexity of a holistic approach such as Business Process Management. A framework highlighting essential building blocks of BPM can particularly serve the following purposes: M. Rosemann (*) Information Systems Discipline, Faculty of Science and Technology, Queensland University of Technology, Brisbane, Australia e-mail: m.rosemann@qut.edu.au J. vom Brocke and M. Rosemann (eds.), Handbook on Business Process Management 1, International Handbooks on Information Systems, DOI 10.1007/978-3-642-00416-2_5, # Springer-Verlag Berlin Heidelberg 2010 107 l Project and Program Management: How can all relevant issues within a BPM approach be safeguarded? When implementing a BPM initiative, either as a project or as a program, is it essential to individually adjust the scope and have different BPM flavors in different areas of the organization? What competencies are relevant? What approach fits best with the culture and BPM history of the organization? What is it that needs to be taken into account “beyond modeling”? People for one thing play an important role like Hammer has pointed out in his chapter (Hammer 2010), but what might be further elements of relevance? In order to find answers to these questions, a framework articulating the core elements of BPM provides invaluable advice. 
- Vendor Management: How can service and product offerings in the field of BPM be evaluated in terms of their overall contribution to successful BPM? What portfolio of solutions is required to address the key issues of BPM, and to what extent do these solutions need to be sourced from outside the organization? There is, for example, a large list of providers of process-aware information systems, change experts, BPM training providers, and a variety of BPM consulting services. How can it be guaranteed that these offerings cover the required capabilities? In fact, the vast number of BPM offerings does not meet the requirements as distilled in this Handbook; see for example, Hammer (2010), Davenport (2010), Harmon (2010), and Rummler and Ramias (2010). It is also for the purpose of BPM make-or-buy decisions and the overall vendor management, that a framework structuring core elements of BPM is highly needed. - Complexity Management: How can the complexity that results from the holistic and comprehensive nature of BPM be decomposed so that it becomes manageable? How can a number of coexisting BPM initiatives within one organization be synchronized? An overarching picture of BPM is needed in order to provide orientation for these initiatives. Following a “divide-and-conquer” approach, a shared understanding of the core elements can help to focus on special factors of BPM. For each element, a specific analysis could be carried out involving experts from the various fields. Such an assessment should be conducted by experts with the required technical, business-oriented, and socio-cultural know-how. - Standards Management: What elements of BPM need to be standardized across the organization? What BPM elements need to be mandated for every BPM initiative? What BPM elements can be configured individually within each initiative? A comprehensive framework allows an element-by-element decision for the degrees of standardization that are required. For example, it might be decided that a company-wide process model repository will be “enforced” on all BPM initiatives, while performance management and cultural change will be decentralized activities. - Strategy Management: What is the BPM strategy of the organization? How does this strategy materialize in a BPM roadmap? How will the naturally limited attention of all involved stakeholders be distributed across the various BPM elements? How do we measure progression in a BPM initiative (“BPM audit”)?", "title": "" }, { "docid": "neg:1840069_2", "text": "This paper presents a Bayesian optimization method with exponential convergence without the need of auxiliary optimization and without the δ-cover sampling. Most Bayesian optimization methods require auxiliary optimization: an additional non-convex global optimization problem, which can be time-consuming and hard to implement in practice. Also, the existing Bayesian optimization method with exponential convergence [1] requires access to the δ-cover sampling, which was considered to be impractical [1, 2]. Our approach eliminates both requirements and achieves an exponential convergence rate.", "title": "" }, { "docid": "neg:1840069_3", "text": "GPUs and accelerators have become ubiquitous in modern supercomputing systems. Scientific applications from a wide range of fields are being modified to take advantage of their compute power. However, data movement continues to be a critical bottleneck in harnessing the full potential of a GPU. 
Data in the GPU memory has to be moved into the host memory before it can be sent over the network. MPI libraries like MVAPICH2 have provided solutions to alleviate this bottleneck using techniques like pipelining. GPUDirect RDMA is a feature introduced in CUDA 5.0, that allows third party devices like network adapters to directly access data in GPU device memory, over the PCIe bus. NVIDIA has partnered with Mellanox to make this solution available for InfiniBand clusters. In this paper, we evaluate the first version of GPUDirect RDMA for InfiniBand and propose designs in MVAPICH2 MPI library to efficiently take advantage of this feature. We highlight the limitations posed by current generation architectures in effectively using GPUDirect RDMA and address these issues through novel designs in MVAPICH2. To the best of our knowledge, this is the first work to demonstrate a solution for internode GPU-to-GPU MPI communication using GPUDirect RDMA. Results show that the proposed designs improve the latency of internode GPU-to-GPU communication using MPI Send/MPI Recv by 69% and 32% for 4Byte and 128KByte messages, respectively. The designs boost the uni-directional bandwidth achieved using 4KByte and 64KByte messages by 2x and 35%, respectively. We demonstrate the impact of the proposed designs using two end-applications: LBMGPU and AWP-ODC. They improve the communication times in these applications by up to 35% and 40%, respectively.", "title": "" }, { "docid": "neg:1840069_4", "text": "We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.", "title": "" }, { "docid": "neg:1840069_5", "text": "Users can enjoy personalized services provided by various context-aware applications that collect users' contexts through sensor-equipped smartphones. Meanwhile, serious privacy concerns arise due to the lack of privacy preservation mechanisms. Currently, most mechanisms apply passive defense policies in which the released contexts from a privacy preservation system are always real, leading to a great probability with which an adversary infers the hidden sensitive contexts about the users. In this paper, we apply a deception policy for privacy preservation and present a novel technique, FakeMask, in which fake contexts may be released to provably preserve users' privacy. The output sequence of contexts by FakeMask can be accessed by the untrusted context-aware applications or be used to answer queries from those applications. 
Since the output contexts may be different from the original contexts, an adversary has greater difficulty in inferring the real contexts. Therefore, FakeMask limits what adversaries can learn from the output sequence of contexts about the user being in sensitive contexts, even if the adversaries are powerful enough to have the knowledge about the system and the temporal correlations among the contexts. The essence of FakeMask is a privacy checking algorithm which decides whether to release a fake context for the current context of the user. We present a novel privacy checking algorithm and an efficient one to accelerate the privacy checking process. Extensive evaluation experiments on real smartphone context traces of users demonstrate the improved performance of FakeMask over other approaches.", "title": "" }, { "docid": "neg:1840069_6", "text": "Rootkits Trojan virus, which can control attacked computers, delete import files and even steal password, are much popular now. Interrupt Descriptor Table (IDT) hook is rootkit technology in kernel level of Trojan. The paper makes deeply analysis on the IDT hooks handle procedure of rootkit Trojan according to previous other researchers methods. We compare its IDT structure and programs to find how Trojan interrupt handler code can respond the interrupt vector request in both real address mode and protected address mode. Finally, we analyze the IDT hook detection methods of rootkits Trojan by Windbg or other professional tools.", "title": "" }, { "docid": "neg:1840069_7", "text": "In this paper, we explore the use of the Stellar Consensus Protocol (SCP) and its Federated Byzantine Agreement (FBA) algorithm for ensuring trust and reputation between federated, cloud-based platform instances (nodes) and their participants. Our approach is grounded on federated consensus mechanisms, which promise data quality managed through computational trust and data replication, without a centralized authority. We perform our experimentation on the ground of the NIMBLE cloud manufacturing platform, which is designed to support growth of B2B digital manufacturing communities and their businesses through federated platform services, managed by peer-to-peer networks. We discuss the message exchange flow between the NIMBLE application logic and Stellar consensus logic.", "title": "" }, { "docid": "neg:1840069_8", "text": "Latency of interactive computer systems is a product of the processing, transport and synchronisation delays inherent to the components that create them. In a virtual environment (VE) system, latency is known to be detrimental to a user's sense of immersion, physical performance and comfort level. Accurately measuring the latency of a VE system for study or optimisation, is not straightforward. A number of authors have developed techniques for characterising latency, which have become progressively more accessible and easier to use. In this paper, we characterise these techniques. We describe a simple mechanical simulator designed to simulate a VE with various amounts of latency that can be finely controlled (to within 3ms). We develop a new latency measurement technique called Automated Frame Counting to assist in assessing latency using high speed video (to within 1ms). We use the mechanical simulator to measure the accuracy of Steed's and Di Luca's measurement techniques, proposing improvements where they may be made. 
We use the methods to measure latency of a number of interactive systems that may be of interest to the VE engineer, with a significant level of confidence. All techniques were found to be highly capable however Steed's Method is both accurate and easy to use without requiring specialised hardware.", "title": "" }, { "docid": "neg:1840069_9", "text": "In this study, we apply learning-to-rank algorithms to design trading strategies using relative performance of a group of stocks based on investors’ sentiment toward these stocks. We show that learning-to-rank algorithms are effective in producing reliable rankings of the best and the worst performing stocks based on investors’ sentiment. More specifically, we use the sentiment shock and trend indicators introduced in the previous studies, and we design stock selection rules of holding long positions of the top 25% stocks and short positions of the bottom 25% stocks according to rankings produced by learning-to-rank algorithms. We then apply two learning-to-rank algorithms, ListNet and RankNet, in stock selection processes and test long-only and long-short portfolio selection strategies using 10 years of market and news sentiment data. Through backtesting of these strategies from 2006 to 2014, we demonstrate that our portfolio strategies produce risk-adjusted returns superior to the S&P500 index return, the hedge fund industry average performance HFRIEMN, and some sentiment-based approaches without learning-to-rank algorithm during the same period.", "title": "" }, { "docid": "neg:1840069_10", "text": "A chronic alcoholic who had also been submitted to partial gastrectomy developed a syndrome of continuous motor unit activity responsive to phenytoin therapy. There were signs of minimal distal sensorimotor polyneuropathy. Symptoms of the syndrome of continuous motor unit activity were fasciculation, muscle stiffness, myokymia, impaired muscular relaxation and percussion myotonia. Electromyography at rest showed fasciculation, doublets, triplets, multiplets, trains of repetitive discharges and myotonic discharges. Trousseau's and Chvostek's signs were absent. No abnormality of serum potassium, calcium, magnesium, creatine kinase, alkaline phosphatase, arterial blood gases and pH were demonstrated, but the serum Vitamin B12 level was reduced. The electrophysiological findings and muscle biopsy were compatible with a mixed sensorimotor polyneuropathy. Tests of neuromuscular transmission showed a significant decrement in the amplitude of the evoked muscle action potential in the abductor digiti minimi on repetitive nerve stimulation. These findings suggest that hyperexcitability and hyperactivity of the peripheral motor axons underlie the syndrome of continuous motor unit activity in the present case. Ein chronischer Alkoholiker, mit subtotaler Gastrectomie, litt an einem Syndrom dauernder Muskelfaseraktivität, das mit Diphenylhydantoin behandelt wurde. Der Patient wies minimale Störungen im Sinne einer distalen sensori-motorischen Polyneuropathie auf. Die Symptome dieses Syndroms bestehen in: Fazikulationen, Muskelsteife, Myokymien, eine gestörte Erschlaffung nach der Willküraktivität und eine Myotonie nach Beklopfen des Muskels. Das Elektromyogramm in Ruhe zeigt: Faszikulationen, Doublets, Triplets, Multiplets, Trains repetitiver Potentiale und myotonische Entladungen. Trousseau- und Chvostek-Zeichen waren nicht nachweisbar. 
Gleichzeitig lagen die Kalium-, Calcium-, Magnesium-, Kreatinkinase- und Alkalinphosphatase-Werte im Serumspiegel sowie O2, CO2 und pH des arteriellen Blutes im Normbereich. Aber das Niveau des Vitamin B12 im Serumspiegel war deutlich herabgesetzt. Die muskelbioptische und elektrophysiologische Veränderungen weisen auf eine gemischte sensori-motorische Polyneuropathie hin. Die Abnahme der Amplitude der evozierten Potentiale, vom M. abductor digiti minimi abgeleitet, bei repetitiver Reizung des N. ulnaris, stellten eine Störung der neuromuskulären Überleitung dar. Aufgrund unserer klinischen und elektrophysiologischen Befunde könnten wir die Hypererregbarkeit und Hyperaktivität der peripheren motorischen Axonen als Hauptmechanismus des Syndroms dauernder motorischer Einheitsaktivität betrachten.", "title": "" }, { "docid": "neg:1840069_11", "text": "Multiple networks naturally appear in numerous high-impact applications. Network alignment (i.e., finding the node correspondence across different networks) is often the very first step for many data mining tasks. Most, if not all, of the existing alignment methods are solely based on the topology of the underlying networks. Nonetheless, many real networks often have rich attribute information on nodes and/or edges. In this paper, we propose a family of algorithms FINAL to align attributed networks. The key idea is to leverage the node/edge attribute information to guide (topology-based) alignment process. We formulate this problem from an optimization perspective based on the alignment consistency principle, and develop effective and scalable algorithms to solve it. Our experiments on real networks show that (1) by leveraging the attribute information, our algorithms can significantly improve the alignment accuracy (i.e., up to a 30% improvement over the existing methods); (2) compared with the exact solution, our proposed fast alignment algorithm leads to a more than 10 times speed-up, while preserving a 95% accuracy; and (3) our on-query alignment method scales linearly, with an around 90% ranking accuracy compared with our exact full alignment method and a near real-time response time.", "title": "" }, { "docid": "neg:1840069_12", "text": "In a number of information security scenarios, human beings can be better than technical security measures at detecting threats. This is particularly the case when a threat is based on deception of the user rather than exploitation of a specific technical flaw, as is the case of spear-phishing, application spoofing, multimedia masquerading and other semantic social engineering attacks. Here, we put the concept of the human-as-a-security-sensor to the test with a first case study on a small number of participants subjected to different attacks in a controlled laboratory environment and provided with a mechanism to report these attacks if they spot them. A key challenge is to estimate the reliability of each report, which we address with a machine learning approach. For comparison, we evaluate the ability of known technical security countermeasures in detecting the same threats. This initial proof of concept study shows that the concept is viable.", "title": "" }, { "docid": "neg:1840069_13", "text": "Many language processing tasks can be reduced to breaking the text into segments with prescribed properties. Such tasks include sentence splitting, tokenization, named-entity extraction, and chunking. We present a new model of text segmentation based on ideas from multilabel classification. 
Using this model, we can naturally represent segmentation problems involving overlapping and non-contiguous segments. We evaluate the model on entity extraction and noun-phrase chunking and show that it is more accurate for overlapping and non-contiguous segments, but it still performs well on simpler data sets for which sequential tagging has been the best method.", "title": "" }, { "docid": "neg:1840069_14", "text": "Enforcing open source licenses such as the GNU General Public License (GPL), analyzing a binary for possible vulnerabilities, and code maintenance are all situations where it is useful to be able to determine the source code provenance of a binary. While previous work has either focused on computing binary-to-binary similarity or source-to-source similarity, BinPro is the first work we are aware of to tackle the problem of source-to-binary similarity. BinPro can match binaries with their source code even without knowing which compiler was used to produce the binary, or what optimization level was used with the compiler. To do this, BinPro utilizes machine learning to compute optimal code features for determining binaryto-source similarity and a static analysis pipeline to extract and compute similarity based on those features. Our experiments show that on average BinPro computes a similarity of 81% for matching binaries and source code of the same applications, and an average similarity of 25% for binaries and source code of similar but different applications. This shows that BinPro’s similarity score is useful for determining if a binary was derived from a particular source code.", "title": "" }, { "docid": "neg:1840069_15", "text": "Research about the artificial muscle made of fishing lines or sewing threads, called the twisted and coiled polymer actuator (abbreviated as TCA in this paper) has collected many interests, recently. Since TCA has a specific power surpassing the human skeletal muscle theoretically, it is expected to be a new generation of the artificial muscle actuator. In order that the TCA is utilized as a useful actuator, this paper introduces the fabrication and the modeling of the temperature-controllable TCA. With an embedded micro thermistor, the TCA is able to measure temperature directly, and feedback control is realized. The safe range of the force and temperature for the continuous use of the TCA was identified through experiments, and the closed-loop temperature control is successfully performed without the breakage of TCA.", "title": "" }, { "docid": "neg:1840069_16", "text": "PinOS is an extension of the Pin dynamic instrumentation framework for whole-system instrumentation, i.e., to instrument both kernel and user-level code. It achieves this by interposing between the subject system and hardware using virtualization techniques. Specifically, PinOS is built on top of the Xen virtual machine monitor with Intel VT technology to allow instrumentation of unmodified OSes. PinOS is based on software dynamic translation and hence can perform pervasive fine-grain instrumentation. By inheriting the powerful instrumentation API from Pin, plus introducing some new API for system-level instrumentation, PinOS can be used to write system-wide instrumentation tools for tasks like program analysis and architectural studies. 
As of today, PinOS can boot Linux on IA-32 in uniprocessor mode, and can instrument complex applications such as database and web servers.", "title": "" }, { "docid": "neg:1840069_17", "text": "We propose an automatic method to infer high dynamic range illumination from a single, limited field-of-view, low dynamic range photograph of an indoor scene. In contrast to previous work that relies on specialized image capture, user input, and/or simple scene models, we train an end-to-end deep neural network that directly regresses a limited field-of-view photo to HDR illumination, without strong assumptions on scene geometry, material properties, or lighting. We show that this can be accomplished in a three step process: 1) we train a robust lighting classifier to automatically annotate the location of light sources in a large dataset of LDR environment maps, 2) we use these annotations to train a deep neural network that predicts the location of lights in a scene from a single limited field-of-view photo, and 3) we fine-tune this network using a small dataset of HDR environment maps to predict light intensities. This allows us to automatically recover high-quality HDR illumination estimates that significantly outperform previous state-of-the-art methods. Consequently, using our illumination estimates for applications like 3D object insertion, produces photo-realistic results that we validate via a perceptual user study.", "title": "" }, { "docid": "neg:1840069_18", "text": "Improving a decision maker’s1 situational awareness of the cyber domain isn’t greatly different than enabling situation awareness in more traditional domains2. Situation awareness necessitates working with processes capable of identifying domain specific activities as well as processes capable of identifying activities that cross domains. These processes depend on the context of the environment, the domains, and the goals and interests of the decision maker but they can be defined to support any domain. This chapter will define situation awareness in its broadest sense, describe our situation awareness reference and process models, describe some of the applicable processes, and identify a set of metrics usable for measuring the performance of a capability supporting situation awareness. These techniques are independent of domain but this chapter will also describe how they apply to the cyber domain. 2.1 What is Situation Awareness (SA)? One of the challenges in working in this area is that there are a multitude of definitions and interpretations concerning the answer to this simple question. A keyword search (executed on 8 April 2009) of ‘situation awareness’ on Google yields over 18,000,000 links the first page of which ranged from a Wikipedia page through the importance of “SA while driving” and ends with a link to a free internet radio show. Also on this first search page are several links to publications by Dr. Mica Endsley whose work in SA is arguably providing a standard for SA definitions and George P. Tadda and John S. Salerno, Air Force Research Laboratory Rome NY 1 Decision maker is used very loosely to describe anyone who uses information to make decisions within a complex dynamic environment. This is necessary because, as will be discussed, situation awareness is unique and dependant on the environment being considered, the context of the decision to be made, and the user of the information. 2 Traditional domains could include land, air, or sea. S. 
techniques particularly for dynamic environments. In [5], Dr. Endsley provides a general definition of SA in dynamic environments: “Situation awareness is the perception of the elements of the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future.” Also in [5], Endsley differentiates between situation awareness, “a state of knowledge”, and situation assessment, “process of achieving, acquiring, or maintaining SA.” This distinction becomes exceedingly important when trying to apply computer automation to SA. Since situation awareness is “a state of knowledge”, it resides primarily in the minds of humans (cognitive), while situation assessment as a process or set of processes lends itself to automated techniques. Endsley goes on to note that: “SA, decision making, and performance are different stages with different factors influencing them and with wholly different approaches for dealing with each of them; thus it is important to treat these constructs separately.” The “stages” that Endsley defines have a direct correlation with Boyd’s ubiquitous OODA loop with SA relating to Observe and Orient, decision making to Decide, and performance to Act. We’ll see these stages as well as Endsley’s three “levels” of SA (perception, comprehension, and projection) manifest themselves again throughout this discussion. As first mentioned, there are several definitions for SA, from the Army Field Manual 1-02 (September 2004), Situational Awareness is: “Knowledge and understanding of the current situation which promotes timely, relevant and accurate assessment of friendly, competitive and other operations within the battlespace in order to facilitate decision making. An informational perspective and skill that fosters an ability to determine quickly the context and relevance of events that are unfolding.”", "title": "" } ]
1840070
Regressing a 3D Face Shape from a Single Image
[ { "docid": "pos:1840070_0", "text": "We present an efficient and robust method of locating a set of feature points in an object of interest. From a training set we construct a joint model of the appearance of each feature together with their relative positions. The model is fitted to an unseen image in an iterative manner by generating templates using the joint model and the current parameter estimates, correlating the templates with the target image to generate response images and optimising the shape parameters so as to maximise the sum of responses. The appearance model is similar to that used in the Active Appearance Models (AAM) [T.F. Cootes, G.J. Edwards, C.J. Taylor, Active appearance models, in: Proceedings of the 5th European Conference on Computer Vision 1998, vol. 2, Freiburg, Germany, 1998.]. However in our approach the appearance model is used to generate likely feature templates, instead of trying to approximate the image pixels directly. We show that when applied to a wide range of data sets, our Constrained Local Model (CLM) algorithm is more robust and more accurate than the AAM search method, which relies on the image reconstruction error to update the model parameters. We demonstrate improved localisation accuracy on photographs of human faces, magnetic resonance (MR) images of the brain and a set of dental panoramic tomograms. We also show improved tracking performance on a challenging set of in car video sequences. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "pos:1840070_1", "text": "To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person's face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of markers and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction and extension to multi-view reconstruction. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org.", "title": "" }, { "docid": "pos:1840070_2", "text": "Face information processing relies on the quality of data resource. From the data modality point of view, a face database can be 2D or 3D, and static or dynamic. From the task point of view, the data can be used for research of computer based automatic face recognition, face expression recognition, face detection, or cognitive and psychological investigation. With the advancement of 3D imaging technologies, 3D dynamic facial sequences (called 4D data) have been used for face information analysis. In this paper, we focus on the modality of 3D dynamic data for the task of facial expression recognition. We present a newly created high-resolution 3D dynamic facial expression database, which is made available to the scientific research community. 
The database contains 606 3D facial expression sequences captured from 101 subjects of various ethnic backgrounds. The database has been validated through our facial expression recognition experiment using an HMM based 3D spatio-temporal facial descriptor. It is expected that such a database shall be used to facilitate the facial expression analysis from a static 3D space to a dynamic 3D space, with a goal of scrutinizing facial behavior at a higher level of detail in a real 3D spatio-temporal domain.", "title": "" } ]
[ { "docid": "neg:1840070_0", "text": "This paper describes the design of a six-axis microelectromechanical systems (MEMS) force-torque sensor. A movable body is suspended by flexures that allow deflections and rotations along the x-, y-, and z-axes. The orientation of this movable body is sensed by seven capacitors. Transverse sensing is used for all capacitors, resulting in a high sensitivity. A batch fabrication process is described as capable of fabricating these multiaxis sensors with a high yield. The force sensor is experimentally investigated, and a multiaxis calibration method is described. Measurements show that the resolution is on the order of a micro-Newton and nano-Newtonmeter. This is the first six-axis MEMS force sensor that has been successfully developed.", "title": "" }, { "docid": "neg:1840070_1", "text": "A comparative analysis between Nigerian English (NE) and American English (AE) is presented in this article. The study is aimed at highlighting differences in the speech parameters, and how they influence speech processing and automatic speech recognition (ASR). The UILSpeech corpus of Nigerian-Accented English isolated word recordings, read speech utterances, and video recordings are used as a reference for Nigerian English. The corpus captures the linguistic diversity of Nigeria with data collected from native speakers of Hausa, Igbo, and Yoruba languages. The UILSpeech corpus is intended to provide a unique opportunity for application and expansion of speech processing techniques to a limited resource language dialect. The acoustic-phonetic differences between American English (AE) and Nigerian English (NE) are studied in terms of pronunciation variations, vowel locations in the formant space, mean fundamental frequency, and phone model distances in the acoustic space, as well as through visual speech analysis of the speakers’ articulators. A strong impact of the AE–NE acoustic mismatch on ASR is observed. A combination of model adaptation and extension of the AE lexicon for newly established NE pronunciation variants is shown to substantially improve performance of the AE-trained ASR system in the new NE task. This study is a part of the pioneering efforts towards incorporating speech technology in Nigerian English and is intended to provide a development basis for other low resource language dialects and languages.", "title": "" }, { "docid": "neg:1840070_2", "text": "Although cuckoo hashing has significant applications in both theoretical and practical settings, a relevant downside is that it requires lookups to multiple locations. In many settings, where lookups are expensive, cuckoo hashing becomes a less compelling alternative. One such standard setting is when memory is arranged in large pages, and a major cost is the number of page accesses. We propose the study of cuckoo hashing with pages, advocating approaches where each key has several possible locations, or cells, on a single page, and additional choices on a second backup page. We show experimentally that with k cell choices on one page and a single backup cell choice, one can achieve nearly the same loads as when each key has k+1 random cells to choose from, with most lookups requiring just one page access, even when keys are placed online using a simple algorithm. 
While our results are currently experimental, they suggest several interesting new open theoretical questions for cuckoo hashing with pages.", "title": "" }, { "docid": "neg:1840070_3", "text": "Gender-affirmation surgery is often the final gender-confirming medical intervention sought by those patients suffering from gender dysphoria. In the male-to-female (MtF) transgendered patient, the creation of esthetic and functional external female genitalia with a functional vaginal channel is of the utmost importance. The aim of this review and meta-analysis is to evaluate the epidemiology, presentation, management, and outcomes of neovaginal complications in the MtF transgender reassignment surgery patients. PUBMED was searched in accordance with PRISMA guidelines for relevant articles (n = 125). Ineligible articles were excluded and articles meeting all inclusion criteria went on to review and analysis (n = 13). Ultimately, studies reported on 1,684 patients with an overall complication rate of 32.5% and a reoperation rate of 21.7% for non-esthetic reasons. The most common complication was stenosis of the neo-meatus (14.4%). Wound infection was associated with an increased risk of all tissue-healing complications. Use of sacrospinous ligament fixation (SSL) was associated with a significantly decreased risk of prolapse of the neovagina. Gender-affirmation surgery is important in the treatment of gender dysphoric patients, but there is a high complication rate in the reported literature. Variability in technique and complication reporting standards makes it difficult to assess the accurately the current state of MtF gender reassignment surgery. Further research and implementation of standards is necessary to improve patient outcomes. Clin. Anat. 31:191-199, 2018. © 2017 Wiley Periodicals, Inc.", "title": "" }, { "docid": "neg:1840070_4", "text": "OBJECTIVE\nTo evaluate the feasibility and safety of home rehabilitation of the hand using a robotic glove, and, in addition, its effectiveness, in hemiplegic patients after stroke.\n\n\nMETHODS\nIn this non-randomized pilot study, 21 hemiplegic stroke patients (Ashworth spasticity index ≤ 3) were prescribed, after in-hospital rehabilitation, a 2-month home-program of intensive hand training using the Gloreha Lite glove that provides computer-controlled passive mobilization of the fingers. Feasibility was measured by: number of patients who completed the home-program, minutes of exercise and number of sessions/patient performed. Safety was assessed by: hand pain with a visual analog scale (VAS), Ashworth spasticity index for finger flexors, opponents of the thumb and wrist flexors, and hand edema (circumference of forearm, wrist and fingers), measured at start (T0) and end (T1) of rehabilitation. Hand motor function (Motricity Index, MI), fine manual dexterity (Nine Hole Peg Test, NHPT) and strength (Grip test) were also measured at T0 and T1.\n\n\nRESULTS\nPatients performed, over a mean period 56 (49-63) days, a total of 1699 (1353-2045) min/patient of exercise with Gloreha Lite, 5.1 (4.3-5.8) days/week. Seventeen patients (81%) completed the full program. The mean VAS score of hand pain, Ashworth spasticity index and hand edema did not change significantly at T1 compared to T0. The MI, NHPT and Grip test improved significantly (p = 0.0020, 0.0156 and 0.0024, respectively) compared to baseline.\n\n\nCONCLUSION\nGloreha Lite is feasible and safe for use in home rehabilitation. 
The efficacy data show a therapeutic effect which need to be confirmed by a randomized controlled study.", "title": "" }, { "docid": "neg:1840070_5", "text": "Dealing with high-dimensional input spaces, like visual input, is a challenging task for reinforcement learning (RL). Neuroevolution (NE), used for continuous RL problems, has to either reduce the problem dimensionality by (1) compressing the representation of the neural network controllers or (2) employing a pre-processor (compressor) that transforms the high-dimensional raw inputs into low-dimensional features. In this paper, we are able to evolve extremely small recurrent neural network (RNN) controllers for a task that previously required networks with over a million weights. The high-dimensional visual input, which the controller would normally receive, is first transformed into a compact feature vector through a deep, max-pooling convolutional neural network (MPCNN). Both the MPCNN preprocessor and the RNN controller are evolved successfully to control a car in the TORCS racing simulator using only visual input. This is the first use of deep learning in the context evolutionary RL.", "title": "" }, { "docid": "neg:1840070_6", "text": "OBJECTIVES\nCross-language qualitative research occurs when a language barrier is present between researchers and participants. The language barrier is frequently mediated through the use of a translator or interpreter. The purpose of this analysis of cross-language qualitative research was threefold: (1) review the methods literature addressing cross-language research; (2) synthesize the methodological recommendations from the literature into a list of criteria that could evaluate how researchers methodologically managed translators and interpreters in their qualitative studies; (3) test these criteria on published cross-language qualitative studies.\n\n\nDATA SOURCES\nA group of 40 purposively selected cross-language qualitative studies found in nursing and health sciences journals.\n\n\nREVIEW METHODS\nThe synthesis of the cross-language methods literature produced 14 criteria to evaluate how qualitative researchers managed the language barrier between themselves and their study participants. To test the criteria, the researcher conducted a summative content analysis framed by discourse analysis techniques of the 40 cross-language studies.\n\n\nRESULTS\nThe evaluation showed that only 6 out of 40 studies met all the criteria recommended by the cross-language methods literature for the production of trustworthy results in cross-language qualitative studies. Multiple inconsistencies, reflecting disadvantageous methodological choices by cross-language researchers, appeared in the remaining 33 studies. To name a few, these included rendering the translator or interpreter as an invisible part of the research process, failure to pilot test interview questions in the participant's language, no description of translator or interpreter credentials, failure to acknowledge translation as a limitation of the study, and inappropriate methodological frameworks for cross-language research.\n\n\nCONCLUSIONS\nThe finding about researchers making the role of the translator or interpreter invisible during the research process supports studies completed by other authors examining this issue. The analysis demonstrated that the criteria produced by this study may provide useful guidelines for evaluating cross-language research and for novice cross-language researchers designing their first studies. 
Finally, the study also indicates that researchers attempting cross-language studies need to address the methodological issues surrounding language barriers between researchers and participants more systematically.", "title": "" }, { "docid": "neg:1840070_7", "text": "Sentiment analysis is one of the most popular natural language processing techniques. It aims to identify the sentiment polarity (positive, negative, neutral or mixed) within a given text. The proper lexicon knowledge is very important for the lexicon-based sentiment analysis methods since they hinge on using the polarity of the lexical item to determine a text's sentiment polarity. However, it is quite common that some lexical items appear positive in the text of one domain but appear negative in another. In this paper, we propose an innovative knowledge building algorithm to extract sentiment lexicon knowledge through computing their polarity value based on their polarity distribution in text dataset, such as in a set of domain specific reviews. The proposed algorithm was tested by a set of domain microblogs. The results demonstrate the effectiveness of the proposed method. The proposed lexicon knowledge extraction method can enhance the performance of knowledge based sentiment analysis.", "title": "" }, { "docid": "neg:1840070_8", "text": "Hyperspectral imagery typically provides a wealth of information captured in a wide range of the electromagnetic spectrum for each pixel in the image; however, when used in statistical pattern-classification tasks, the resulting high-dimensional feature spaces often tend to result in ill-conditioned formulations. Popular dimensionality-reduction techniques such as principal component analysis, linear discriminant analysis, and their variants typically assume a Gaussian distribution. The quadratic maximum-likelihood classifier commonly employed for hyperspectral analysis also assumes single-Gaussian class-conditional distributions. Departing from this single-Gaussian assumption, a classification paradigm designed to exploit the rich statistical structure of the data is proposed. The proposed framework employs local Fisher's discriminant analysis to reduce the dimensionality of the data while preserving its multimodal structure, while a subsequent Gaussian mixture model or support vector machine provides effective classification of the reduced-dimension multimodal data. Experimental results on several different multiple-class hyperspectral-classification tasks demonstrate that the proposed approach significantly outperforms several traditional alternatives.", "title": "" }, { "docid": "neg:1840070_9", "text": "In this paper, we describe a system for automatic construction of user disease progression timelines from their posts in online support groups using minimal supervision. In recent years, several online support groups have been established which has led to a huge increase in the amount of patient-authored text available. Creating systems which can automatically extract important medical events and create disease progression timelines for users from such text can help in patient health monitoring as well as studying links between medical events and users’ participation in support groups. Prior work in this domain has used manually constructed keyword sets to detect medical events. In this work, our aim is to perform medical event detection using minimal supervision in order to develop a more general timeline construction system. 
Our system achieves an accuracy of 55.17%, which is 92% of the performance achieved by a supervised baseline system.", "title": "" }, { "docid": "neg:1840070_10", "text": "Human pose estimation using deep neural networks aims to map input images with large variations into multiple body keypoints, which must satisfy a set of geometric constraints and interdependence imposed by the human body model. This is a very challenging nonlinear manifold learning process in a very high dimensional feature space. We believe that the deep neural network, which is inherently an algebraic computation system, is not the most efficient way to capture highly sophisticated human knowledge, for example those highly coupled geometric characteristics and interdependence between keypoints in human poses. In this work, we propose to explore how external knowledge can be effectively represented and injected into the deep neural networks to guide its training process using learned projections that impose proper prior. Specifically, we use the stacked hourglass design and inception-resnet module to construct a fractal network to regress human pose images into heatmaps with no explicit graphical modeling. We encode external knowledge with visual features, which are able to characterize the constraints of human body models and evaluate the fitness of intermediate network output. We then inject these external features into the neural network using a projection matrix learned using an auxiliary cost function. The effectiveness of the proposed inception-resnet module and the benefit in guided learning with knowledge projection is evaluated on two widely used human pose estimation benchmarks. Our approach achieves state-of-the-art performance on both datasets.", "title": "" }, { "docid": "neg:1840070_11", "text": "This article reports on the design, implementation, and usage of the CourseMarker (formerly known as CourseMaster) courseware Computer Based Assessment (CBA) system at the University of Nottingham. Students use CourseMarker to solve (programming) exercises and to submit their solutions. CourseMarker returns immediate results and feedback to the students. Educators author a variety of exercises that benefit the students while offering practical benefits. To date, both educators and students have been hampered by CBA software that has been constructed to assess text-based or multiple-choice answers only. Although there exist a few CBA systems with some capability to automatically assess programming coursework, none assess Java programs and none are as flexible, architecture-neutral, robust, or secure as the CourseMarker CBA system.", "title": "" }, { "docid": "neg:1840070_12", "text": "Recognizing multiple mixed group activities from one still image is not a hard problem for humans but remains highly challenging for computer recognition systems. When modelling interactions among multiple units (i.e., more than two groups or persons), the existing approaches tend to divide them into interactions between pairwise units. However, no mathematical evidence supports this transformation. Therefore, these approaches’ performance is limited on images containing multiple activities. In this paper, we propose a generative model to provide a more reasonable interpretation for the mixed group activities contained in one image. 
We design a four level structure and convert the original intra-level interactions into inter-level interactions, in order to implement both interactions among multiple groups and interactions among multiple persons within a group. The proposed four-level structure makes our model more robust against the occlusion and overlap of the visible poses in images. Experimental results demonstrate that our model makes good interpretations for mixed group activities and outperforms the state-of-the-art methods on the Collective Activity Classification dataset.", "title": "" }, { "docid": "neg:1840070_13", "text": "This paper considers John Dewey’s dual reformist-preservationist agenda for education in the context of current debates about the role of experience in management learning. The paper argues for preserving experience-based approaches to management learning by revising the concept of experience to more clearly account for the relationship between personal and social (i.e., tacit/explicit) knowledge. By reviewing, comparing and extending critiques of Kolb’s experiential learning theory and reconceptualizing the learning process based on post-structural analysis of psychoanalyst Jacque Lacan, the paper defines experience within the context of language and social action. This perspective is contrasted to action, cognition, critical reflection and other experience-based approaches to management learning. Implications for management theory, pedagogy and practice suggest greater emphasis on language and conversation in the learning process. Future directions for research are explored.", "title": "" }, { "docid": "neg:1840070_14", "text": "Digital mammogram has become the most effective technique for early breast cancer detection modality. Digital mammogram takes an electronic image of the breast and stores it directly in a computer. High quality mammogram images are high resolution and large size images. Processing these images require high computational capabilities. The transmission of these images over the net is sometimes critical especially if the diagnosis of remote radiologists is required. The aim of this study is to develop an automated system for assisting the analysis of digital mammograms. Computer image processing techniques will be applied to enhance images and this is followed by segmentation of the region of interest (ROI). Subsequently, the textural features will be extracted from the ROI. The texture features will be used to classify the ROIs as either masses or non-masses.", "title": "" }, { "docid": "neg:1840070_15", "text": "STRANGER is an automata-based string analysis tool for finding and eliminating string-related security vulnerabilities in PHP applications. STRANGER uses symbolic forward and backward reachability analyses to compute the possible values that the string expressions can take during program execution. STRANGER can automatically (1) prove that an application is free from specified attacks or (2) generate vulnerability signatures that characterize all malicious inputs that can be used to generate attacks.", "title": "" }, { "docid": "neg:1840070_16", "text": "Search Ranking and Recommendations are fundamental problems of crucial interest to major Internet companies, including web search engines, content publishing websites and marketplaces. However, despite sharing some common characteristics a one-size-fits-all solution does not exist in this space.
Given a large difference in content that needs to be ranked, personalized and recommended, each marketplace has a somewhat unique challenge. Correspondingly, at Airbnb, a short-term rental marketplace, search and recommendation problems are quite unique, being a two-sided marketplace in which one needs to optimize for host and guest preferences, in a world where a user rarely consumes the same item twice and one listing can accept only one guest for a certain set of dates. In this paper we describe Listing and User Embedding techniques we developed and deployed for purposes of Real-time Personalization in Search Ranking and Similar Listing Recommendations, two channels that drive 99% of conversions. The embedding models were specifically tailored for Airbnb marketplace, and are able to capture guest's short-term and long-term interests, delivering effective home listing recommendations. We conducted rigorous offline testing of the embedding models, followed by successful online tests before fully deploying them into production.", "title": "" }, { "docid": "neg:1840070_17", "text": "Clustering analysis is a descriptive task that seeks to identify homogeneous groups of objects based on the values of their attributes. This paper proposes a new algorithm for K-medoids clustering which runs like the K-means algorithm and tests several methods for selecting initial medoids. The proposed algorithm calculates the distance matrix once and uses it for finding new medoids at every iterative step. We evaluate the proposed algorithm using real and artificial data and compare with the results of other algorithms. The proposed algorithm takes the reduced time in computation with comparable performance as compared to the Partitioning Around Medoids.", "title": "" }, { "docid": "neg:1840070_18", "text": "Table of", "title": "" }, { "docid": "neg:1840070_19", "text": "We are living in the era of the fourth industrial revolution, namely Industry 4.0. This paper presents the main aspects related to Industry 4.0, the technologies that will enable this revolution, and the main application domains that will be affected by it. The effects that the introduction of Internet of Things (IoT), Cyber-Physical Systems (CPS), crowdsensing, crowdsourcing, cloud computing and big data will have on industrial processes will be discussed. The main objectives will be represented by improvements in: production efficiency, quality and cost-effectiveness; workplace health and safety, as well as quality of working conditions; products’ quality and availability, according to mass customisation requirements. The paper will further discuss the common denominator of these enhancements, i.e., data collection and analysis. As data and information will be crucial for Industry 4.0, crowdsensing and crowdsourcing will introduce new advantages and challenges, which will make most of the industrial processes easier with respect to traditional technologies.", "title": "" } ]
1840071
Turbo and Turbo-Like Codes: Principles and Applications in Telecommunications
[ { "docid": "pos:1840071_0", "text": "DVB-S2 is the second-generation specification for satellite broad-band applications, developed by the Digital Video Broadcasting (DVB) Project in 2003. The system is structured as a toolkit to allow the implementation of the following satellite applications: TV and sound broadcasting, interactivity (i.e., Internet access), and professional services, such as TV contribution links and digital satellite news gathering. It has been specified around three concepts: best transmission performance approaching the Shannon limit, total flexibility, and reasonable receiver complexity. Channel coding and modulation are based on more recent developments by the scientific community: low density parity check codes are adopted, combined with QPSK, 8PSK, 16APSK, and 32APSK modulations for the system to work properly on the nonlinear satellite channel. The framing structure allows for maximum flexibility in a versatile system and also synchronization in worst case configurations (low signal-to-noise ratios). Adaptive coding and modulation, when used in one-to-one links, then allows optimization of the transmission parameters for each individual user,dependant on path conditions. Backward-compatible modes are also available,allowing existing DVB-S integrated receivers-decoders to continue working during the transitional period. The paper provides a tutorial overview of the DVB-S2 system, describing its main features and performance in various scenarios and applications.", "title": "" } ]
[ { "docid": "neg:1840071_0", "text": "This article presents the state of the art in passive devices for enhancing limb movement in people with neuromuscular disabilities. Both upper- and lower-limb projects and devices are described. Special emphasis is placed on a passive functional upper-limb orthosis called the Wilmington Robotic Exoskeleton (WREX). The development and testing of the WREX with children with limited arm strength are described. The exoskeleton has two links and 4 degrees of freedom. It uses linear elastic elements that balance the effects of gravity in three dimensions. The experiences of five children with arthrogryposis who used the WREX are described.", "title": "" }, { "docid": "neg:1840071_1", "text": "We present a practical framework to automatically detect shadows in real world scenes from a single photograph. Previous works on shadow detection put a lot of effort in designing shadow variant and invariant hand-crafted features. In contrast, our framework automatically learns the most relevant features in a supervised manner using multiple convolutional deep neural networks (ConvNets). The 7-layer network architecture of each ConvNet consists of alternating convolution and sub-sampling layers. The proposed framework learns features at the super-pixel level and along the object boundaries. In both cases, features are extracted using a context aware window centered at interest points. The predicted posteriors based on the learned features are fed to a conditional random field model to generate smooth shadow contours. Our proposed framework consistently performed better than the state-of-the-art on all major shadow databases collected under a variety of conditions.", "title": "" }, { "docid": "neg:1840071_2", "text": "Recently, very deep networks, with as many as hundreds of layers, have shown great success in image classification tasks. One key component that has enabled such deep models is the use of “skip connections”, including either residual or highway connections, to alleviate the vanishing and exploding gradient problems. While these connections have been explored for speech, they have mainly been explored for feed-forward networks. Since recurrent structures, such as LSTMs, have produced state-of-the-art results on many of our Voice Search tasks, the goal of this work is to thoroughly investigate different approaches to adding depth to recurrent structures. Specifically, we experiment with novel Highway-LSTM models with bottlenecks skip connections and show that a 10 layer model can outperform a state-of-the-art 5 layer LSTM model with the same number of parameters by 2% relative WER. In addition, we experiment with Recurrent Highway layers and find these to be on par with Highway-LSTM models, when given sufficient depth.", "title": "" }, { "docid": "neg:1840071_3", "text": "Ionizing radiation effects on CMOS image sensors (CIS) manufactured using a 0.18 mum imaging technology are presented through the behavior analysis of elementary structures, such as field oxide FET, gated diodes, photodiodes and MOSFETs. Oxide characterizations appear necessary to understand ionizing dose effects on devices and then on image sensors. The main degradations observed are photodiode dark current increases (caused by a generation current enhancement), minimum size NMOSFET off-state current rises and minimum size PMOSFET radiation induced narrow channel effects. 
All these effects are attributed to the shallow trench isolation degradation which appears much more sensitive to ionizing radiation than inter layer dielectrics. Unusual post annealing effects are reported in these thick oxides. Finally, the consequences on sensor design are discussed thanks to an irradiated pixel array and a comparison with previous work is discussed.", "title": "" }, { "docid": "neg:1840071_4", "text": "Web-based businesses succeed by cultivating consumers' trust, starting with their beliefs, attitudes, intentions, and willingness to perform transactions at Web sites and with the organizations behind them.", "title": "" }, { "docid": "neg:1840071_5", "text": "A key question concerns the extent to which sexual differentiation of human behavior is influenced by sex hormones present during sensitive periods of development (organizational effects), as occurs in other mammalian species. The most important sensitive period has been considered to be prenatal, but there is increasing attention to puberty as another organizational period, with the possibility of decreasing sensitivity to sex hormones across the pubertal transition. In this paper, we review evidence that sex hormones present during the prenatal and pubertal periods produce permanent changes to behavior. There is good evidence that exposure to high levels of androgens during prenatal development results in masculinization of activity and occupational interests, sexual orientation, and some spatial abilities; prenatal androgens have a smaller effect on gender identity, and there is insufficient information about androgen effects on sex-linked behavior problems. There is little good evidence regarding long-lasting behavioral effects of pubertal hormones, but there is some suggestion that they influence gender identity and perhaps some sex-linked forms of psychopathology, and there are many opportunities to study this issue.", "title": "" }, { "docid": "neg:1840071_6", "text": "In this study, we investigated a pattern-recognition technique based on an artificial neural network (ANN), which is called a massive training artificial neural network (MTANN), for reduction of false positives in computerized detection of lung nodules in low-dose computed tomography (CT) images. The MTANN consists of a modified multilayer ANN, which is capable of operating on image data directly. The MTANN is trained by use of a large number of subregions extracted from input images together with the teacher images containing the distribution for the \"likelihood of being a nodule.\" The output image is obtained by scanning an input image with the MTANN. The distinction between a nodule and a non-nodule is made by use of a score which is defined from the output image of the trained MTANN. In order to eliminate various types of non-nodules, we extended the capability of a single MTANN, and developed a multiple MTANN (Multi-MTANN). The Multi-MTANN consists of plural MTANNs that are arranged in parallel. Each MTANN is trained by using the same nodules, but with a different type of non-nodule. Each MTANN acts as an expert for a specific type of non-nodule, e.g., five different MTANNs were trained to distinguish nodules from various-sized vessels; four other MTANNs were applied to eliminate some other opacities. 
The outputs of the MTANNs were combined by using the logical AND operation such that each of the trained MTANNs eliminated none of the nodules, but removed the specific type of non-nodule with which the MTANN was trained, and thus removed various types of non-nodules. The Multi-MTANN consisting of nine MTANNs was trained with 10 typical nodules and 10 non-nodules representing each of nine different non-nodule types (90 training non-nodules overall) in a training set. The trained Multi-MTANN was applied to the reduction of false positives reported by our current computerized scheme for lung nodule detection based on a database of 63 low-dose CT scans (1765 sections), which contained 71 confirmed nodules including 66 biopsy-confirmed primary cancers, from a lung cancer screening program. The Multi-MTANN was applied to 58 true positives (nodules from 54 patients) and 1726 false positives (non-nodules) reported by our current scheme in a validation test; these were different from the training set. The results indicated that 83% (1424/1726) of non-nodules were removed with a reduction of one true positive (nodule), i.e., a classification sensitivity of 98.3% (57 of 58 nodules). By using the Multi-MTANN, the false-positive rate of our current scheme was improved from 0.98 to 0.18 false positives per section (from 27.4 to 4.8 per patient) at an overall sensitivity of 80.3% (57/71).", "title": "" }, { "docid": "neg:1840071_7", "text": "This paper presents a method for learning an And-Or model to represent context and occlusion for car detection and viewpoint estimation. The learned And-Or model represents car-to-car context and occlusion configurations at three levels: (i) spatially-aligned cars, (ii) single car under different occlusion configurations, and (iii) a small number of parts. The And-Or model embeds a grammar for representing large structural and appearance variations in a reconfigurable hierarchy. The learning process consists of two stages in a weakly supervised way (i.e., only bounding boxes of single cars are annotated). First, the structure of the And-Or model is learned with three components: (a) mining multi-car contextual patterns based on layouts of annotated single car bounding boxes, (b) mining occlusion configurations between single cars, and (c) learning different combinations of part visibility based on CAD simulations. The And-Or model is organized in a directed and acyclic graph which can be inferred by Dynamic Programming. Second, the model parameters (for appearance, deformation and bias) are jointly trained using Weak-Label Structural SVM. In experiments, we test our model on four car detection datasets-the KITTI dataset [1] , the PASCAL VOC2007 car dataset [2] , and two self-collected car datasets, namely the Street-Parking car dataset and the Parking-Lot car dataset, and three datasets for car viewpoint estimation-the PASCAL VOC2006 car dataset [2] , the 3D car dataset [3] , and the PASCAL3D+ car dataset [4] . Compared with state-of-the-art variants of deformable part-based models and other methods, our model achieves significant improvement consistently on the four detection datasets, and comparable performance on car viewpoint estimation.", "title": "" }, { "docid": "neg:1840071_8", "text": "Security flaws are open doors to attack embedded systems and must be carefully assessed in order to determine threats to safety and security. 
Subsequently securing a system, that is, integrating security mechanisms into the system's architecture can itself impact the system's safety, for instance deadlines could be missed due to an increase in computations and communications latencies. SysML-Sec addresses these issues with a model-driven approach that promotes the collaboration between system designers and security experts at all design and development stages, e.g., requirements, attacks, partitioning, design, and validation. A central point of SysML-Sec is its partitioning stage during which safety-related and security-related functions are explored jointly and iteratively with regards to requirements and attacks. Once partitioned, the system is designed in terms of system's functions and security mechanisms, and formally verified from both the safety and the security perspectives. Our paper illustrates the whole methodology with the evaluation of a security mechanism added to an existing automotive system.", "title": "" }, { "docid": "neg:1840071_9", "text": "This paper focuses on subword-based Neural Machine Translation (NMT). We hypothesize that in the NMT model, the appropriate subword units for the following three modules (layers) can differ: (1) the encoder embedding layer, (2) the decoder embedding layer, and (3) the decoder output layer. We find the subword based on Sennrich et al. (2016) has a feature that a large vocabulary is a superset of a small vocabulary and modify the NMT model enables the incorporation of several different subword units in a single embedding layer. We refer these small subword features as hierarchical subword features. To empirically investigate our assumption, we compare the performance of several different subword units and hierarchical subword features for both the encoder and decoder embedding layers. We confirmed that incorporating hierarchical subword features in the encoder consistently improves BLEU scores on the IWSLT evaluation datasets. Title and Abstract in Japanese 階層的部分単語特徴を用いたニューラル機械翻訳 本稿では、部分単語 (subword) を用いたニューラル機械翻訳 (Neural Machine Translation, NMT)に着目する。NMTモデルでは、エンコーダの埋め込み層、デコーダの埋め込み層お よびデコーダの出力層の 3箇所で部分単語が用いられるが、それぞれの層で適切な部分単 語単位は異なるという仮説を立てた。我々は、Sennrich et al. (2016)に基づく部分単語は、 大きな語彙集合が小さい語彙集合を必ず包含するという特徴を利用して、複数の異なる部 分単語列を同時に一つの埋め込み層として扱えるよう NMTモデルを改良する。以降、こ の小さな語彙集合特徴を階層的部分単語特徴と呼ぶ。本仮説を検証するために、様々な部 分単語単位や階層的部分単語特徴をエンコーダ・デコーダの埋め込み層に適用して、その 精度の変化を確認する。IWSLT評価セットを用いた実験により、エンコーダ側で階層的な 部分単語を用いたモデルは BLEUスコアが一貫して向上することが確認できた。", "title": "" }, { "docid": "neg:1840071_10", "text": "The authors propose a theoretical model linking achievement goals and achievement emotions to academic performance. This model was tested in a prospective study with undergraduates (N 213), using exam-specific assessments of both goals and emotions as predictors of exam performance in an introductory-level psychology course. The findings were consistent with the authors’ hypotheses and supported all aspects of the proposed model. In multiple regression analysis, achievement goals (mastery, performance approach, and performance avoidance) were shown to predict discrete achievement emotions (enjoyment, boredom, anger, hope, pride, anxiety, hopelessness, and shame), achievement emotions were shown to predict performance attainment, and 7 of the 8 focal emotions were documented as mediators of the relations between achievement goals and performance attainment. 
All of these findings were shown to be robust when controlling for gender, social desirability, positive and negative trait affectivity, and scholastic ability. The results are discussed with regard to the underdeveloped literature on discrete achievement emotions and the need to integrate conceptual and applied work on achievement goals and achievement emotions.", "title": "" }, { "docid": "neg:1840071_11", "text": "Antibiotic resistance consists of a dynamic web. In this review, we describe the path by which different antibiotic residues and antibiotic resistance genes disseminate among relevant reservoirs (human, animal, and environmental settings), evaluating how these events contribute to the current scenario of antibiotic resistance. The relationship between the spread of resistance and the contribution of different genetic elements and events is revisited, exploring examples of the processes by which successful mobile resistance genes spread across different niches. The importance of classic and next generation molecular approaches, as well as action plans and policies which might aid in the fight against antibiotic resistance, are also reviewed.", "title": "" }, { "docid": "neg:1840071_12", "text": "The head-direction (HD) cells found in the limbic system in freely mov ing rats represent the instantaneous head direction of the animal in the horizontal plane regardless of the location of the animal. The internal direction represented by these cells uses both self-motion information for inertially based updating and familiar visual landmarks for calibration. Here, a model of the dynamics of the HD cell ensemble is presented. The stability of a localized static activity profile in the network and a dynamic shift mechanism are explained naturally by synaptic weight distribution components with even and odd symmetry, respectively. Under symmetric weights or symmetric reciprocal connections, a stable activity profile close to the known directional tuning curves will emerge. By adding a slight asymmetry to the weights, the activity profile will shift continuously without disturbances to its shape, and the shift speed can be controlled accurately by the strength of the odd-weight component. The generic formulation of the shift mechanism is determined uniquely within the current theoretical framework. The attractor dynamics of the system ensures modality-independence of the internal representation and facilitates the correction for cumulative error by the putative local-view detectors. The model offers a specific one-dimensional example of a computational mechanism in which a truly world-centered representation can be derived from observer-centered sensory inputs by integrating self-motion information.", "title": "" }, { "docid": "neg:1840071_13", "text": "An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying ABC algorithm in analyzing a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with an ABC algorithm, mRMR-ABC, to select informative genes from microarray profile. The new approach is based on a support vector machine (SVM) algorithm to measure the classification accuracy for selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. 
Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques. We reimplemented two of these techniques for the sake of a fair comparison using the same parameters. These two techniques are mRMR when combined with a genetic algorithm (mRMR-GA) and mRMR when combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results prove that the proposed mRMR-ABC algorithm achieves accurate classification performance using small number of predictive genes when tested using both datasets and compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems.", "title": "" }, { "docid": "neg:1840071_14", "text": "This paper presents the analysis and operation of a three-phase pulsewidth modulation rectifier system formed by the star-connection of three single-phase boost rectifier modules (Y-rectifier) without a mains neutral point connection. The current forming operation of the Y-rectifier is analyzed and it is shown that the phase current has the same high quality and low ripple as the Vienna rectifier. The isolated star point of Y-rectifier results in a mutual coupling of the individual phase module outputs and has to be considered for control of the module dc link voltages. An analytical expression for the coupling coefficients of the Y-rectifier phase modules is derived. Based on this expression, a control concept with reduced calculation effort is designed and it provides symmetric loading of the phase modules and solves the balancing problem of the dc link voltages. The analysis also provides insight that enables the derivation of a control concept for two phase operation, such as in the case of a mains phase failure. The theoretical and simulated results are proved by experimental analysis on a fully digitally controlled, 5.4-kW prototype.", "title": "" }, { "docid": "neg:1840071_15", "text": "Convolutional neural networks (CNNs) have attracted increasing attention in the remote sensing community. Most CNNs only take the last fully-connected layers as features for the classification of remotely sensed images, discarding the other convolutional layer features which may also be helpful for classification purposes. In this paper, we propose a new adaptive deep pyramid matching (ADPM) model that takes advantage of the features from all of the convolutional layers for remote sensing image classification. To this end, the optimal fusing weights for different convolutional layers are learned from the data itself. In remotely sensed scenes, the objects of interest exhibit different scales in distinct scenes, and even a single scene may contain objects with different sizes. To address this issue, we select the CNN with spatial pyramid pooling (SPP-net) as the basic deep network, and further construct a multi-scale ADPM model to learn complementary information from multi-scale images. Our experiments have been conducted using two widely used remote sensing image databases, and the results show that the proposed method significantly improves the performance when compared to other state-of-the-art methods. 
Keywords—Convolutional neural network (CNN), adaptive deep pyramid matching (ADPM), convolutional features, multi-scale ensemble, remote-sensing scene classification.", "title": "" }, { "docid": "neg:1840071_16", "text": "The availability of several Advanced Driver Assistance Systems has put a correspondingly large number of inexpensive, yet capable sensors on production vehicles. By combining this reality with expertise from the DARPA Grand and Urban Challenges in building autonomous driving platforms, we were able to design and develop an Autonomous Valet Parking (AVP) system on a 2006 Volkwagen Passat Wagon TDI using automotive grade sensors. AVP provides the driver with both convenience and safety benefits - the driver can leave the vehicle at the entrance of a parking garage, allowing the vehicle to navigate the structure, find a spot, and park itself. By leveraging existing software modules from the DARPA Urban Challenge, our efforts focused on developing a parking spot detector, a localization system that did not use GPS, and a back-in parking planner. This paper focuses on describing the design and development of the last two modules.", "title": "" }, { "docid": "neg:1840071_17", "text": "A review of various properties of ceramic-reinforced aluminium matrix composites is presented in this paper. The properties discussed include microstructural, optical, physical and mechanical behaviour of ceramic-reinforced aluminium matrix composites and effects of reinforcement fraction, particle size, heat treatment and extrusion process on these properties. The results obtained by many researchers indicated the uniform distribution of reinforced particles with localized agglomeration at some places, when the metal matrix composite was processed through stir casting method. The density, hardness, compressive strength and toughness increased with increasing reinforcement fraction; however, these properties may reduce in the presence of porosity in the composite material. The particle size of reinforcements affected the hardness adversely. Tensile strength and flexural strength were observed to be increased up to a certain reinforcement fraction in the composites, beyond which these were reduced. The mechanical properties of the composite materials were improved by either thermal treatment or extrusion process. Initiation and growth of fine microcracks leading to macroscopic failure, ductile failure of the aluminium matrix, combination of particle fracture and particle pull-out, overload failure under tension and brittle fracture were the failure mode and mechanisms, as observed by previous researchers, during fractography analysis of tensile specimens of ceramic-reinforced aluminium matrix composites.", "title": "" }, { "docid": "neg:1840071_18", "text": "We used a three layer Convolutional Neural Network (CNN) to make move predictions in chess. The task was defined as a two-part classification problem: a piece-selector CNN is trained to score which white pieces should be made to move, and move-selector CNNs for each piece produce scores for where it should be moved. This approach reduced the intractable class space in chess by a square root. The networks were trained using 20,000 games consisting of 245,000 moves made by players with an ELO rating higher than 2000 from the Free Internet Chess Server. The piece-selector network was trained on all of these moves, and the move-selector networks trained on all moves made by the respective piece. 
Black moves were trained on by using a data augmentation to frame it as a move made by the", "title": "" }, { "docid": "neg:1840071_19", "text": "This paper reports a novel deep architecture referred to as Maxout network In Network (MIN), which can enhance model discriminability and facilitate the process of information abstraction within the receptive field. The proposed network adopts the framework of the recently developed Network In Network structure, which slides a universal approximator, multilayer perceptron (MLP) with rectifier units, to exact features. Instead of MLP, we employ maxout MLP to learn a variety of piecewise linear activation functions and to mediate the problem of vanishing gradients that can occur when using rectifier units. Moreover, batch normalization is applied to reduce the saturation of maxout units by pre-conditioning the model and dropout is applied to prevent overfitting. Finally, average pooling is used in all pooling layers to regularize maxout MLP in order to facilitate information abstraction in every receptive field while tolerating the change of object position. Because average pooling preserves all features in the local patch, the proposed MIN model can enforce the suppression of irrelevant information during training. Our experiments demonstrated the state-of-the-art classification performance when the MIN model was applied to MNIST, CIFAR-10, and CIFAR-100 datasets and comparable performance for SVHN dataset.", "title": "" } ]
1840072
Improving ChangeDistiller: Improving Abstract Syntax Tree based Source Code Change Detection
[ { "docid": "pos:1840072_0", "text": "Detecting and representing changes to data is important for active databases, data warehousing, view maintenance, and version and configuration management. Most previous work in change management has dealt with flat-file and relational data; we focus on hierarchically structured data. Since in many cases changes must be computed from old and new versions of the data, we define the hierarchical change detection problem as the problem of finding a \"minimum-cost edit script\" that transforms one data tree to another, and we present efficient algorithms for computing such an edit script. Our algorithms make use of some key domain characteristics to achieve substantially better performance than previous, general-purpose algorithms. We study the performance of our algorithms both analytically and empirically, and we describe the application of our techniques to hierarchically structured documents.", "title": "" } ]
[ { "docid": "neg:1840072_0", "text": "The current generation of adolescents grows up in a media-saturated world. However, it is unclear how media influences the maturational trajectories of brain regions involved in social interactions. Here we review the neural development in adolescence and show how neuroscience can provide a deeper understanding of developmental sensitivities related to adolescents’ media use. We argue that adolescents are highly sensitive to acceptance and rejection through social media, and that their heightened emotional sensitivity and protracted development of reflective processing and cognitive control may make them specifically reactive to emotion-arousing media. This review illustrates how neuroscience may help understand the mutual influence of media and peers on adolescents’ well-being and opinion formation. The current generation of adolescents grows up in a media-saturated world. Here, Crone and Konijn review the neural development in adolescence and show how neuroscience can provide a deeper understanding of developmental sensitivities related to adolescents’ media use.", "title": "" }, { "docid": "neg:1840072_1", "text": "A single high-directivity microstrip patch antenna (MPA) having a rectangular profile, which can substitute a linear array is proposed. It is designed by using genetic algorithms with the advantage of not requiring a feeding network. The patch fits inside an area of 2.54 x 0.25, resulting in a broadside pattern with a directivity of 12 dBi and a fractional impedance bandwidth of 4 %. The antenna is fabricated and the measurements are in good agreement with the simulated results. The genetic MPA provides a similar directivity as linear arrays using a corporate or series feeding, with the advantage that the genetic MPA results in more bandwidth.", "title": "" }, { "docid": "neg:1840072_2", "text": "This paper reports a 6-bit 220-MS/s time-interleaving successive approximation register analog-to-digital converter (SAR ADC) for low-power low-cost CMOS integrated systems. The major concept of the design is based on the proposed set-and-down capacitor switching method in the DAC capacitor array. Compared to the conventional switching method, the average switching energy is reduced about 81%. At 220-MS/s sampling rate, the measured SNDR and SFDR are 32.62 dB and 48.96 dB respectively. The resultant ENOB is 5.13 bits. The total power consumption is 6.8 mW. Fabricated in TSMC 0.18-µm 1P5M Digital CMOS technology, the ADC only occupies 0.032 mm2 active area.", "title": "" }, { "docid": "neg:1840072_3", "text": "To build a flexible and an adaptable architecture network supporting variety of services and their respective requirements, 5G NORMA introduced a network of functions based architecture breaking the major design principles followed in the current network of entities based architecture. This revolution exploits the advantages of the new technologies like Software-Defined Networking (SDN) and Network Function Virtualization (NFV) in conjunction with the network slicing and multitenancy concepts. In this paper we focus on the concept of Software Defined for Mobile Network Control (SDM-C) network: its definition, its role in controlling the intra network slices resources, its specificity to be QoE aware thanks to the QoE/QoS monitoring and modeling component and its complementarity with the orchestration component called SDM-O. 
To operate multiple network slices on the same infrastructure efficiently through controlling resources and network functions sharing among instantiated network slices, a common entity named SDM-X is introduced. The proposed design brings a set of new capabilities to make the network energy efficient, a feature that is discussed through some use cases.", "title": "" }, { "docid": "neg:1840072_4", "text": "This paper analyzes the basic method of digital video image processing, studies the vehicle license plate recognition system based on image processing in intelligent transport system, presents a character recognition approach based on neural network perceptron to solve the vehicle license plate recognition in real-time traffic flow. Experimental results show that the approach can achieve better positioning effect, has a certain robustness and timeliness.", "title": "" }, { "docid": "neg:1840072_5", "text": "We present an asymptotic fully polynomial approximation scheme for strip-packing, or packing rectangles into a rectangle of fixed width and minimum height, a classical NP-hard cutting-stock problem. The algorithm finds a packing of n rectangles whose total height is within a factor of (1 + ε) of optimal (up to an additive term), and has running time polynomial both in n and in 1/ε. It is based on a reduction to fractional bin-packing. Résumé Nous présentons un schéma totalement polynomial d'approximation pour la mise en boîte de rectangles dans une boîte de largeur fixée, avec hauteur minimale, qui est un problème NP-dur classique, de coupes par guillotine. L'algorithme donne un placement des rectangles, dont la hauteur est au plus égale à (1 + ε) (hauteur optimale) et a un temps d'exécution polynomial en n et en 1/ε. Il utilise une réduction au problème de la mise en boîte fractionnaire.", "title": "" }, { "docid": "neg:1840072_6", "text": "Complex tasks with a visually rich component, like diagnosing seizures based on patient video cases, not only require the acquisition of conceptual but also of perceptual skills. Medical education has found that besides biomedical knowledge (knowledge of scientific facts) clinical knowledge (actual experience with patients) is crucial. One important aspect of clinical knowledge that medical education has hardly focused on, yet, are perceptual skills, like visually searching, detecting, and interpreting relevant features. Research on instructional design has shown that in a visually rich, but simple classification task perceptual skills could be conveyed by means of showing the eye movements of a didactically behaving expert. The current study applied this method to medical education in a complex task. This was done by example video cases, which were verbally explained by an expert. In addition the experimental groups saw a display of the expert’s eye movements recorded, while he performed the task. 
Results show that blurring non-attended areas of the expert enhances diagnostic performance of epileptic seizures by medical students in contrast to displaying attended areas as a circle and to a control group without attention guidance. These findings show that attention guidance fosters learning of perceptual aspects of clinical knowledge, if implemented in a spotlight manner.", "title": "" }, { "docid": "neg:1840072_7", "text": "The Electric Vehicle Routing Problem with Time Windows (EVRPTW) is an extension to the well-known Vehicle Routing Problem with Time Windows (VRPTW) where the fleet consists of electric vehicles (EVs). Since EVs have limited driving range due to their battery capacities they may need to visit recharging stations while servicing the customers along their route. The recharging may take place at any battery level and after the recharging the battery is assumed to be full. In this paper, we relax the full recharge restriction and allow partial recharging (EVRPTW-PR) which is more practical in the real world due to shorter recharging duration. We formulate this problem as 0-1 mixed integer linear program and develop an Adaptive Large Neighborhood Search (ALNS) algorithm to solve it efficiently. We apply several removal and insertion mechanisms by selecting them dynamically and adaptively based on their past performances, including new mechanisms specifically designed for EVRPTW and EVRPTWPR. We test the performance of ALNS by using benchmark instances from the recent literature. The computational results show that the proposed method is effective in finding high quality solutions and the partial recharging option may significantly improve the routing decisions.", "title": "" }, { "docid": "neg:1840072_8", "text": "Rescue operations play an important role in disaster management and in most of the cases rescue operation are challenged by the conditions where human intervention is highly unlikely allowed, in such cases a device which can replace human limitations with advanced technology in robotics and humanoids which can track or follow a route to find the targets. In this paper we use Cellular mobile communication technology as communication channel between the transmitter and the receiving robot device. A phone is established between the transmitter mobile phone and the one on robot with a DTMF decoder which receives the motion control commands from the keypad via mobile phone. The implemented system is built around on the ARM7 LPC2148. It processes the information came from sensors and DTMF module and send to the motor driver bridge to control the motors to change direction and position of the robot. This system is designed to use best in the conditions of accidents or incidents happened in coal mining, fire accidents, bore well incidents and so on.", "title": "" }, { "docid": "neg:1840072_9", "text": "This paper describes a dynamic artificial neural network based mobile robot motion and path planning system. The method is able to navigate a robot car on flat surface among static and moving obstacles, from any starting point to any endpoint. The motion controlling ANN is trained online with an extended backpropagation through time algorithm, which uses potential fields for obstacle avoidance. The paths of the moving obstacles are predicted with other ANNs for better obstacle avoidance. 
The method is presented through the realization of the navigation system of a mobile robot.", "title": "" }, { "docid": "neg:1840072_10", "text": "Latent variable models, and probabilistic graphical models more generally, provide a declarative language for specifying prior knowledge and structural relationships in complex datasets. They have a long and rich history in natural language processing, having contributed to fundamental advances such as statistical alignment for translation (Brown et al., 1993), topic modeling (Blei et al., 2003), unsupervised part-of-speech tagging (Brown et al., 1992), and grammar induction (Klein and Manning, 2004), among others. Deep learning, broadly construed, is a toolbox for learning rich representations (i.e., features) of data through numerical optimization. Deep learning is the current dominant paradigm in natural language processing, and some of the major successes include language modeling (Bengio et al., 2003; Mikolov et al., 2010; Zaremba et al., 2014), machine translation (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017), and natural language understanding tasks such as question answering and natural language inference.", "title": "" }, { "docid": "neg:1840072_11", "text": "Neural Machine Translation (NMT) models are often trained on heterogeneous mixtures of domains, from news to parliamentary proceedings, each with unique distributions and language. In this work we show that training NMT systems on naively mixed data can degrade performance versus models fit to each constituent domain. We demonstrate that this problem can be circumvented, and propose three models that do so by jointly learning domain discrimination and translation. We demonstrate the efficacy of these techniques by merging pairs of domains in three languages: Chinese, French, and Japanese. After training on composite data, each approach outperforms its domain-specific counterparts, with a model based on a discriminator network doing so most reliably. We obtain consistent performance improvements and an average increase of 1.1 BLEU.", "title": "" }, { "docid": "neg:1840072_12", "text": "Variational autoencoders (VAE) are a powerful and widely-used class of models to learn complex data distributions in an unsupervised fashion. One important limitation of VAEs is the prior assumption that latent sample representations are independent and identically distributed. However, for many important datasets, such as time-series of images, this assumption is too strong: accounting for covariances between samples, such as those in time, can yield to a more appropriate model specification and improve performance in downstream tasks. In this work, we introduce a new model, the Gaussian Process (GP) Prior Variational Autoencoder (GPPVAE), to specifically address this issue. The GPPVAE aims to combine the power of VAEs with the ability to model correlations afforded by GP priors. To achieve efficient inference in this new class of models, we leverage structure in the covariance matrix, and introduce a new stochastic backpropagation strategy that allows for computing stochastic gradients in a distributed and low-memory fashion. 
We show that our method outperforms conditional VAEs (CVAEs) and an adaptation of standard VAEs in two image data applications.", "title": "" }, { "docid": "neg:1840072_13", "text": "OBJECTIVE\nTo compare the prevalence of anxiety, depression, and stress in medical students from all semesters of a Brazilian medical school and assess their respective associated factors.\n\n\nMETHOD\nA cross-sectional study of students from the twelve semesters of a Brazilian medical school was carried out. Students filled out a questionnaire including sociodemographics, religiosity (DUREL - Duke Religion Index), and mental health (DASS-21 - Depression, Anxiety, and Stress Scale). The students were compared for mental health variables (Chi-squared/ANOVA). Linear regression models were employed to assess factors associated with DASS-21 scores.\n\n\nRESULTS\n761 (75.4%) students answered the questionnaire; 34.6% reported depressive symptomatology, 37.2% showed anxiety symptoms, and 47.1% stress symptoms. Significant differences were found for: anxiety - ANOVA: [F = 2.536, p=0.004] between first and tenth (p=0.048) and first and eleventh (p=0.025) semesters; depression - ANOVA: [F = 2.410, p=0.006] between first and second semesters (p=0.045); and stress - ANOVA: [F = 2.968, p=0.001] between seventh and twelfth (p=0.044), tenth and twelfth (p=0.011), and eleventh and twelfth (p=0.001) semesters. The following factors were associated with (a) stress: female gender, anxiety, and depression; (b) depression: female gender, intrinsic religiosity, anxiety, and stress; and (c) anxiety: course semester, depression, and stress.\n\n\nCONCLUSION\nOur findings revealed high levels of depression, anxiety, and stress symptoms in medical students, with marked differences among course semesters. Gender and religiosity appeared to influence the mental health of the medical students.", "title": "" }, { "docid": "neg:1840072_14", "text": "A large body of evidence supports the hypothesis that mesolimbic dopamine (DA) mediates, in animal models, the reinforcing effects of central nervous system stimulants such as cocaine and amphetamine. The role DA plays in mediating amphetamine-type subjective effects of stimulants in humans remains to be established. Both amphetamine and cocaine increase norepinephrine (NE) via stimulation of release and inhibition of reuptake, respectively. If increases in NE mediate amphetamine-type subjective effects of stimulants in humans, then one would predict that stimulant medications that produce amphetamine-type subjective effects in humans should share the ability to increase NE. To test this hypothesis, we determined, using in vitro methods, the neurochemical mechanism of action of amphetamine, 3,4-methylenedioxymethamphetamine (MDMA), (+)-methamphetamine, ephedrine, phentermine, and aminorex. As expected, their rank order of potency for DA release was similar to their rank order of potency in published self-administration studies. Interestingly, the results demonstrated that the most potent effect of these stimulants is to release NE. Importantly, the oral dose of these stimulants, which produce amphetamine-type subjective effects in humans, correlated with the their potency in releasing NE, not DA, and did not decrease plasma prolactin, an effect mediated by DA release. 
These results suggest that NE may contribute to the amphetamine-type subjective effects of stimulants in humans.", "title": "" }, { "docid": "neg:1840072_15", "text": "OBJECTIVE\nThe pathophysiology of peptic ulcer disease (PUD) in liver cirrhosis (LC) and chronic hepatitis has not been established. The aim of this study was to assess the role of portal hypertension from PUD in patients with LC and chronic hepatitis.\n\n\nMATERIALS AND METHODS\nWe analyzed the medical records of 455 hepatic vein pressure gradient (HVPG) and esophagogastroduodenoscopy patients who had LC or chronic hepatitis in a single tertiary hospital. The association of PUD with LC and chronic hepatitis was assessed by univariate and multivariate analysis.\n\n\nRESULTS\nA total of 72 PUD cases were detected. PUD was associated with LC more than with chronic hepatitis (odds ratio [OR]: 4.13, p = 0.03). In the univariate analysis, taking an ulcerogenic medication was associated with PUD in patients with LC (OR: 4.34, p = 0.04) and smoking was associated with PUD in patients with chronic hepatitis (OR: 3.61, p = 0.04). In the multivariate analysis, taking an ulcerogenic medication was associated with PUD in patients with LC (OR: 2.93, p = 0.04). However, HVPG was not related to PUD in patients with LC or chronic hepatitis.\n\n\nCONCLUSION\nAccording to the present study, patients with LC have a higher risk of PUD than those with chronic hepatitis. The risk factor was taking ulcerogenic medication. However, HVPG reflecting portal hypertension was not associated with PUD in LC or chronic hepatitis (Clinicaltrial number NCT01944878).", "title": "" }, { "docid": "neg:1840072_16", "text": "This paper presents a method for measuring signal backscattering from RFID tags, and for calculating a tag's radar cross section (RCS). We derive a theoretical formula for the RCS of an RFID tag with a minimum-scattering antenna. We describe an experimental measurement technique, which involves using a network analyzer connected to an anechoic chamber with and without the tag. The return loss measured in this way allows us to calculate the backscattered power and to find the tag's RCS. Measurements were performed using an RFID tag operating in the UHF band. To determine whether the tag was turned on, we used an RFID tag tester. The tag's RCS was also calculated theoretically, using electromagnetic simulation software. The theoretical results were found to be in good agreement with experimental data", "title": "" }, { "docid": "neg:1840072_17", "text": "This article builds a theoretical framework to help explain governance patterns in global value chains. It draws on three streams of literature – transaction costs economics, production networks, and technological capability and firm-level learning – to identify three variables that play a large role in determining how global value chains are governed and change. These are: (1) the complexity of transactions, (2) the ability to codify transactions, and (3) the capabilities in the supply-base. The theory generates five types of global value chain governance – hierarchy, captive, relational, modular, and market – which range from high to low levels of explicit coordination and power asymmetry. 
The article highlights the dynamic and overlapping nature of global value chain governance through four brief industry case studies: bicycles, apparel, horticulture and electronics.", "title": "" }, { "docid": "neg:1840072_18", "text": "Music, an abstract stimulus, can arouse feelings of euphoria and craving, similar to tangible rewards that involve the striatal dopaminergic system. Using the neurochemical specificity of [11C]raclopride positron emission tomography scanning, combined with psychophysiological measures of autonomic nervous system activity, we found endogenous dopamine release in the striatum at peak emotional arousal during music listening. To examine the time course of dopamine release, we used functional magnetic resonance imaging with the same stimuli and listeners, and found a functional dissociation: the caudate was more involved during the anticipation and the nucleus accumbens was more involved during the experience of peak emotional responses to music. These results indicate that intense pleasure in response to music can lead to dopamine release in the striatal system. Notably, the anticipation of an abstract reward can result in dopamine release in an anatomical pathway distinct from that associated with the peak pleasure itself. Our results help to explain why music is of such high value across all human societies.", "title": "" } ]
1840073
Hate Speech Detection: A Solved Problem? The Challenging Case of Long Tail on Twitter
[ { "docid": "pos:1840073_0", "text": "In this paper, we propose a novel semi-supervised approach for detecting profanity-related offensive content in Twitter. Our approach exploits linguistic regularities in profane language via statistical topic modeling on a huge Twitter corpus, and detects offensive tweets using automatically these generated features. Our approach performs competitively with a variety of machine learning (ML) algorithms. For instance, our approach achieves a true positive rate (TP) of 75.1% over 4029 testing tweets using Logistic Regression, significantly outperforming the popular keyword matching baseline, which has a TP of 69.7%, while keeping the false positive rate (FP) at the same level as the baseline at about 3.77%. Our approach provides an alternative to large scale hand annotation efforts required by fully supervised learning approaches.", "title": "" }, { "docid": "pos:1840073_1", "text": "Irony is a pervasive aspect of many online texts, one made all the more difficult by the absence of face-to-face contact and vocal intonation. As our media increasingly become more social, the problem of irony detection will become even more pressing. We describe here a set of textual features for recognizing irony at a linguistic level, especially in short texts created via social media such as Twitter postings or ‘‘tweets’’. Our experiments concern four freely available data sets that were retrieved from Twitter using content words (e.g. ‘‘Toyota’’) and user-generated tags (e.g. ‘‘#irony’’). We construct a new model of irony detection that is assessed along two dimensions: representativeness and relevance. Initial results are largely positive, and provide valuable insights into the figurative issues facing tasks such as sentiment analysis, assessment of online reputations, or decision making.", "title": "" }, { "docid": "pos:1840073_2", "text": "Hate speech in the form of racist and sexist remarks are a common occurrence on social media. For that reason, many social media services address the problem of identifying hate speech, but the definition of hate speech varies markedly and is largely a manual effort (BBC, 2015; Lomas, 2015). We provide a list of criteria founded in critical race theory, and use them to annotate a publicly available corpus of more than 16k tweets. We analyze the impact of various extra-linguistic features in conjunction with character n-grams for hatespeech detection. We also present a dictionary based the most indicative words in our data.", "title": "" }, { "docid": "pos:1840073_3", "text": "Hate speech in the form of racism and sexism is commonplace on the internet (Waseem and Hovy, 2016). For this reason, there has been both an academic and an industry interest in detection of hate speech. The volume of data to be reviewed for creating data sets encourages a use of crowd sourcing for the annotation efforts. In this paper, we provide an examination of the influence of annotator knowledge of hate speech on classification models by comparing classification results obtained from training on expert and amateur annotations. We provide an evaluation on our own data set and run our models on the data set released by Waseem and Hovy (2016). 
We find that amateur annotators are more likely than expert annotators to label items as hate speech, and that systems trained on expert annotations outperform systems trained on amateur annotations.", "title": "" }, { "docid": "pos:1840073_4", "text": "The quality of word representations is frequently assessed using correlation with human judgements of word similarity. Here, we question whether such intrinsic evaluation can predict the merits of the representations for downstream tasks. We study the correlation between results on ten word similarity benchmarks and tagger performance on three standard sequence labeling tasks using a variety of word vectors induced from an unannotated corpus of 3.8 billion words, and demonstrate that most intrinsic evaluations are poor predictors of downstream performance. We argue that this issue can be traced in part to a failure to distinguish specific similarity from relatedness in intrinsic evaluation datasets. We make our evaluation tools openly available to facilitate further study.", "title": "" } ]
[ { "docid": "neg:1840073_0", "text": "This paper describes the Sonic Banana, a bend-sensor based alternative MIDI controller.", "title": "" }, { "docid": "neg:1840073_1", "text": "In this paper we describe a database of static images of human faces. Images were taken in uncontrolled indoor environment using five video surveillance cameras of various qualities. Database contains 4,160 static images (in visible and infrared spectrum) of 130 subjects. Images from different quality cameras should mimic real-world conditions and enable robust face recognition algorithms testing, emphasizing different law enforcement and surveillance use case scenarios. In addition to database description, this paper also elaborates on possible uses of the database and proposes a testing protocol. A baseline Principal Component Analysis (PCA) face recognition algorithm was tested following the proposed protocol. Other researchers can use these test results as a control algorithm performance score when testing their own algorithms on this dataset. Database is available to research community through the procedure described at www.scface.org .", "title": "" }, { "docid": "neg:1840073_2", "text": "Multimodal wearable sensor data classification plays an important role in ubiquitous computing and has a wide range of applications in scenarios from healthcare to entertainment. However, most existing work in this field employs domain-specific approaches and is thus ineffective in complex situations where multi-modality sensor data are collected. Moreover, the wearable sensor data are less informative than the conventional data such as texts or images. In this paper, to improve the adaptability of such classification methods across different application domains, we turn this classification task into a game and apply a deep reinforcement learning scheme to deal with complex situations dynamically. Additionally, we introduce a selective attention mechanism into the reinforcement learning scheme to focus on the crucial dimensions of the data. This mechanism helps to capture extra information from the signal and thus it is able to significantly improve the discriminative power of the classifier. We carry out several experiments on three wearable sensor datasets and demonstrate the competitive performance of the proposed approach compared to several state-of-the-art baselines.", "title": "" }, { "docid": "neg:1840073_3", "text": "With the growth of the Internet of Things, many insecure embedded devices are entering into our homes and businesses. Some of these web-connected devices lack even basic security protections such as secure password authentication. As a result, thousands of IoT devices have already been infected with malware and enlisted into malicious botnets and many more are left vulnerable to exploitation. In this paper we analyze the practical security level of 16 popular IoT devices from high-end and low-end manufacturers. We present several low-cost black-box techniques for reverse engineering these devices, including software and fault injection based techniques for bypassing password protection. We use these techniques to recover device rmware and passwords. We also discover several common design aws which lead to previously unknown vulnerabilities. We demonstrate the e ectiveness of our approach by modifying a laboratory version of the Mirai botnet to automatically include these devices. 
We also discuss how to improve the security of IoT devices without significantly increasing their cost.", "title": "" }, { "docid": "neg:1840073_4", "text": "We introduce two appearance-based methods for clustering a set of images of 3-D objects, acquired under varying illumination conditions, into disjoint subsets corresponding to individual objects. The first algorithm is based on the concept of illumination cones. According to the theory, the clustering problem is equivalent to finding convex polyhedral cones in the high-dimensional image space. To efficiently determine the conic structures hidden in the image data, we introduce the concept of conic affinity which measures the likelihood of a pair of images belonging to the same underlying polyhedral cone. For the second method, we introduce another affinity measure based on image gradient comparisons. The algorithm operates directly on the image gradients by comparing the magnitudes and orientations of the image gradient at each pixel. Both methods have clear geometric motivations, and they operate directly on the images without the need for feature extraction or computation of pixel statistics. We demonstrate experimentally that both algorithms are surprisingly effective in clustering images acquired under varying illumination conditions with two large, well-known image data sets.", "title": "" }, { "docid": "neg:1840073_5", "text": "We assessed the detection rate of recurrent prostate cancer by PET/CT using anti-3-18F-FACBC, a new synthetic amino acid, in comparison to that using 11C-choline as part of an ongoing prospective single-centre study. Included in the study were 15 patients with biochemical relapse after initial radical treatment of prostate cancer. All the patients underwent anti-3-18F-FACBC PET/CT and 11C-choline PET/CT within a 7-day period. The detection rates using the two compounds were determined and the target–to-background ratios (TBR) of each lesion are reported. No adverse reactions to anti-3-18F-FACBC PET/CT were noted. On a patient basis, 11C-choline PET/CT was positive in 3 patients and negative in 12 (detection rate 20 %), and anti-3-18F-FACBC PET/CT was positive in 6 patients and negative in 9 (detection rate 40 %). On a lesion basis, 11C-choline detected 6 lesions (4 bone, 1 lymph node, 1 local relapse), and anti-3-18F-FACBC detected 11 lesions (5 bone, 5 lymph node, 1 local relapse). All 11C-choline-positive lesions were also identified by anti-3-18F-FACBC PET/CT. The TBR of anti-3-18F-FACBC was greater than that of 11C-choline in 8/11 lesions, as were image quality and contrast. Our preliminary results indicate that anti-3-18F-FACBC may be superior to 11C-choline for the identification of disease recurrence in the setting of biochemical failure. Further studies are required to assess efficacy of anti-3-18F-FACBC in a larger series of prostate cancer patients.", "title": "" }, { "docid": "neg:1840073_6", "text": "Now-a-days as there is prohibitive demand for agricultural industry, effective growth and improved yield of fruit is necessary and important. For this purpose farmers need manual monitoring of fruits from harvest till its progress period. But manual monitoring will not give satisfactory result all the times and they always need satisfactory advice from expert. So it requires proposing an efficient smart farming technique which will help for better yield and growth with less human efforts. We introduce a technique which will diagnose and classify external disease within fruits. 
Traditional system uses thousands of words which lead to boundary of language. Whereas system that we have come up with, uses image processing techniques for implementation as image is easy way for conveying. In the proposed work, OpenCV library is applied for implementation. K-means clustering method is applied for image segmentation, the images are catalogue and mapped to their respective disease categories on basis of four feature vectors color, morphology, texture and structure of hole on the fruit. The system uses two image databases, one for implementation of query images and the other for training of already stored disease images. Artificial Neural Network (ANN) concept is used for pattern matching and classification of diseases.", "title": "" }, { "docid": "neg:1840073_7", "text": "Understanding the content of user's image posts is a particularly interesting problem in social networks and web settings. Current machine learning techniques focus mostly on curated training sets of image-label pairs, and perform image classification given the pixels within the image. In this work we instead leverage the wealth of information available from users: firstly, we employ user hashtags to capture the description of image content; and secondly, we make use of valuable contextual information about the user. We show how user metadata (age, gender, etc.) combined with image features derived from a convolutional neural network can be used to perform hashtag prediction. We explore two ways of combining these heterogeneous features into a learning framework: (i) simple concatenation; and (ii) a 3-way multiplicative gating, where the image model is conditioned on the user metadata. We apply these models to a large dataset of de-identified Facebook posts and demonstrate that modeling the user can significantly improve the tag prediction quality over current state-of-the-art methods.", "title": "" }, { "docid": "neg:1840073_8", "text": "As the number of academic papers and new technologies soars, it has been increasingly difficult for researchers, especially beginners, to enter a new research field. Researchers often need to study a promising paper in depth to keep up with the forefront of technology. Traditional Query-Oriented study method is time-consuming and even tedious. For a given paper, existent academic search engines like Google Scholar tend to recommend relevant papers, failing to reveal the knowledge structure. The state-of-the-art MapOriented study methods such as AMiner and AceMap can structure scholar information, but they’re too coarse-grained to dig into the underlying principles of a specific paper. To address this problem, we propose a Study-Map Oriented method and a novel model called RIDP (Reference Injection based Double-Damping PageRank) to help researchers study a given paper more efficiently and thoroughly. RIDP integrates newly designed Reference Injection based Topic Analysis method and Double-Damping PageRank algorithm to mine a Study Map out of massive academic papers in order to guide researchers to dig into the underlying principles of a specific paper. 
Experiment results on real datasets and pilot user studies indicate that our method can help researchers acquire knowledge more efficiently, and grasp knowledge structure systematically.", "title": "" }, { "docid": "neg:1840073_9", "text": "Various types of incidents and disasters cause huge loss to people's lives and property every year and highlight the need to improve our capabilities to handle natural, health, and manmade emergencies. How to develop emergency management systems that can provide critical decision support to emergency management personnel is considered a crucial issue by researchers and practitioners. Governments, such as the USA, the European Commission, and China, have recognized the importance of emergency management and funded national level emergency management projects during the past decade. Multi-criteria decision making (MCDM) refers to the study of methods and procedures by which concerns about multiple and often competing criteria can be formally incorporated into the management planning process. Over the years, it has evolved as an important field of Operations Research, focusing on issues as: analyzing and evaluating of incompatible criteria and alternatives; modeling decision makers' preferences; developing MCDM-based decision support systems; designing MCDM research paradigm; identifying compromising solutions of multi-criteria decision making problems. İn emergency management, MCDM can be used to evaluate decision alternatives and assist decision makers in making immediate and effective responses under pressures and uncertainties. However, although various approaches and technologies have been developed in the MCDM field to handle decision problems with conflicting criteria in some domains, effective decision support in emergency management requires in depth analysis of current MCDM methods and techniques, and adaptation of these techniques specifically for emergency management. In terms of this basic fact, the guest editors determined that the purpose of this special issue should be “to assess the current state of knowledge about MCDM in emergency management and to generate and throw open for discussion, more ideas, hypotheses and theories, the specific objective being to determine directions for further research”. For this purpose, this special issue presents some new progress about MCDM in emergency management that is expected to trigger thought and deepen further research. For this purpose, 11 papers [1–11] were selected from 41 submissions related to MCDM in emergency management from different countries and regions. All the selected papers went through a standard review process of the journal and the authors of all the papers made necessary revision in terms of reviewing comments. In the selected 11 papers, they can be divided into three categories. The first category focuses on innovative MCDM methods for logistics management, which includes 3 papers. The first paper written by Liberatore et al. [1] is to propose a hierarchical compromise model called RecHADS method for the joint optimization of recovery operations and distribution of emergency goods in humanitarian logistics. In the second paper, Peng et al. [2] applies a system dynamics disruption analysis approach for inventory and logistics planning in the post-seismic supply chain risk management. 
In the third paper, Rath and Gutjahr [3] present an exact solution method and a matheuristic method to solve the warehouse location routing problem in disaster relief and obtain good performance. In the second category, 4 papers about MCDM-based risk assessment and risk decision-making methods in emergency response and emergency management are selected. In terms of the previous order, the fourth paper [4] integrates the TODIM method and the FSE method to formulate a new TODIM-FSE method for risk decision-making support in oil spill response. The fifth paper [5] utilizes a fault tree analysis (FTA) method to give a risk decision-making solution for emergency response, especially in the case of the H1N1 infectious diseases. Similarly, the sixth paper [6] focuses on an analytic network process (ANP) method for risk assessment and decision analysis, while the seventh paper [7] applies the cumulative prospect theory (CPT) method to risk decision analysis in emergency response. The papers in the third category emphasize MCDM methods for disaster assessment and emergency management, and four papers are included in this category. In the same order, the eighth paper [8] proposes a multi-event and multi-criteria method to evaluate the situation of disaster resilience. In the ninth paper, Kou et al. [9] develop an integrated expert system for fast disaster assessment and obtain good evaluation performance. Similarly, the 10th paper [10] proposes a multi-objective programming approach to make optimal decisions for an oil-importing plan considering country risk with extreme events. Finally, the last paper [11] in this special issue develops a community-based collaborative information system to manage natural and manmade disasters. The guest editors hope that the papers published in this special issue will be of value to academic researchers and business practitioners and will provide a clearer sense of direction for further research, as well as facilitating use of existing methodologies in a more productive manner. The guest editors would like to place on record their sincere thanks to Prof. Stefan Nickel, the Editor-in-Chief of Computers & Operations Research, for this very special opportunity provided to us for contributing to this special issue. The guest editors have to thank all the referees for their kind support and help. Last, but not least, the guest editors express their gratitude to all authors of submissions in this special issue for their contribution. Without the support of the authors and the referees, it would have been", "title": "" }, { "docid": "neg:1840073_10", "text": "Abstracting Interactions Based on Message Sets. Svend Frølund 1 and Gul Agha 2. 1 Hewlett-Packard Laboratories, 1501 Page Mill Road, Palo Alto, CA 94303; 2 University of Illinois, 1304 W. Springfield Avenue, Urbana, IL 61801. Abstract. An important requirement of programming languages for distributed systems is to provide abstractions for coordination. A common type of coordination requires reactivity in response to arbitrary communication patterns. We have developed a communication model in which concurrent objects can be activated by sets of messages. Specifically, our model allows direct and abstract expression of common interaction patterns found in concurrent systems. For example, the model captures multiple clients that collectively invoke shared servers as a single activation.
Furthermore, it supports definition of individual clients that concurrently invoke multiple servers and wait for subsets of the returned reply messages. Message sets are dynamically defined using conjunctive and disjunctive combinators that may depend on the patterns of messages. The model subsumes existing models for multiRPC and multi-party synchronization within a single, uniform activation framework. 1 Introduction. Distributed objects are often reactive, i.e. they carry out their actions in response to received messages. Traditional object-oriented languages require a one-to-one correspondence between a response and a received message: i.e. each response is caused by exactly one message. However, many coordination schemes involve object behaviors whose logical cause is a set of messages rather than a single message. For example, consider a transaction manager in a distributed database system. In order to commit a distributed transaction, the manager must coordinate the action taken at each site involved in the transaction. A two-phase commit protocol is a possible implementation of this coordination pattern. In carrying out a two-phase commit protocol, the manager first sends out a status inquiry to all the sites involved. In response to a status inquiry, each site sends a positive reply if it can commit the transaction; a site sends back a negative reply if it cannot commit the transaction. After sending out inquiries, the manager becomes a reactive object waiting for sites to reply. The logical structure of the manager is to react to a set of replies rather than a single reply: if a positive reply is received from all sites, the manager decides to commit the transaction; if a negative reply is received from any site, the manager must abort the transaction. In traditional object-oriented languages, the programmer must implement a response to a set of messages in terms of sequences of responses to single messages. * The reported work was carried out while the first author was affiliated with the University of Illinois. The current email addresses are frolund@hpl.hp.com and agha@cs.uiuc.edu.", "title": "" }, { "docid": "neg:1840073_11", "text": "As the automobile industry evolves, a number of in-vehicle communication protocols are developed for different in-vehicle applications. With the emerging new applications towards the Internet of Things (IoT), a more integral solution is needed to enable the pervasiveness of intra- and inter-vehicle communications. In this survey, we first introduce different classifications of automobile applications with a focus on their bandwidth and latency. Then we survey different in-vehicle communication bus protocols including both legacy protocols and emerging Ethernet. In addition, we highlight our contribution in the field to employ the power line as the in-vehicle communication medium. We believe power line communication will play an important part in the future automobile, as it can potentially reduce the amount of wiring, simplify design and reduce cost. Based on these technologies, we also introduce some promising applications in the future automobile enabled by the development of the in-vehicle network. Finally, we will share our view on how the in-vehicle network can be merged into the future IoT.", "title": "" }, { "docid": "neg:1840073_12", "text": "This paper presents a general trainable framework for object detection in static images of cluttered scenes.
The detection technique we develop is based on a wavelet representation of an object class derived from a statistical analysis of the class instances. By learning an object class in terms of a subset of an overcomplete dictionary of wavelet basis functions, we derive a compact representation of an object class which is used as an input to a support vector machine classifier. This representation overcomes both the problem of in-class variability and provides a low false detection rate in unconstrained environments. We demonstrate the capabilities of the technique in two domains whose inherent information content differs significantly. The first system is face detection and the second is the domain of people which, in contrast to faces, vary greatly in color, texture, and patterns. Unlike previous approaches, this system learns from examples and does not rely on any a priori (handcrafted) models or motion-based segmentation. The paper also presents a motion-based extension to enhance the performance of the detection algorithm over video sequences. The results presented here suggest that this architecture may well be quite general.", "title": "" }, { "docid": "neg:1840073_13", "text": "The modified Brostrom procedure is commonly recommended for reconstruction of the anterior talofibular ligament (ATF) and calcaneofibular ligament (CF) with an advancement of the inferior retinaculum. However, some surgeons perform the modified Brostrom procedure with a semi-single ATF ligament reconstruction and advancement of the inferior retinaculum for simplicity. This study evaluated the initial stability of the modified Brostrom procedure and compared a two ligaments (ATF + CF) reconstruction group with a semi-single ligament (ATF) reconstruction group. Sixteen paired fresh frozen cadaveric ankle joints were used in this study. The ankle joint laxity was measured on plain radiographs with 150 N anterior drawer force and 150 N varus stress force. The anterior displacement distances and varus tilt angles were measured before and after cutting the ATF and CF ligaments. A two ligaments (ATF + CF) reconstruction with an advancement of the inferior retinaculum was performed on eight left cadaveric ankles, and a semi-single ligament (ATF) reconstruction with an advancement of the inferior retinaculum was performed on eight right cadaveric ankles. The ankle instability was rechecked after surgery. The decreases in instability of the ankle after surgery were measured and the difference in the decrease was compared using a Mann–Whitney U test. The mean decreases in anterior displacement were 3.4 and 4.0 mm in the two ligaments reconstruction and semi-single ligament reconstruction groups, respectively. There was no significant difference between the two groups (P = 0.489). The mean decreases in the varus tilt angle in the two ligaments reconstruction and semi-single ligament reconstruction groups were 12.6° and 12.2°, respectively. There was no significant difference between the two groups (P = 0.399). In this cadaveric study, a substantial level of initial stability can be obtained using an anatomical reconstruction of the anterior talofibular ligament only and reinforcement with the inferior retinaculum.
The modified Brostrom procedure with a semi-single ligament (Anterior talofibular ligament) reconstruction with an advancement of the inferior retinaculum can provide as much initial stability as the two ligaments (Anterior talofibular ligament and calcaneofibular ligament) reconstruction procedure.", "title": "" }, { "docid": "neg:1840073_14", "text": "If subjects are shown an angry face as a target visual stimulus for less than forty milliseconds and are then immediately shown an expressionless mask, these subjects report seeing the mask but not the target. However, an aversively conditioned masked target can elicit an emotional response from subjects without being consciously perceived,. Here we study the mechanism of this unconsciously mediated emotional learning. We measured neural activity in volunteer subjects who were presented with two angry faces, one of which, through previous classical conditioning, was associated with a burst of white noise. In half of the trials, the subjects' awareness of the angry faces was prevented by backward masking with a neutral face. A significant neural response was elicited in the right, but not left, amygdala to masked presentations of the conditioned angry face. Unmasked presentations of the same face produced enhanced neural activity in the left, but not right, amygdala. Our results indicate that, first, the human amygdala can discriminate between stimuli solely on the basis of their acquired behavioural significance, and second, this response is lateralized according to the subjects' level of awareness of the stimuli.", "title": "" }, { "docid": "neg:1840073_15", "text": "Jinsight is a tool for exploring a program’s run-time behavior visually. It is helpful for performance analysis, debugging, and any task in which you need to better understand what your Java program is really doing. Jinsight is designed specifically with object-oriented and multithreaded programs in mind. It exposes many facets of program behavior that elude conventional tools. It reveals object lifetimes and communication, and attendant performance bottlenecks. It shows thread interactions, deadlocks, and garbage collector activity. It can also help you find and fix memory leaks, which remain a hazard despite garbage collection. A user explores program execution through one or more views. Jinsight offers several types of views, each geared toward distinct aspects of object-oriented and multithreaded program behavior. The user has several different perspectives from which to discern performance problems, unexpected behavior, or bugs small and large. Moreover, the views are linked to each other in many ways, allowing navigation from one view to another. Navigation makes the collection of views far more powerful than the sum of their individual strengths.", "title": "" }, { "docid": "neg:1840073_16", "text": "As mobile ad hoc network applications are deployed, security emerges as a central requirement. In this paper, we introduce the wormhole attack, a severe attack in ad hoc networks that is particularly challenging to defend against. The wormhole attack is possible even if the attacker has not compromised any hosts, and even if all communication provides authenticity and confidentiality. In the wormhole attack, an attacker records packets (or bits) at one location in the network, tunnels them (possibly selectively) to another location, and retransmits them there into the network. 
The wormhole attack can form a serious threat in wireless networks, especially against many ad hoc network routing protocols and location-based wireless security systems. For example, most existing ad hoc network routing protocols, without some mechanism to defend against the wormhole attack, would be unable to find routes longer than one or two hops, severely disrupting communication. We present a general mechanism, called packet leashes, for detecting and, thus defending against wormhole attacks, and we present a specific protocol, called TIK, that implements leashes. We also discuss topology-based wormhole detection, and show that it is impossible for these approaches to detect some wormhole topologies.", "title": "" }, { "docid": "neg:1840073_17", "text": "We present Tartanian, a game theory-based player for headsup no-limit Texas Hold’em poker. Tartanian is built from three components. First, to deal with the virtually infinite strategy space of no-limit poker, we develop a discretized betting model designed to capture the most important strategic choices in the game. Second, we employ potential-aware automated abstraction algorithms for identifying strategically similar situations in order to decrease the size of the game tree. Third, we develop a new technique for automatically generating the source code of an equilibrium-finding algorithm from an XML-based description of a game. This automatically generated program is more efficient than what would be possible with a general-purpose equilibrium-finding program. Finally, we present results from the AAAI-07 Computer Poker Competition, in which Tartanian placed second out of ten entries.", "title": "" }, { "docid": "neg:1840073_18", "text": "This paper presents a system that uses symbolic representations of audio concepts as words for the descriptions of audio tracks, that enable it to go beyond the state of the art, which is audio event classification of a small number of audio classes in constrained settings, to large-scale classification in the wild. These audio words might be less meaningful for an annotator but they are descriptive for computer algorithms. We devise a random-forest vocabulary learning method with an audio word weighting scheme based on TF-IDF and TD-IDD, so as to combine the computational simplicity and accurate multi-class classification of the random forest with the data-driven discriminative power of the TF-IDF/TD-IDD methods. The proposed random forest clustering with text-retrieval methods significantly outperforms two state-of-the-art methods on the dry-run set and the full set of the TRECVID MED 2010 dataset.", "title": "" }, { "docid": "neg:1840073_19", "text": "Network function virtualization (NFV) is a promising technique aimed at reducing capital expenditures (CAPEX) and operating expenditures (OPEX), and improving the flexibility and scalability of an entire network. In contrast to traditional dispatching, NFV can separate network functions from proprietary infrastructure and gather these functions into a resource pool that can efficiently modify and adjust service function chains (SFCs). However, this emerging technique has some challenges. A major problem is reliability, which involves ensuring the availability of deployed SFCs, namely, the probability of successfully chaining a series of virtual network functions while considering both the feasibility and the specific requirements of clients, because the substrate network remains vulnerable to earthquakes, floods, and other natural disasters. 
Based on the premise of users’ demands for SFC requirements, we present a reliability-ensuring, cost-saving algorithm to reduce the CAPEX and OPEX of telecommunication service providers by reducing the reliability of the SFC deployments. The results of extensive experiments indicate that the proposed algorithms perform efficiently in terms of the blocking ratio, resource consumption, time consumption, and the first block.", "title": "" } ]
1840074
Implementing Gender-Dependent Vowel-Level Analysis for Boosting Speech-Based Depression Recognition
[ { "docid": "pos:1840074_0", "text": "Emotional speech recognition aims to automatically classify speech units (e.g., utterances) into emotional states, such as anger, happiness, neutral, sadness and surprise. The major contribution of this paper is to rate the discriminating capability of a set of features for emotional speech recognition when gender information is taken into consideration. A total of 87 features has been calculated over 500 utterances of the Danish Emotional Speech database. The Sequential Forward Selection method (SFS) has been used in order to discover the 5-10 features which are able to classify the samples in the best way for each gender. The criterion used in SFS is the crossvalidated correct classification rate of a Bayes classifier where the class probability distribution functions (pdfs) are approximated via Parzen windows or modeled as Gaussians. When a Bayes classifier with Gaussian pdfs is employed, a correct classification rate of 61.1% is obtained for male subjects and a corresponding rate of 57.1% for female ones. In the same experiment, a random Emotional speech recognition aims to automatically classify speech units (e.g., utterances) into emotional states, such as anger, happiness, neutral, sadness and surprise. The major contribution of this paper is to rate the discriminating capability of a set of features for emotional speech recognition when gender information is taken into consideration. A total of 87 features has been calculated over 500 utterances of the Danish Emotional Speech database. The Sequential Forward Selection method (SFS) has been used in order to discover the 5-10 features which are able to classify the samples in the best way for each gender. The criterion used in SFS is the crossvalidated correct classification rate of a Bayes classifier where the class probability distribution functions (pdfs) are approximated via Parzen windows or modeled as Gaussians. When a Bayes classifier with Gaussian PDFs is employed, a correct classification rate of 61.1% is obtained for male subjects and a corresponding rate of 57.1% for female ones. In the same experiment, a random classification would result in a correct classification rate of 20%. When gender information is not considered a correct classification score of 50.6% is obtained.classification would result in a correct classification rate of 20%. When gender information is not considered a correct classification score of 50.6% is obtained.", "title": "" }, { "docid": "pos:1840074_1", "text": "Mood disorders are inherently related to emotion. In particular, the behaviour of people suffering from mood disorders such as unipolar depression shows a strong temporal correlation with the affective dimensions valence, arousal and dominance. In addition to structured self-report questionnaires, psychologists and psychiatrists use in their evaluation of a patient's level of depression the observation of facial expressions and vocal cues. It is in this context that we present the fourth Audio-Visual Emotion recognition Challenge (AVEC 2014). This edition of the challenge uses a subset of the tasks used in a previous challenge, allowing for more focussed studies. In addition, labels for a third dimension (Dominance) have been added and the number of annotators per clip has been increased to a minimum of three, with most clips annotated by 5. 
The challenge has two goals logically organised as sub-challenges: the first is to predict the continuous values of the affective dimensions valence, arousal and dominance at each moment in time. The second is to predict the value of a single self-reported severity of depression indicator for each recording in the dataset. This paper presents the challenge guidelines, the common data used, and the performance of the baseline system on the two tasks.", "title": "" } ]
[ { "docid": "neg:1840074_0", "text": "The plant hormones gibberellin and abscisic acid regulate gene expression, secretion and cell death in aleurone. The emerging picture is of gibberellin perception at the plasma membrane whereas abscisic acid acts at both the plasma membrane and in the cytoplasm - although gibberellin and abscisic acid receptors have yet to be identified. A range of downstream-signalling components and events has been implicated in gibberellin and abscisic acid signalling in aleurone. These include the Galpha subunit of a heterotrimeric G protein, a transient elevation in cGMP, Ca2+-dependent and Ca2+-independent events in the cytoplasm, reversible protein phosphory-lation, and several promoter cis-elements and transcription factors, including GAMYB. In parallel, molecular genetic studies on mutants of Arabidopsis that show defects in responses to these hormones have identified components of gibberellin and abscisic acid signalling. These two approaches are yielding results that raise the possibility that specific gibberellin and abscisic acid signalling components perform similar functions in aleurone and other tissues.", "title": "" }, { "docid": "neg:1840074_1", "text": "A process is described to produce single sheets of functionalized graphene through thermal exfoliation of graphite oxide. The process yields a wrinkled sheet structure resulting from reaction sites involved in oxidation and reduction processes. The topological features of single sheets, as measured by atomic force microscopy, closely match predictions of first-principles atomistic modeling. Although graphite oxide is an insulator, functionalized graphene produced by this method is electrically conducting.", "title": "" }, { "docid": "neg:1840074_2", "text": "The aim of this study is to investigate the factors influencing the consumer acceptance of mobile banking in Bangladesh. The demographic, attitudinal, and behavioural characteristics of mobile bank users were examined. 292 respondents from seven major mobile financial service users of different mobile network operators participated in the consumer survey. Infrastructural facility, selfcontrol, social influence, perceived risk, ease of use, need for interaction, perceived usefulness, and customer service were found to influence consumer attitudes towards mobile banking services. The infrastructural facility of updated user friendly technology and its availability was found to be the most important factor that motivated consumers’ attitudes in Bangladesh towards mobile banking. The sample size was not necessarily representative of the Bangladeshi population as a whole as it ignored large rural population. This study identified two additional factors i.e. infrastructural facility and customer service relevant to mobile banking that were absent in previous researches. By addressing the concerns of and benefits sought by the consumers, marketers can create positive attractions and policy makers can set regulations for the expansion of mobile banking services in Bangladesh. This study offers an insight into mobile banking in Bangladesh focusing influencing factors, which has not previously been investigated.", "title": "" }, { "docid": "neg:1840074_3", "text": "It is widely accepted that mineral flotation is a very challenging control problem due to chaotic nature of process. This paper introduces a novel approach of combining multi-camera system and expert controllers to improve flotation performance. 
The system has been installed into the zinc circuit of Pyhäsalmi Mine (Finland). Long-term data analysis in fact shows that the new approach has improved considerably the recovery of the zinc circuit, resulting in a substantial increase in the mill’s annual profit. r 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840074_4", "text": "In this paper, we introduce a new technology, which allows people to share taste and smell sensations digitally with a remote person through existing networking technologies such as the Internet. By introducing this technology, we expect people to share their smell and taste experiences with their family and friends remotely. Sharing these senses are immensely beneficial since those are strongly associated with individual memories, emotions, and everyday experiences. As the initial step, we developed a control system, an actuator, which could digitally stimulate the sense of taste remotely. The system uses two approaches to stimulate taste sensations digitally: the electrical and thermal stimulations on tongue. Primary results suggested that sourness and saltiness are the main sensations that could be evoked through this device. Furthermore, this paper focuses on future aspects of such technology for remote smell actuation followed by applications and possibilities for further developments.", "title": "" }, { "docid": "neg:1840074_5", "text": "A full-vector beam propagation method based on a finite-element scheme for a helicoidal system is developed. The permittivity and permeability tensors of a straight waveguide are replaced with equivalent ones for a helicoidal system, obtained by transformation optics. A cylindrical, perfectly matched layer is implemented for the absorbing boundary condition. To treat wide-angle beam propagation, a second-order differentiation term with respect to the propagation direction is directly discretized without using a conventional Padé approximation. The transmission spectra of twisted photonic crystal fibers are thoroughly investigated, and it is found that the diameters of the air holes greatly affect the spectra. The calculated results are in good agreement with the recently reported measured results, showing the validity and usefulness of the method developed here.", "title": "" }, { "docid": "neg:1840074_6", "text": "During human evolutionary history, there were \"trade-offs\" between expending time and energy on child-rearing and mating, so both men and women evolved conditional mating strategies guided by cues signaling the circumstances. Many short-term matings might be successful for some men; others might try to find and keep a single mate, investing their effort in rearing her offspring. Recent evidence suggests that men with features signaling genetic benefits to offspring should be preferred by women as short-term mates, but there are trade-offs between a mate's genetic fitness and his willingness to help in child-rearing. It is these circumstances and the cues that signal them that underlie the variation in short- and long-term mating strategies between and within the sexes.", "title": "" }, { "docid": "neg:1840074_7", "text": "This paper demonstrates a computer-aided diagnosis (CAD) system for lung cancer classification of CT scans with unmarked nodules, a dataset from the Kaggle Data Science Bowl 2017. Thresholding was used as an initial segmentation approach to segment out lung tissue from the rest of the CT scan. Thresholding produced the next best lung segmentation. 
The initial approach was to directly feed the segmented CT scans into 3D CNNs for classification, but this proved to be inadequate. Instead, a modified U-Net trained on LUNA16 data (CT scans with labeled nodules) was used to first detect nodule candidates in the Kaggle CT scans. The U-Net nodule detection produced many false positives, so regions of CTs with segmented lungs where the most likely nodule candidates were located as determined by the U-Net output were fed into 3D Convolutional Neural Networks (CNNs) to ultimately classify the CT scan as positive or negative for lung cancer. The 3D CNNs produced a test set Accuracy of 86.6%. The performance of our CAD system outperforms the current CAD systems in literature which have several training and testing phases that each requires a lot of labeled data, while our CAD system has only three major phases (segmentation, nodule candidate detection, and malignancy classification), allowing more efficient training and detection and more generalizability to other cancers. Keywords—Lung Cancer; Computed Tomography; Deep Learning; Convolutional Neural Networks; Segmentation.", "title": "" }, { "docid": "neg:1840074_8", "text": "A wideband integrated RF duplexer supports 3G/4G bands I, II, III, IV, and IX, and achieves a TX-to-RX isolation of more than 55dB in the transmit-band, and greater than 45dB in the corresponding receive-band across 200MHz of bandwidth. A 65nm CMOS duplexer/LNA achieves a transmit insertion loss of 2.5dB, and a cascaded receiver noise figure of 5dB with more than 27dB of gain, exceeding the commercial external duplexers performance at considerably lower cost and area.", "title": "" }, { "docid": "neg:1840074_9", "text": "Information overload on the Web has created enormous challenges to customers selecting products for online purchases and to online businesses attempting to identify customers’ preferences efficiently. Various recommender systems employing different data representations and recommendation methods are currently used to address these challenges. In this research, we developed a graph model that provides a generic data representation and can support different recommendation methods. To demonstrate its usefulness and flexibility, we developed three recommendation methods: direct retrieval, association mining, and high-degree association retrieval. We used a data set from an online bookstore as our research test-bed. Evaluation results showed that combining product content information and historical customer transaction information achieved more accurate predictions and relevant recommendations than using only collaborative information. However, comparisons among different methods showed that high-degree association retrieval did not perform significantly better than the association mining method or the direct retrieval method in our test-bed.", "title": "" }, { "docid": "neg:1840074_10", "text": "This article traces the major trends in TESOL methods in the past 15 years. It focuses on the TESOL profession’s evolving perspectives on language teaching methods in terms of three perceptible shifts: (a) from communicative language teaching to task-based language teaching, (b) from method-based pedagogy to postmethod pedagogy, and (c) from systemic discovery to critical discourse. 
It is evident that during this transitional period, the profession has witnessed a heightened awareness about communicative and task-based language teaching, about the limitations of the concept of method, about possible postmethod pedagogies that seek to address some of the limitations of method, about the complexity of teacher beliefs that inform the practice of everyday teaching, and about the vitality of the macrostructures—social, cultural, political, and historical—that shape the microstructures of the language classroom. This article deals briefly with the changes and challenges the trend-setting transition seems to be bringing about in the profession’s collective thought and action.", "title": "" }, { "docid": "neg:1840074_11", "text": "In recent years, deep learning techniques have been developed to improve the performance of program synthesis from input-output examples. Albeit its significant progress, the programs that can be synthesized by state-of-the-art approaches are still simple in terms of their complexity. In this work, we move a significant step forward along this direction by proposing a new class of challenging tasks in the domain of program synthesis from input-output examples: learning a context-free parser from pairs of input programs and their parse trees. We show that this class of tasks are much more challenging than previously studied tasks, and the test accuracy of existing approaches is almost 0%. We tackle the challenges by developing three novel techniques inspired by three novel observations, which reveal the key ingredients of using deep learning to synthesize a complex program. First, the use of a non-differentiable machine is the key to effectively restrict the search space. Thus our proposed approach learns a neural program operating a domain-specific non-differentiable machine. Second, recursion is the key to achieve generalizability. Thus, we bake-in the notion of recursion in the design of our non-differentiable machine. Third, reinforcement learning is the key to learn how to operate the non-differentiable machine, but it is also hard to train the model effectively with existing reinforcement learning algorithms from a cold boot. We develop a novel two-phase reinforcement learningbased search algorithm to overcome this issue. In our evaluation, we show that using our novel approach, neural parsing programs can be learned to achieve 100% test accuracy on test inputs that are 500× longer than the training samples.", "title": "" }, { "docid": "neg:1840074_12", "text": "Reversible data hiding in encrypted images (RDHEI) is an effective technique to embed data in the encrypted domain. An original image is encrypted with a secret key and during or after its transmission, it is possible to embed additional information in the encrypted image, without knowing the encryption key or the original content of the image. During the decoding process, the secret message can be extracted and the original image can be reconstructed. In the last few years, RDHEI has started to draw research interest. Indeed, with the development of cloud computing, data privacy has become a real issue. However, none of the existing methods allow us to hide a large amount of information in a reversible manner. In this paper, we propose a new reversible method based on MSB (most significant bit) prediction with a very high capacity. 
We present two approaches, these are: high capacity reversible data hiding approach with correction of prediction errors and high capacity reversible data hiding approach with embedded prediction errors. With this method, regardless of the approach used, our results are better than those obtained with current state of the art methods, both in terms of reconstructed image quality and embedding capacity.", "title": "" }, { "docid": "neg:1840074_13", "text": "The need for curricular reform in K-4 mathematics is clear. Such reform must address both the content and emphasis of the curriculum as well as approaches to instruction. A longstanding preoccupation with computation and other traditional skills has dominated both what mathematics is taught and the way mathematics is taught at this level. As a result, the present K-4 curriculum is narrow in scope; fails to foster mathematical insight, reasoning, and problem solving; and emphasizes rote activities. Even more significant is that children begin to lose their belief that learning mathematics is a sense-making experience. They become passive receivers of rules and procedures rather than active participants in creating knowledge.", "title": "" }, { "docid": "neg:1840074_14", "text": "Process mining allows analysts to exploit logs of historical executions of business processes to extract insights regarding the actual performance of these processes. One of the most widely studied process mining operations is automated process discovery. An automated process discovery method takes as input an event log, and produces as output a business process model that captures the control-flow relations between tasks that are observed in or implied by the event log. Various automated process discovery methods have been proposed in the past two decades, striking different tradeoffs between scalability, accuracy, and complexity of the resulting models. However, these methods have been evaluated in an ad-hoc manner, employing different datasets, experimental setups, evaluation measures, and baselines, often leading to incomparable conclusions and sometimes unreproducible results due to the use of closed datasets. This article provides a systematic review and comparative evaluation of automated process discovery methods, using an open-source benchmark and covering 12 publicly-available real-life event logs, 12 proprietary real-life event logs, and nine quality metrics. The results highlight gaps and unexplored tradeoffs in the field, including the lack of scalability of some methods and a strong divergence in their performance with respect to the different quality metrics used.", "title": "" }, { "docid": "neg:1840074_15", "text": "Solid waste management (SWM) is a major public health and environmental concern in urban areas of many developing countries. Nairobi’s solid waste situation, which could be taken to generally represent the status which is largely characterized by low coverage of solid waste collection, pollution from uncontrolled dumping of waste, inefficient public services, unregulated and uncoordinated private sector and lack of key solid waste management infrastructure. This paper recapitulates on the public-private partnership as the best system for developing countries; challenges, approaches, practices or systems of SWM, and outcomes or advantages to the approach; the literature review focuses on surveying information pertaining to existing waste management methodologies, policies, and research relevant to the SWM. 
Information was sourced from peer-reviewed academic literature, grey literature, publicly available waste management plans, and through consultation with waste management professionals. Literature pertaining to SWM and municipal solid waste minimization, auditing and management was searched for through online journal databases, particularly Web of Science and Science Direct. Legislation pertaining to waste management was also researched using the different databases. Additional information was obtained from grey literature and textbooks pertaining to waste management topics. After conducting preliminary research, prevalent references of select sources were identified and scanned for additional relevant articles. Research was also expanded to include literature pertaining to recycling, composting, education, and case studies; the manuscript concludes with future recommendations in terms of collaborations of public/private partnerships, sensitization of people, privatization is important in improving processes and modernizing urban waste management, contract private sector, integrated waste management should be encouraged, provisional government leaders need to alter their mind set, prepare a strategic, integrated SWM plan for the cities, enact strong and adequate legislation at city and national level, evaluate the real impacts of waste management systems, utilizing locally based solutions for SWM service delivery and design, location, management of the waste collection centers and recycling and composting activities should be", "title": "" }, { "docid": "neg:1840074_16", "text": "This article recounts the development of radar signal processing at Lincoln Laboratory. The Laboratory’s significant efforts in this field were initially driven by the need to provide detected and processed signals for air and ballistic missile defense systems. The first processing work was on the Semi-Automatic Ground Environment (SAGE) air-defense system, which led to algorithms and techniques for detection of aircraft in the presence of clutter. This work was quickly followed by processing efforts in ballistic missile defense, first in surface-acoustic-wave technology, in concurrence with the initiation of radar measurements at the Kwajalein Missile Range, and then by exploitation of the newly evolving technology of digital signal processing, which led to important contributions for ballistic missile defense and Federal Aviation Administration applications. More recently, the Laboratory has pursued the computationally challenging application of adaptive processing for the suppression of jamming and clutter signals. This article discusses several important programs in these areas.", "title": "" }, { "docid": "neg:1840074_17", "text": "Recently completed whole-genome sequencing projects marked the transition from gene-based phylogenetic studies to phylogenomics analysis of entire genomes. We developed an algorithm MGRA for reconstructing ancestral genomes and used it to study the rearrangement history of seven mammalian genomes: human, chimpanzee, macaque, mouse, rat, dog, and opossum. MGRA relies on the notion of the multiple breakpoint graphs to overcome some limitations of the existing approaches to ancestral genome reconstructions.
MGRA also generates the rearrangement-based characters guiding the phylogenetic tree reconstruction when the phylogeny is unknown.", "title": "" }, { "docid": "neg:1840074_18", "text": "Dependability on AI models is of utmost importance to ensure full acceptance of the AI systems. One of the key aspects of the dependable AI system is to ensure that all its decisions are fair and not biased towards any individual. In this paper, we address the problem of detecting whether a model has an individual discrimination. Such a discrimination exists when two individuals who differ only in the values of their protected attributes (such as, gender/race) while the values of their non-protected ones are exactly the same, get different decisions. Measuring individual discrimination requires an exhaustive testing, which is infeasible for a nontrivial system. In this paper, we present an automated technique to generate test inputs, which is geared towards finding individual discrimination. Our technique combines the wellknown technique called symbolic execution along with the local explainability for generation of effective test cases. Our experimental results clearly demonstrate that our technique produces 3.72 times more successful test cases than the existing state-of-the-art across all our chosen benchmarks.", "title": "" }, { "docid": "neg:1840074_19", "text": "Next generation sequencing (NGS) technology has revolutionized genomic and genetic research. The pace of change in this area is rapid with three major new sequencing platforms having been released in 2011: Ion Torrent’s PGM, Pacific Biosciences’ RS and the Illumina MiSeq. Here we compare the results obtained with those platforms to the performance of the Illumina HiSeq, the current market leader. In order to compare these platforms, and get sufficient coverage depth to allow meaningful analysis, we have sequenced a set of 4 microbial genomes with mean GC content ranging from 19.3 to 67.7%. Together, these represent a comprehensive range of genome content. Here we report our analysis of that sequence data in terms of coverage distribution, bias, GC distribution, variant detection and accuracy. Sequence generated by Ion Torrent, MiSeq and Pacific Biosciences technologies displays near perfect coverage behaviour on GC-rich, neutral and moderately AT-rich genomes, but a profound bias was observed upon sequencing the extremely AT-rich genome of Plasmodium falciparum on the PGM, resulting in no coverage for approximately 30% of the genome. We analysed the ability to call variants from each platform and found that we could call slightly more variants from Ion Torrent data compared to MiSeq data, but at the expense of a higher false positive rate. Variant calling from Pacific Biosciences data was possible but higher coverage depth was required. Context specific errors were observed in both PGM and MiSeq data, but not in that from the Pacific Biosciences platform. All three fast turnaround sequencers evaluated here were able to generate usable sequence. However there are key differences between the quality of that data and the applications it will support.", "title": "" } ]
1840075
Neural Joking Machine : Humorous image captioning
[ { "docid": "pos:1840075_0", "text": "Humor is an integral part of human lives. Despite being tremendously impactful, it is perhaps surprising that we do not have a detailed understanding of humor yet. As interactions between humans and AI systems increase, it is imperative that these systems are taught to understand subtleties of human expressions such as humor. In this work, we are interested in the question - what content in a scene causes it to be funny? As a first step towards understanding visual humor, we analyze the humor manifested in abstract scenes and design computational models for them. We collect two datasets of abstract scenes that facilitate the study of humor at both the scene-level and the object-level. We analyze the funny scenes and explore the different types of humor depicted in them via human studies. We model two tasks that we believe demonstrate an understanding of some aspects of visual humor. The tasks involve predicting the funniness of a scene and altering the funniness of a scene. We show that our models perform well quantitatively, and qualitatively through human studies. Our datasets are publicly available.", "title": "" } ]
[ { "docid": "neg:1840075_0", "text": "In deterministic optimization, line searches are a standard tool ensuring stability and efficiency. Where only stochastic gradients are available, no direct equivalent has so far been formulated, because uncertain gradients do not allow for a strict sequence of decisions collapsing the search space. We construct a probabilistic line search by combining the structure of existing deterministic methods with notions from Bayesian optimization. Our method retains a Gaussian process surrogate of the univariate optimization objective, and uses a probabilistic belief over the Wolfe conditions to monitor the descent. The algorithm has very low computational cost, and no user-controlled parameters. Experiments show that it effectively removes the need to define a learning rate for stochastic gradient descent.", "title": "" }, { "docid": "neg:1840075_1", "text": "The paper presents a novel method and system for personalised (individualised) modelling of spatio/spectro-temporal data (SSTD) and prediction of events. A novel evolving spiking neural network reservoir system (eSNNr) is proposed for the purpose. The system consists of: spike-time encoding module of continuous value input information into spike trains; a recurrent 3D SNNr; eSNN as an evolving output classifier. Such system is generated for every new individual, using existing data of similar individuals. Subject to proper training and parameter optimisation, the system is capable of accurate spatiotemporal pattern recognition (STPR) and of early prediction of individual events. The method and the system are generic, applicable to various SSTD and classification and prediction problems. As a case study, the method is applied for early prediction of occurrence of stroke on an individual basis. Preliminary experiments demonstrated a significant improvement in accuracy and time of event prediction when using the proposed method when compared with standard machine learning methods, such as MLR, SVM, MLP. Future development and applications are discussed.", "title": "" }, { "docid": "neg:1840075_2", "text": "In this paper we suggest advanced IEEE 802.11ax TCP-aware scheduling strategies for optimizing the AP operation under transmission of unidirectional TCP traffic. Our scheduling strategies optimize the performance using the capability for Multi User transmissions over the Uplink, first introduced in IEEE 802.11ax, together with Multi User transmissions over the Downlink. They are based on Transmission Opportunities (TXOP) and we suggest three scheduling strategies determining the TXOP formation parameters. In one of the strategies one can control the achieved Goodput vs. the delay. We also assume saturated WiFi transmission queues. We show that with minimal Goodput degradation one can avoid considerable delays.", "title": "" }, { "docid": "neg:1840075_3", "text": "In this paper a general theory of multistage decimators and interpolators for sampling rate reduction and sampling rate increase is presented. A set of curves and the necessary relations for optimally designing multistage decimators is also given. It is shown that the processes of decimation and interpolation are duals and therefore the same set of design curves applies to both problems. Further, it is shown that highly efficient implementations of narrow-band finite impulse response (FIR) fiiters can be obtained by cascading the processes of decimation and interpolation. 
Examples show that the efficiencies obtained are comparable to those of recursive elliptic filter designs.", "title": "" }, { "docid": "neg:1840075_4", "text": "The aim of this paper is to research the effectiveness of SMS verification by understanding the correlation between notification and verification of flood early warning messages. This study contributes to the design of the dissemination techniques for SMS as an early warning messages. The metrics used in this study are using user perceptions of tasks, which include the ease of use (EOU) perception for using SMS and confidence with SMS skills perception, as well as, the users' positive perceptions, which include users' perception of usefulness and satisfaction perception towards using SMS as an early warning messages for floods. Experiments and surveys were conducted in flood-prone areas in Semarang, Indonesia. The results showed that the correlation is in users' perceptions of tasks for the confidence with skill.", "title": "" }, { "docid": "neg:1840075_5", "text": "With the growing problem of childhood obesity, recent research has begun to focus on family and social influences on children's eating patterns. Research has demonstrated that children's eating patterns are strongly influenced by characteristics of both the physical and social environment. With regard to the physical environment, children are more likely to eat foods that are available and easily accessible, and they tend to eat greater quantities when larger portions are provided. Additionally, characteristics of the social environment, including various socioeconomic and sociocultural factors such as parents' education, time constraints, and ethnicity influence the types of foods children eat. Mealtime structure is also an important factor related to children's eating patterns. Mealtime structure includes social and physical characteristics of mealtimes including whether families eat together, TV-viewing during meals, and the source of foods (e.g., restaurants, schools). Parents also play a direct role in children's eating patterns through their behaviors, attitudes, and feeding styles. Interventions aimed at improving children's nutrition need to address the variety of social and physical factors that influence children's eating patterns.", "title": "" }, { "docid": "neg:1840075_6", "text": "How should we measure metacognitive (\"type 2\") sensitivity, i.e. the efficacy with which observers' confidence ratings discriminate between their own correct and incorrect stimulus classifications? We argue that currently available methods are inadequate because they are influenced by factors such as response bias and type 1 sensitivity (i.e. ability to distinguish stimuli). Extending the signal detection theory (SDT) approach of Galvin, Podd, Drga, and Whitmore (2003), we propose a method of measuring type 2 sensitivity that is free from these confounds. We call our measure meta-d', which reflects how much information, in signal-to-noise units, is available for metacognition. Applying this novel method in a 2-interval forced choice visual task, we found that subjects' metacognitive sensitivity was close to, but significantly below, optimality. We discuss the theoretical implications of these findings, as well as related computational issues of the method. 
We also provide free Matlab code for implementing the analysis.", "title": "" }, { "docid": "neg:1840075_7", "text": "This project aims at studying how recent interactive and interactions technologies would help extend how we play the guitar, thus defining the “multimodal guitar”. Our contributions target three main axes: audio analysis, gestural control and audio synthesis. For this purpose, we designed and developed a freely-available toolbox for augmented guitar performances, compliant with the PureData and Max/MSP environments, gathering tools for: polyphonic pitch estimation, fretboard visualization and grouping, pressure sensing, modal synthesis, infinite sustain, rearranging looping and “smart” harmonizing.", "title": "" }, { "docid": "neg:1840075_8", "text": "Brown adipose tissue (BAT) is the main site of adaptive thermogenesis and experimental studies have associated BAT activity with protection against obesity and metabolic diseases, such as type 2 diabetes mellitus and dyslipidaemia. Active BAT is present in adult humans and its activity is impaired in patients with obesity. The ability of BAT to protect against chronic metabolic disease has traditionally been attributed to its capacity to utilize glucose and lipids for thermogenesis. However, BAT might also have a secretory role, which could contribute to the systemic consequences of BAT activity. Several BAT-derived molecules that act in a paracrine or autocrine manner have been identified. Most of these factors promote hypertrophy and hyperplasia of BAT, vascularization, innervation and blood flow, processes that are all associated with BAT recruitment when thermogenic activity is enhanced. Additionally, BAT can release regulatory molecules that act on other tissues and organs. This secretory capacity of BAT is thought to be involved in the beneficial effects of BAT transplantation in rodents. Fibroblast growth factor 21, IL-6 and neuregulin 4 are among the first BAT-derived endocrine factors to be identified. In this Review, we discuss the current understanding of the regulatory molecules (the so-called brown adipokines or batokines) that are released by BAT that influence systemic metabolism and convey the beneficial metabolic effects of BAT activation. The identification of such adipokines might also direct drug discovery approaches for managing obesity and its associated chronic metabolic diseases.", "title": "" }, { "docid": "neg:1840075_9", "text": "BACKGROUND\nHealth promotion organizations are increasingly embracing social media technologies to engage end users in a more interactive way and to widely disseminate their messages with the aim of improving health outcomes. However, such technologies are still in their early stages of development and, thus, evidence of their efficacy is limited.\n\n\nOBJECTIVE\nThe study aimed to provide a current overview of the evidence surrounding consumer-use social media and mobile software apps for health promotion interventions, with a particular focus on the Australian context and on health promotion targeted toward an Indigenous audience. Specifically, our research questions were: (1) What is the peer-reviewed evidence of benefit for social media and mobile technologies used in health promotion, intervention, self-management, and health service delivery, with regard to smoking cessation, sexual health, and otitis media? 
and (2) What social media and mobile software have been used in Indigenous-focused health promotion interventions in Australia with respect to smoking cessation, sexual health, or otitis media, and what is the evidence of their effectiveness and benefit?\n\n\nMETHODS\nWe conducted a scoping study of peer-reviewed evidence for the effectiveness of social media and mobile technologies in health promotion (globally) with respect to smoking cessation, sexual health, and otitis media. A scoping review was also conducted for Australian uses of social media to reach Indigenous Australians and mobile apps produced by Australian health bodies, again with respect to these three areas.\n\n\nRESULTS\nThe review identified 17 intervention studies and seven systematic reviews that met inclusion criteria, which showed limited evidence of benefit from these interventions. We also found five Australian projects with significant social media health components targeting the Indigenous Australian population for health promotion purposes, and four mobile software apps that met inclusion criteria. No evidence of benefit was found for these projects.\n\n\nCONCLUSIONS\nAlthough social media technologies have the unique capacity to reach Indigenous Australians as well as other underserved populations because of their wide and instant disseminability, evidence of their capacity to do so is limited. Current interventions are neither evidence-based nor widely adopted. Health promotion organizations need to gain a more thorough understanding of their technologies, who engages with them, why they engage with them, and how, in order to be able to create successful social media projects.", "title": "" }, { "docid": "neg:1840075_10", "text": "[1] The Mw 6.6, 26 December 2003 Bam (Iran) earthquake was one of the first earthquakes for which Envisat advanced synthetic aperture radar (ASAR) data were available. Using interferograms and azimuth offsets from ascending and descending tracks, we construct a three-dimensional displacement field of the deformation due to the earthquake. Elastic dislocation modeling shows that the observed deformation pattern cannot be explained by slip on a single planar fault, which significantly underestimates eastward and upward motions SE of Bam. We find that the deformation pattern observed can be best explained by slip on two subparallel faults. Eighty-five percent of moment release occurred on a previously unknown strike-slip fault running into the center of Bam, with peak slip of over 2 m occurring at a depth of 5 km. The remainder occurred as a combination of strike-slip and thrusting motion on a southward extension of the previously mapped Bam Fault 5 km to the east.", "title": "" }, { "docid": "neg:1840075_11", "text": "The vast quantity of information brought by big data as well as the evolving computer hardware encourages success stories in the machine learning community. In the meanwhile, it poses challenges for the Gaussian process (GP), a well-known non-parametric and interpretable Bayesian model, which suffers from cubic complexity to training size. To improve the scalability while retaining the desirable prediction quality, a variety of scalable GPs have been presented. But they have not yet been comprehensively reviewed and discussed in a unifying way in order to be well understood by both academia and industry. 
To this end, this paper devotes to reviewing state-of-theart scalable GPs involving two main categories: global approximations which distillate the entire data and local approximations which divide the data for subspace learning. Particularly, for global approximations, we mainly focus on sparse approximations comprising prior approximations which modify the prior but perform exact inference, and posterior approximations which retain exact prior but perform approximate inference; for local approximations, we highlight the mixture/product of experts that conducts model averaging from multiple local experts to boost predictions. To present a complete review, recent advances for improving the scalability and model capability of scalable GPs are reviewed. Finally, the extensions and open issues regarding the implementation of scalable GPs in various scenarios are reviewed and discussed to inspire novel ideas for future research avenues.", "title": "" }, { "docid": "neg:1840075_12", "text": "This paper builds on the idea that private sector logistics can and should be applied to improve the performance of disaster logistics but that before embarking on this the private sector needs to understand the core capabilities of humanitarian logistics. With this in mind, the paper walks us through the complexities of managing supply chains in humanitarian settings. It pinpoints the cross learning potential for both the humanitarian and private sectors in emergency relief operations as well as possibilities of getting involved through corporate social responsibility. It also outlines strategies for better preparedness and the need for supply chains to be agile, adaptable and aligned—a core competency of many humanitarian organizations involved in disaster relief and an area which the private sector could draw on to improve their own competitive edge. Finally, the article states the case for closer collaboration between humanitarians, businesses and academics to achieve better and more effective supply chains to respond to the complexities of today’s logistics be it the private sector or relieving the lives of those blighted by disaster. Journal of the Operational Research Society (2006) 57, 475–489. doi:10.1057/palgrave.jors.2602125 Published online 14 December 2005", "title": "" }, { "docid": "neg:1840075_13", "text": "The hypothesis that cancer is driven by tumour-initiating cells (popularly known as cancer stem cells) has recently attracted a great deal of attention, owing to the promise of a novel cellular target for the treatment of haematopoietic and solid malignancies. Furthermore, it seems that tumour-initiating cells might be resistant to many conventional cancer therapies, which might explain the limitations of these agents in curing human malignancies. Although much work is still needed to identify and characterize tumour-initiating cells, efforts are now being directed towards identifying therapeutic strategies that could target these cells. This Review considers recent advances in the cancer stem cell field, focusing on the challenges and opportunities for anticancer drug discovery.", "title": "" }, { "docid": "neg:1840075_14", "text": "Using survey responses collected via the Internet from a U.S. national probability sample of gay, lesbian, and bisexual adults (N = 662), this article reports prevalence estimates of criminal victimization and related experiences based on the target's sexual orientation. 
Approximately 20% of respondents reported having experienced a person or property crime based on their sexual orientation; about half had experienced verbal harassment, and more than 1 in 10 reported having experienced employment or housing discrimination. Gay men were significantly more likely than lesbians or bisexuals to experience violence and property crimes. Employment and housing discrimination were significantly more likely among gay men and lesbians than among bisexual men and women. Implications for future research and policy are discussed.", "title": "" }, { "docid": "neg:1840075_15", "text": "This paper presents algorithms and a prototype system for hand tracking and hand posture recognition. Hand postures are represented in terms of hierarchies of multi-scale colour image features at different scales, with qualitative inter-relations in terms of scale, position and orientation. In each image, detection of multi-scale colour features is performed. Hand states are then simultaneously detected and tracked using particle filtering, with an extension of layered sampling referred to as hierarchical layered sampling. Experiments are presented showing that the performance of the system is substantially improved by performing feature detection in colour space and including a prior with respect to skin colour. These components have been integrated into a real-time prototype system, applied to a test problem of controlling consumer electronics using hand gestures. In a simplified demo scenario, this system has been successfully tested by participants at two fairs during 2001.", "title": "" }, { "docid": "neg:1840075_16", "text": "This work, concerning paraphrase identification task, on one hand contributes to expanding deep learning embeddings to include continuous and discontinuous linguistic phrases. On the other hand, it comes up with a new scheme TF-KLD-KNN to learn the discriminative weights of words and phrases specific to paraphrase task, so that a weighted sum of embeddings can represent sentences more effectively. Based on these two innovations we get competitive state-of-the-art performance on paraphrase identification.", "title": "" }, { "docid": "neg:1840075_17", "text": "The mucous gel maintains a neutral microclimate at the epithelial cell surface, which may play a role in both the prevention of gastroduodenal injury and the provision of an environment essential for epithelial restitution and regeneration after injury. Enhancement of the components of the mucous barrier by sucralfate may explain its therapeutic efficacy for upper gastrointestinal tract protection, repai, and healing. We studied the effect of sucralfate and its major soluble component, sucrose octasulfate (SOS), on the synthesis and release of gastric mucin and surface active phospholipid, utilizing an isolated canine gastric mucous cells in culture. We correlated these results with the effect of the agents on mucin synthesis and secretion utilizing explants of canine fundusin vitro. Sucralfate and SOS significantly stimulated phospholipid secretion by isolated canine mucous cells in culture (123% and 112% of control, respectively.) Indomethacin pretreatment siginificantly inhibited the effect of sucralfate, but not SOS, on the stimulation of phospholipid release. Administration of either sucralfate or SOS to the isolated canine mucous cells had no effect upon mucin synthesis or secretion using a sensitive immunoassay. 
Sucralfate and SOS did not stimulate mucin release in the canine explants; sucralfate significantly stimulated the synthesis of mucin, but only to 108% of that observed in untreated explants. No increase in PGE2 release was observed after sucralfate or SOS exposure to the isolated canine mucous cells. Our results suggest sucralfate affects the mucus barrier largely in a qualitative manner. No increase in mucin secretion or major effect on synthesis was notd, although a significant increase in surface active phospholipid release was observed. The lack of dose dependency of this effect, along with the results of the PGE2 assay, suggests the drug may act through a non-receptor-mediated mechanism to perturb the cell membrane and release surface active phospholipid. The enhancement of phospholipid release by sucralfate to augment the barrier function of gastric mucus may, in concert with other effects of the drug, strrengthen mucosal barrier function.", "title": "" }, { "docid": "neg:1840075_18", "text": "This present study is designed to propose a conceptual framework extended from the previously advanced Theory of Acceptance Model (TAM). The framework makes it possible to examine the effects of social media, and perceived risk as the moderating effects between intention and actual purchase to be able to advance the Theory of Acceptance Model (TAM). 400 samples will be randomly selected among Saudi in Jeddah, Dammam and Riyadh. Data will be collected using questionnaire survey. As the research involves the analysis of numerical data, the assessment is carried out using Structural Equation Model (SEM). The hypothesis will be tested and the result is used to explain the proposed TAM. The findings from the present study will be beneficial for marketers to understand the intrinsic behavioral factors that influence consumers' selection hence avoid trial and errors in their advertising drives.", "title": "" }, { "docid": "neg:1840075_19", "text": "Prior work has shown that return oriented programming (ROP) can be used to bypass W⊕X, a software defense that stops shellcode, by reusing instructions from large libraries such as libc. Modern operating systems have since enabled address randomization (ASLR), which randomizes the location of libc, making these techniques unusable in practice. However, modern ASLR implementations leave smaller amounts of executable code unrandomized and it has been unclear whether an attacker can use these small code fragments to construct payloads in the general case. In this paper, we show defenses as currently deployed can be bypassed with new techniques for automatically creating ROP payloads from small amounts of unrandomized code. We propose using semantic program verification techniques for identifying the functionality of gadgets, and design a ROP compiler that is resistant to missing gadget types. To demonstrate our techniques, we build Q, an end-to-end system that automatically generates ROP payloads for a given binary. Q can produce payloads for 80% of Linux /usr/bin programs larger than 20KB. We also show that Q can automatically perform exploit hardening: given an exploit that crashes with defenses on, Q outputs an exploit that bypasses both W⊕X and ASLR. We show that Q can harden nine realworld Linux and Windows exploits, enabling an attacker to automatically bypass defenses as deployed by industry for those programs.", "title": "" } ]
1840076
Continuum regression for cross-modal multimedia retrieval
[ { "docid": "pos:1840076_0", "text": "The problem of joint modeling the text and image components of multimedia documents is studied. The text component is represented as a sample from a hidden topic model, learned with latent Dirichlet allocation, and images are represented as bags of visual (SIFT) features. Two hypotheses are investigated: that 1) there is a benefit to explicitly modeling correlations between the two components, and 2) this modeling is more effective in feature spaces with higher levels of abstraction. Correlations between the two components are learned with canonical correlation analysis. Abstraction is achieved by representing text and images at a more general, semantic level. The two hypotheses are studied in the context of the task of cross-modal document retrieval. This includes retrieving the text that most closely matches a query image, or retrieving the images that most closely match a query text. It is shown that accounting for cross-modal correlations and semantic abstraction both improve retrieval accuracy. The cross-modal model is also shown to outperform state-of-the-art image retrieval systems on a unimodal retrieval task.", "title": "" }, { "docid": "pos:1840076_1", "text": "Partial Least Squares (PLS) is a wide class of methods for modeling relations between sets of observed variables by means of latent variables. It comprises of regression and classification tasks as well as dimension reduction techniques and modeling tools. The underlying assumption of all PLS methods is that the observed data is generated by a system or process which is driven by a small number of latent (not directly observed or measured) variables. Projections of the observed data to its latent structure by means of PLS was developed by Herman Wold and coworkers [48, 49, 52]. PLS has received a great amount of attention in the field of chemometrics. The algorithm has become a standard tool for processing a wide spectrum of chemical data problems. The success of PLS in chemometrics resulted in a lot of applications in other scientific areas including bioinformatics, food research, medicine, pharmacology, social sciences, physiology–to name but a few [28, 25, 53, 29, 18, 22]. This chapter introduces the main concepts of PLS and provides an overview of its application to different data analysis problems. Our aim is to present a concise introduction, that is, a valuable guide for anyone who is concerned with data analysis. In its general form PLS creates orthogonal score vectors (also called latent vectors or components) by maximising the covariance between different sets of variables. PLS dealing with two blocks of variables is considered in this chapter, although the PLS extensions to model relations among a higher number of sets exist [44, 46, 47, 48, 39]. PLS is similar to Canonical Correlation Analysis (CCA) where latent vectors with maximal correlation are extracted [24]. There are different PLS techniques to extract latent vectors, and each of them gives rise to a variant of PLS. PLS can be naturally extended to regression problems. The predictor and predicted (response) variables are each considered as a block of variables. PLS then extracts the score vectors which serve as a new predictor representation", "title": "" } ]
[ { "docid": "neg:1840076_0", "text": "Deep learning techniques are famous due to Its capability to cope with large-scale data these days. They have been investigated within various of applications e.g., language, graphical modeling, speech, audio, image recognition, video, natural language and signal processing areas. In addition, extensive researches applying machine-learning methods in Intrusion Detection System (IDS) have been done in both academia and industry. However, huge data and difficulties to obtain data instances are hot challenges to machine-learning-based IDS. We show some limitations of previous IDSs which uses classic machine learners and introduce feature learning including feature construction, extraction and selection to overcome the challenges. We discuss some distinguished deep learning techniques and its application for IDS purposes. Future research directions using deep learning techniques for IDS purposes are briefly summarized.", "title": "" }, { "docid": "neg:1840076_1", "text": "Goal-oriented spoken dialogue systems have been the most prominent component in todays virtual personal assistants, which allow users to speak naturally in order to finish tasks more efficiently. The advancement of deep learning technologies has recently risen the applications of neural models to dialogue modeling. However, applying deep learning technologies for building robust and scalable dialogue systems is still a challenging task and an open research area as it requires deeper understanding of the classic pipelines as well as detailed knowledge of the prior work and the recent state-of-the-art work. Therefore, this tutorial is designed to focus on an overview of dialogue system development while describing most recent research for building dialogue systems, and summarizing the challenges, in order to allow researchers to study the potential improvements of the state-of-the-art dialogue systems. The tutorial material is available at http://deepdialogue.miulab.tw. 1 Tutorial Overview With the rising trend of artificial intelligence, more and more devices have incorporated goal-oriented spoken dialogue systems. Among popular virtual personal assistants, Microsoft’s Cortana, Apple’s Siri, Amazon Alexa, and Google Assistant have incorporated dialogue system modules in various devices, which allow users to speak naturally in order to finish tasks more efficiently. Traditional conversational systems have rather complex and/or modular pipelines. The advancement of deep learning technologies has recently risen the applications of neural models to dialogue modeling. Nevertheless, applying deep learning technologies for building robust and scalable dialogue systems is still a challenging task and an open research area as it requires deeper understanding of the classic pipelines as well as detailed knowledge on the benchmark of the models of the prior work and the recent state-of-the-art work. The goal of this tutorial is to provide the audience with the developing trend of dialogue systems, and a roadmap to get them started with the related work. The first section motivates the work on conversationbased intelligent agents, in which the core underlying system is task-oriented dialogue systems. The following section describes different approaches using deep learning for each component in the dialogue system and how it is evaluated. The last two sections focus on discussing the recent trends and current challenges on dialogue system technology and summarize the challenges and conclusions. 
The detailed content is described as follows. 2 Dialogue System Basics This section will motivate the work on conversation-based intelligent agents, in which the core underlying system is task-oriented spoken dialogue systems. The section starts with an overview of the standard pipeline framework for dialogue system illustrated in Figure 1 (Tur and De Mori, 2011). Basic components of a dialog system are automatic speech recognition (ASR), language understanding (LU), dialogue management (DM), and natural language generation (NLG) (Rudnicky et al., 1999; Zue et al., 2000; Zue and Glass, 2000). This tutorial will mainly focus on LU, DM, and NLG parts.", "title": "" }, { "docid": "neg:1840076_2", "text": "The academic community has published millions of research papers to date, and the number of new papers has been increasing with time. To discover new research, researchers typically rely on manual methods such as keyword-based search, reading proceedings of conferences, browsing publication lists of known experts, or checking the references of the papers they are interested. Existing tools for the literature search are suitable for a first-level bibliographic search. However, they do not allow complex second-level searches. In this paper, we present a web service called TheAdvisor (http://theadvisor.osu.edu) which helps the users to build a strong bibliography by extending the document set obtained after a first-level search. The service makes use of the citation graph for recommendation. It also features diversification, relevance feedback, graphical visualization, venue and reviewer recommendation. In this work, we explain the design criteria and rationale we employed to make the TheAdvisor a useful and scalable web service along with a thorough experimental evaluation.", "title": "" }, { "docid": "neg:1840076_3", "text": "We propose a system for easily preparing arbitrary wide-area environments for subsequent real-time tracking with a handheld device. Our system evaluation shows that minimal user effort is required to initialize a camera tracking session in an unprepared environment. We combine panoramas captured using a handheld omnidirectional camera from several viewpoints to create a point cloud model. After the offline modeling step, live camera pose tracking is initialized by feature point matching, and continuously updated by aligning the point cloud model to the camera image. Given a reconstruction made with less than five minutes of video, we achieve below 25 cm translational error and 0.5 degrees rotational error for over 80% of images tested. In contrast to camera-based simultaneous localization and mapping (SLAM) systems, our methods are suitable for handheld use in large outdoor spaces.", "title": "" }, { "docid": "neg:1840076_4", "text": "Bad presentation of medical statistics such as the risks associated with a particular intervention can lead to patients making poor decisions on treatment. Particularly confusing are single event probabilities, conditional probabilities (such as sensitivity and specificity), and relative risks. How can doctors improve the presentation of statistical information so that patients can make well informed decisions?", "title": "" }, { "docid": "neg:1840076_5", "text": "As we move towards large-scale object detection, it is unrealistic to expect annotated training data for all object classes at sufficient scale, and so methods capable of unseen object detection are required. 
We propose a novel zero-shot method based on training an end-to-end model that fuses semantic attribute prediction with visual features to propose object bounding boxes for seen and unseen classes. While we utilize semantic features during training, our method is agnostic to semantic information for unseen classes at test-time. Our method retains the efficiency and effectiveness of YOLO [1] for objects seen during training, while improving its performance for novel and unseen objects. The ability of state-of-art detection methods to learn discriminative object features to reject background proposals also limits their performance for unseen objects. We posit that, to detect unseen objects, we must incorporate semantic information into the visual domain so that the learned visual features reflect this information and leads to improved recall rates for unseen objects. We test our method on PASCAL VOC and MS COCO dataset and observed significant improvements on the average precision of unseen classes.", "title": "" }, { "docid": "neg:1840076_6", "text": "Volleyball players are at high risk of overuse shoulder injuries, with spike biomechanics a perceived risk factor. This study compared spike kinematics between elite male volleyball players with and without a history of shoulder injuries. Height, mass, maximum jump height, passive shoulder rotation range of motion (ROM), and active trunk ROM were collected on elite players with (13) and without (11) shoulder injury history and were compared using independent samples t tests (P < .05). The average of spike kinematics at impact and range 0.1 s before and after impact during down-the-line and cross-court spike types were compared using linear mixed models in SPSS (P < .01). No differences were detected between the injured and uninjured groups. Thoracic rotation and shoulder abduction at impact and range of shoulder rotation velocity differed between spike types. The ability to tolerate the differing demands of the spike types could be used as return-to-play criteria for injured athletes.", "title": "" }, { "docid": "neg:1840076_7", "text": "Given only a few image-text pairs, humans can learn to detect semantic concepts and describe the content. For machine learning algorithms, they usually require a lot of data to train a deep neural network to solve the problem. However, it is challenging for the existing systems to generalize well to the few-shot multi-modal scenario, because the learner should understand not only images and texts but also their relationships from only a few examples. In this paper, we tackle two multi-modal problems, i.e., image captioning and visual question answering (VQA), in the few-shot setting.\n We propose Fast Parameter Adaptation for Image-Text Modeling (FPAIT) that learns to learn jointly understanding image and text data by a few examples. In practice, FPAIT has two benefits. (1) Fast learning ability. FPAIT learns proper initial parameters for the joint image-text learner from a large number of different tasks. When a new task comes, FPAIT can use a small number of gradient steps to achieve a good performance. (2) Robust to few examples. In few-shot tasks, the small training data will introduce large biases in Convolutional Neural Networks (CNN) and damage the learner's performance. FPAIT leverages dynamic linear transformations to alleviate the side effects of the small training set. In this way, FPAIT flexibly normalizes the features and thus reduces the biases during training. 
Quantitatively, FPAIT achieves superior performance on both few-shot image captioning and VQA benchmarks.", "title": "" }, { "docid": "neg:1840076_8", "text": "We present our overall third ranking solution for the KDD Cup 2010 on educational data mining. The goal of the competition was to predict a student’s ability to answer questions correctly, based on historic results. In our approach we use an ensemble of collaborative filtering techniques, as used in the field of recommender systems and adopt them to fit the needs of the competition. The ensemble of predictions is finally blended, using a neural network.", "title": "" }, { "docid": "neg:1840076_9", "text": "Nearly all our buildings and workspaces are protected against fire breaks, which may occur due to some fault in the electric circuitries and power sources. The immediate alarming and aid to extinguish the fire in such situations of fire breaks are provided using embedded systems installed in the buildings. But as the area being monitored against such fire threats becomes vast, these systems do not provide a centralized solution. For the protection of such a huge area, like a college campus or an industrial park, a centralized wireless fire control system using Wireless sensor network technology is developed. The system developed connects the five dangers prone zones of the campus with a central control room through a ZigBee communication interface such that in case of any fire break in any of the building, a direct communication channel is developed that will send an immediate signal to the control room. In case if any of the emergency zone lies out of reach of the central node, multi hoping technique is adopted for the effective transmitting of the signal. The five nodes maintains a wireless interlink among themselves as well as with the central node for this purpose. Moreover a hooter is attached along with these nodes to notify the occurrence of any fire break such that the persons can leave the building immediately and with the help of the signal received in the control room, the exact building where the fire break occurred is identified and fire extinguishing is done. The real time system developed is implemented in Atmega32 with temperature, fire and humidity sensors and ZigBee module.", "title": "" }, { "docid": "neg:1840076_10", "text": "Each time a latency in responding to a stimulus is measured, we owe a debt to F. C. Donders, who in the mid-19th century made the fundamental discovery that the time required to perform a mental computation reveals something fundamental about how the mind works. Donders expressed the idea in the following simple and optimistic statement about the feasibility of measuring the mind: “Will all quantitative treatment of mental processes be out of the question then? By no means! An important factor seemed to be susceptible to measurement: I refer to the time required for simple mental processes” (Donders, 1868/1969, pp. 413–414). With particular variations of simple stimuli and subjects’ choices, Donders demonstrated that it is possible to bring order to understanding invisible thought processes by computing the time that elapses between stimulus presentation and response production. 
A more specific observation he offered lies at the center of our own modern understanding of mental operations:", "title": "" }, { "docid": "neg:1840076_11", "text": "With predictions that this nursing shortage will be more severe and have a longer duration than has been previously experienced, traditional strategies implemented by employers will have limited success. The aging nursing workforce, low unemployment, and the global nature of this shortage compound the usual factors that contribute to nursing shortages. For sustained change and assurance of an adequate supply of nurses, solutions must be developed in several areas: education, healthcare deliver systems, policy and regulations, and image. This shortage is not solely nursing's issue and requires a collaborative effort among nursing leaders in practice and education, health care executives, government, and the media. This paper poses several ideas of solutions, some already underway in the United States, as a catalyst for readers to initiate local programs.", "title": "" }, { "docid": "neg:1840076_12", "text": "Over the last decade, the zebrafish has entered the field of cardiovascular research as a new model organism. This is largely due to a number of highly successful small- and large-scale forward genetic screens, which have led to the identification of zebrafish mutants with cardiovascular defects. Genetic mapping and identification of the affected genes have resulted in novel insights into the molecular regulation of vertebrate cardiac development. More recently, the zebrafish has become an attractive model to study the effect of genetic variations identified in patients with cardiovascular defects by candidate gene or whole-genome-association studies. Thanks to an almost entirely sequenced genome and high conservation of gene function compared with humans, the zebrafish has proved highly informative to express and study human disease-related gene variants, providing novel insights into human cardiovascular disease mechanisms, and highlighting the suitability of the zebrafish as an excellent model to study human cardiovascular diseases. In this review, I discuss recent discoveries in the field of cardiac development and specific cases in which the zebrafish has been used to model human congenital and acquired cardiac diseases.", "title": "" }, { "docid": "neg:1840076_13", "text": "Artificial Intelligence methods are becoming very popular in medical applications due to high reliability and ease. From the past decades, Artificial Intelligence techniques such as Artificial Neural Networks, Fuzzy Expert Systems, Robotics etc have found an increased usage in disease diagnosis, patient monitoring, disease risk evaluation, predicting effect of new medicines and robotic handling of surgeries. This paper presents an introduction and survey on different artificial intelligence methods used by researchers for the application of diagnosing or predicting Hypertension. Keywords-Hypertension, Artificial Neural Networks, Fuzzy Systems.", "title": "" }, { "docid": "neg:1840076_14", "text": "Darier's disease is characterized by dense keratotic lesions in the seborrheic areas of the body such as scalp, forehead, nasolabial folds, trunk and inguinal region. It is a rare genodermatosis, an autosomal dominant inherited disease that may be associated with neuropsichiatric disorders. It is caused by ATPA2 gene mutation, presenting cutaneous and dermatologic expressions. 
Psychiatric symptoms are depression, suicidal attempts, and bipolar affective disorder. We report a case of Darier's disease in a 48-year-old female patient presenting severe cutaneous and psychiatric manifestations.", "title": "" }, { "docid": "neg:1840076_15", "text": "Text summarization is the process of creating a shorter version of one or more text documents. Automatic text summarization has become an important way of finding relevant information in large text libraries or in the Internet. Extractive text summarization techniques select entire sentences from documents according to some criteria to form a summary. Sentence scoring is the technique most used for extractive text summarization, today. Depending on the context, however, some techniques may yield better results than some others. This paper advocates the thesis that the quality of the summary obtained with combinations of sentence scoring methods depend on text subject. Such hypothesis is evaluated using three different contexts: news, blogs and articles. The results obtained show the validity of the hypothesis formulated and point at which techniques are more effective in each of those contexts studied.", "title": "" }, { "docid": "neg:1840076_16", "text": "How to efficiently train recurrent networks remains a challenging and active research topic. Most of the proposed training approaches are based on computational ways to efficiently obtain the gradient of the error function, and can be generally grouped into five major groups. In this study we present a derivation that unifies these approaches. We demonstrate that the approaches are only five different ways of solving a particular matrix equation. The second goal of this paper is develop a new algorithm based on the insights gained from the novel formulation. The new algorithm, which is based on approximating the error gradient, has lower computational complexity in computing the weight update than the competing techniques for most typical problems. In addition, it reaches the error minimum in a much smaller number of iterations. A desirable characteristic of recurrent network training algorithms is to be able to update the weights in an on-line fashion. We have also developed an on-line version of the proposed algorithm, that is based on updating the error gradient approximation in a recursive manner.", "title": "" }, { "docid": "neg:1840076_17", "text": "Embedding and visualizing large-scale high-dimensional data in a two-dimensional space is an important problem, because such visualization can reveal deep insights of complex data. However, most of the existing embedding approaches run on an excessively high precision, even when users want to obtain a brief insight from a visualization of large-scale datasets, ignoring the fact that in the end, the outputs are embedded onto a fixed-range pixel-based screen space. Motivated by this observation and directly considering the properties of screen space in an embedding algorithm, we propose Pixel-Aligned Stochastic Neighbor Embedding (PixelSNE), a highly efficient screen resolution-driven 2D embedding method which accelerates Barnes-Hut treebased t-distributed stochastic neighbor embedding (BH-SNE), which is known to be a state-of-the-art 2D embedding method. 
Our experimental results show a significantly faster running time for PixelSNE compared to BH-SNE for various datasets while maintaining comparable embedding quality.", "title": "" }, { "docid": "neg:1840076_18", "text": "Employees are the most important asset of the organization. It’s a major challenge for the organization to retain its workforce as a lot of cost is incurred on them directly or indirectly. In order to have competitive advantage over the other organizations, the focus has to be on the employees. As ultimately the employees are the face of the organization as they are the building blocks of the organization. Thus their retention is a major area of concern. So attempt has been made to reduce the turnover rate of the organization. Therefore this paper attempts to review the various antecedents of turnover which affect turnover intentions of the employees.", "title": "" } ]
1840077
Kernel Methods on Riemannian Manifolds with Gaussian RBF Kernels
[ { "docid": "pos:1840077_0", "text": "The symmetric positive definite (SPD) matrices have been widely used in image and vision problems. Recently there are growing interests in studying sparse representation (SR) of SPD matrices, motivated by the great success of SR for vector data. Though the space of SPD matrices is well-known to form a Lie group that is a Riemannian manifold, existing work fails to take full advantage of its geometric structure. This paper attempts to tackle this problem by proposing a kernel based method for SR and dictionary learning (DL) of SPD matrices. We disclose that the space of SPD matrices, with the operations of logarithmic multiplication and scalar logarithmic multiplication defined in the Log-Euclidean framework, is a complete inner product space. We can thus develop a broad family of kernels that satisfies Mercer's condition. These kernels characterize the geodesic distance and can be computed efficiently. We also consider the geometric structure in the DL process by updating atom matrices in the Riemannian space instead of in the Euclidean space. The proposed method is evaluated with various vision problems and shows notable performance gains over state-of-the-arts.", "title": "" }, { "docid": "pos:1840077_1", "text": "We address the problem of tracking and recognizing faces in real-world, noisy videos. We track faces using a tracker that adaptively builds a target model reflecting changes in appearance, typical of a video setting. However, adaptive appearance trackers often suffer from drift, a gradual adaptation of the tracker to non-targets. To alleviate this problem, our tracker introduces visual constraints using a combination of generative and discriminative models in a particle filtering framework. The generative term conforms the particles to the space of generic face poses while the discriminative one ensures rejection of poorly aligned targets. This leads to a tracker that significantly improves robustness against abrupt appearance changes and occlusions, critical for the subsequent recognition phase. Identity of the tracked subject is established by fusing pose-discriminant and person-discriminant features over the duration of a video sequence. This leads to a robust video-based face recognizer with state-of-the-art recognition performance. We test the quality of tracking and face recognition on real-world noisy videos from YouTube as well as the standard Honda/UCSD database. Our approach produces successful face tracking results on over 80% of all videos without video or person-specific parameter tuning. The good tracking performance induces similarly high recognition rates: 100% on Honda/UCSD and over 70% on the YouTube set containing 35 celebrities in 1500 sequences.", "title": "" }, { "docid": "pos:1840077_2", "text": "We address the problem of comparing sets of images for object recognition, where the sets may represent variations in an object's appearance due to changing camera pose and lighting conditions. canonical correlations (also known as principal or canonical angles), which can be thought of as the angles between two d-dimensional subspaces, have recently attracted attention for image set matching. Canonical correlations offer many benefits in accuracy, efficiency, and robustness compared to the two main classical methods: parametric distribution-based and nonparametric sample-based matching of sets. 
Here, this is first demonstrated experimentally for reasonably sized data sets using existing methods exploiting canonical correlations. Motivated by their proven effectiveness, a novel discriminative learning method over sets is proposed for set classification. Specifically, inspired by classical linear discriminant analysis (LDA), we develop a linear discriminant function that maximizes the canonical correlations of within-class sets and minimizes the canonical correlations of between-class sets. Image sets transformed by the discriminant function are then compared by the canonical correlations. Classical orthogonal subspace method (OSM) is also investigated for the similar purpose and compared with the proposed method. The proposed method is evaluated on various object recognition problems using face image sets with arbitrary motion captured under different illuminations and image sets of 500 general objects taken at different views. The method is also applied to object category recognition using ETH-80 database. The proposed method is shown to outperform the state-of-the-art methods in terms of accuracy and efficiency", "title": "" } ]
[ { "docid": "neg:1840077_0", "text": "We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.", "title": "" }, { "docid": "neg:1840077_1", "text": "Deep neural networks have been widely adopted for automatic organ segmentation from abdominal CT scans. However, the segmentation accuracy of some small organs (e.g., the pancreas) is sometimes below satisfaction, arguably because deep networks are easily disrupted by the complex and variable background regions which occupies a large fraction of the input volume. In this paper, we formulate this problem into a fixed-point model which uses a predicted segmentation mask to shrink the input region. This is motivated by the fact that a smaller input region often leads to more accurate segmentation. In the training process, we use the ground-truth annotation to generate accurate input regions and optimize network weights. On the testing stage, we fix the network parameters and update the segmentation results in an iterative manner. We evaluate our approach on the NIH pancreas segmentation dataset, and outperform the state-of-the-art by more than 4%, measured by the average Dice-Sørensen Coefficient (DSC). In addition, we report 62.43% DSC in the worst case, which guarantees the reliability of our approach in clinical applications.", "title": "" }, { "docid": "neg:1840077_2", "text": " Proposed method – Max-RGB & Gray-World • Instantiations of Minkowski norm – Optimal illuminant estimate • L6 norm: Working best overall", "title": "" }, { "docid": "neg:1840077_3", "text": "The Internet architecture uses congestion avoidance mechanisms implemented in the transport layer protocol like TCP to provide good service under heavy load. If network nodes distribute bandwidth fairly, the Internet would be more robust and accommodate a wide variety of applications. Various congestion and bandwidth management schemes have been proposed for this purpose and can be classified into two broad categories: packet scheduling algorithms such as fair queueing (FQ) which explicitly provide bandwidth shares by scheduling packets. They are more difficult to implement compared to FIFO queueing. The second category has active queue management schemes such as RED which use FIFO queues at the routers. They are easy to implement but don't aim to provide (and, in the presence of non-congestion-responsive sources, don't provide) fairness. An algorithm called AFD (approximate fair dropping), has been proposed to provide approximate, weighted max-min fair bandwidth allocations with relatively low complexity. AFD has since been widely adopted by the industry. This paper describes the evolution of AFD from a research project into an industry setting, focusing on the changes it has undergone in the process. 
AFD now serves as a traffic management module, which can be implemented either using a single FIFO or overlaid on top of extant per-flow queueing structures and which provides approximate bandwidth allocation in a simple fashion. The AFD algorithm has been implemented in several switch and router platforms at Cisco sytems, successfully transitioning from the academic world into the industry.", "title": "" }, { "docid": "neg:1840077_4", "text": "Radio frequency identification (RFID) has been identified as a crucial technology for the modern 21 st century knowledge-based economy. Many businesses started realising RFID to be able to improve their operational efficiency, achieve additional cost savings, and generate opportunities for higher revenues. To investigate how RFID technology has brought an impact to warehousing, a comprehensive analysis of research findings available through leading scientific article databases was conducted. Articles from years 1995 to 2010 were reviewed and analysed according to warehouse operations, RFID application domains, and benefits achieved. This paper presents four discussion topics covering RFID innovation, including its applications, perceived benefits, obstacles to its adoption and future trends. This is aimed at elucidating the current state of RFID in the warehouse and giving insights for the academics to establish new research scope and for the practitioners to evaluate their assessment of adopting RFID in the warehouse.", "title": "" }, { "docid": "neg:1840077_5", "text": "Modeling and generating graphs is fundamental for studying networks in biology, engineering, and social sciences. However, modeling complex distributions over graphs and then efficiently sampling from these distributions is challenging due to the non-unique, high-dimensional nature of graphs and the complex, non-local dependencies that exist between edges in a given graph. Here we propose GraphRNN, a deep autoregressive model that addresses the above challenges and approximates any distribution of graphs with minimal assumptions about their structure. GraphRNN learns to generate graphs by training on a representative set of graphs and decomposes the graph generation process into a sequence of node and edge formations, conditioned on the graph structure generated so far. In order to quantitatively evaluate the performance of GraphRNN, we introduce a benchmark suite of datasets, baselines and novel evaluation metrics based on Maximum Mean Discrepancy, which measure distances between sets of graphs. Our experiments show that GraphRNN significantly outperforms all baselines, learning to generate diverse graphs that match the structural characteristics of a target set, while also scaling to graphs 50× larger than previous deep models.", "title": "" }, { "docid": "neg:1840077_6", "text": " Abstract— In this paper is presented an investigation of the speech recognition classification performance. This investigation on the speech recognition classification performance is performed using two standard neural networks structures as the classifier. The utilized standard neural network types include Feed-forward Neural Network (NN) with back propagation algorithm and a Radial Basis Functions Neural Networks.", "title": "" }, { "docid": "neg:1840077_7", "text": "In this paper, we propose a cross-lingual convolutional neural network (CNN) model that is based on word and phrase embeddings learned from unlabeled data in two languages and dependency grammar. 
Compared to traditional machine translation (MT) based methods for cross lingual sentence modeling, our model is much simpler and does not need parallel corpora or language specific features. We only use a bilingual dictionary and dependency parser. This makes our model particularly appealing for resource poor languages. We evaluate our model using English and Chinese data on several sentence classification tasks. We show that our model achieves a comparable and even better performance than the traditional MT-based method.", "title": "" }, { "docid": "neg:1840077_8", "text": "Requirements engineering encompasses many difficult, overarching problems inherent to its subareas of process, elicitation, specification, analysis, and validation. Requirements engineering researchers seek innovative, effective means of addressing these problems. One powerful tool that can be added to the researcher toolkit is that of machine learning. Some researchers have been experimenting with their own implementations of machine learning algorithms or with those available as part of the Weka machine learning software suite. There are some shortcomings to using “one off” solutions. It is the position of the authors that many problems exist in requirements engineering that can be supported by Weka's machine learning algorithms, specifically by classification trees. Further, the authors posit that adoption will be boosted if machine learning is easy to use and is integrated into requirements research tools, such as TraceLab. Toward that end, an initial concept validation of a component in TraceLab is presented that applies the Weka classification trees. The component is demonstrated on two different requirements engineering problems. Finally, insights gained on using the TraceLab Weka component on these two problems are offered.", "title": "" }, { "docid": "neg:1840077_9", "text": "Up to the time when a huge corruption scandal, popularly labeled tangentopoli”(bribe city), brought down the political establishment that had ruled Italy for several decades, that country had reported one of the largest shares of capital spending in GDP among the OECD countries. After the scandal broke out and several prominent individuals were sent to jail, or even committed suicide, capital spending fell sharply. The fall seems to have been caused by a reduction in the number of capital projects being undertaken and, perhaps more importantly, by a sharp fall in the costs of the projects still undertaken. Information released by Transparency International (TI) reports that, within the space of two or three years, in the city of Milan, the city where the scandal broke out in the first place, the cost of city rail links fell by 52 percent, the cost of one kilometer of subway fell by 57 percent, and the budget for the new airport terminal was reduced by 59 percent to reflect the lower construction costs. Although one must be aware of the logical fallacy of post hoc, ergo propter hoc, the connection between the two events is too strong to be attributed to a coincidence. In fact, this paper takes the view that it could not have been a coincidence.", "title": "" }, { "docid": "neg:1840077_10", "text": "Trust models have been recently suggested as an effective security mechanism for Wireless Sensor Networks (WSNs). Considerable research has been done on modeling trust. 
However, most current research work only takes communication behavior into account to calculate sensor nodes' trust value, which is not enough for trust evaluation due to the widespread malicious attacks. In this paper, we propose an Efficient Distributed Trust Model (EDTM) for WSNs. First, according to the number of packets received by sensor nodes, direct trust and recommendation trust are selectively calculated. Then, communication trust, energy trust and data trust are considered during the calculation of direct trust. Furthermore, trust reliability and familiarity are defined to improve the accuracy of recommendation trust. The proposed EDTM can evaluate trustworthiness of sensor nodes more precisely and prevent the security breaches more effectively. Simulation results show that EDTM outperforms other similar models, e.g., NBBTE trust model.", "title": "" }, { "docid": "neg:1840077_11", "text": "Errors and discrepancies in radiology practice are uncomfortably common, with an estimated day-to-day rate of 3-5% of studies reported, and much higher rates reported in many targeted studies. Nonetheless, the meaning of the terms \"error\" and \"discrepancy\" and the relationship to medical negligence are frequently misunderstood. This review outlines the incidence of such events, the ways they can be categorized to aid understanding, and potential contributing factors, both human- and system-based. Possible strategies to minimise error are considered, along with the means of dealing with perceived underperformance when it is identified. The inevitability of imperfection is explained, while the importance of striving to minimise such imperfection is emphasised.\n\n\nTEACHING POINTS\n• Discrepancies between radiology reports and subsequent patient outcomes are not inevitably errors. • Radiologist reporting performance cannot be perfect, and some errors are inevitable. • Error or discrepancy in radiology reporting does not equate negligence. • Radiologist errors occur for many reasons, both human- and system-derived. • Strategies exist to minimise error causes and to learn from errors made.", "title": "" }, { "docid": "neg:1840077_12", "text": "One of the most widely studied systems of argumentation is the one described by Dung in a paper from 1995. Unfortunately, this framework does not allow for joint attacks on arguments, which we argue must be required of any truly abstract argumentation framework. A few frameworks can be said to allow for such interactions among arguments, but for various reasons we believe that these are inadequate for modelling argumentation systems with joint attacks. In this paper we propose a generalization of the framework of Dung, which allows for sets of arguments to attack other arguments. We extend the semantics associated with the original framework to this generalization, and prove that all results in the paper by Dung have an equivalent in this more abstract framework.", "title": "" }, { "docid": "neg:1840077_13", "text": "To assess the likelihood of procedural success in patients with multivessel coronary disease undergoing percutaneous coronary angioplasty, 350 consecutive patients (1,100 stenoses) from four clinical sites were evaluated. Eighteen variables characterizing the severity and morphology of each stenosis and 18 patient-related variables were assessed at a core angiographic laboratory and at the clinical sites. Most patients had Canadian Cardiovascular Society class III or IV angina (72%) and two-vessel coronary disease (78%). 
Left ventricular function was generally well preserved (mean ejection fraction, 58 +/- 12%; range, 18-85%) and 1.9 +/- 1.0 stenoses per patient had attempted percutaneous coronary angioplasty. Procedural success (less than or equal to 50% final diameter stenosis in one or more stenoses and no major ischemic complications) was achieved in 290 patients (82.8%), and an additional nine patients (2.6%) had a reduction in diameter stenosis by 20% or more with a final diameter stenosis 51-60% and were without major complications. Major ischemic complications (death, myocardial infarction, or emergency bypass surgery) occurred in 30 patients (8.6%). In-hospital mortality was 1.1%. Stepwise regression analysis determined that a modified American College of Cardiology/American Heart Association Task Force (ACC/AHA) classification of the primary target stenosis (with type B prospectively divided into type B1 [one type B characteristic] and type B2 [greater than or equal to two type B characteristics]) and the presence of diabetes mellitus were the only variables independently predictive of procedural outcome (target stenosis modified ACC/AHA score; p less than 0.001 for both success and complications; diabetes mellitus: p = 0.003 for success and p = 0.016 for complications). Analysis of success and complications on a per stenosis dilated basis showed, for type A stenoses, a 92% success and a 2% complication rate; for type B1 stenoses, an 84% success and a 4% complication rate; for type B2 stenoses, a 76% success and a 10% complication rate; and for type C stenoses, a 61% success and a 21% complication rate. The subdivision into types B1 and B2 provided significantly more information in this clinically important intermediate risk group than did the standard ACC/AHA scheme. The stenosis characteristics of chronic total occlusion, high grade (80-99% diameter) stenosis, stenosis bend of more than 60 degrees, and excessive tortuosity were particularly predictive of adverse procedural outcome. This improved scheme may improve clinical decision making and provide a framework on which to base meaningful subgroup analysis in randomized trials assessing the efficacy of percutaneous coronary angioplasty.", "title": "" }, { "docid": "neg:1840077_14", "text": "OBJECTIVE\nThis study assesses the psychometric properties of Ward's seven-subtest short form (SF) for WAIS-IV in a sample of adults with schizophrenia (SZ) and schizoaffective disorder.\n\n\nMETHOD\nSeventy patients diagnosed with schizophrenia or schizoaffective disorder were administered the full version of the WAIS-IV. Four different versions of the Ward's SF were then calculated. The subtests used were: Similarities, Digit Span, Arithmetic, Information, Coding, Picture Completion, and Block Design (BD version) or Matrix Reasoning (MR version). Prorated and regression-based formulae were assessed for each version.\n\n\nRESULTS\nThe actual and estimated factorial indexes reflected the typical pattern observed in schizophrenia. The four SFs correlated significantly with their full-version counterparts, but the Perceptual Reasoning Index (PRI) correlated below the acceptance threshold for all four versions. The regression-derived estimates showed larger differences compared to the full form. The four forms revealed comparable but generally low clinical category agreement rates for factor indexes. 
All SFs showed an acceptable reliability, but they were not correlated with clinical outcomes.\n\n\nCONCLUSIONS\nThe WAIS-IV SF offers a good estimate of WAIS-IV intelligence quotient, which is consistent with previous results. Although the overall scores are comparable between the four versions, the prorated forms provided a better estimation of almost all indexes. MR can be used as an alternative for BD without substantially changing the psychometric properties of the SF. However, we recommend a cautious use of these abbreviated forms when it is necessary to estimate the factor index scores, especially PRI, and Processing Speed Index.", "title": "" }, { "docid": "neg:1840077_15", "text": "The Internet of Things (IoT) is constantly evolving and is giving unique solutions to the everyday problems faced by man. “Smart City” is one such implementation aimed at improving the lifestyle of human beings. One of the major hurdles in most cities is its solid waste management, and effective management of the solid waste produced becomes an integral part of a smart city. This paper aims at providing an IoT based architectural solution to tackle the problems faced by the present solid waste management system. By providing a complete IoT based system, the process of tracking, collecting, and managing the solid waste can be easily automated and monitored efficiently. By taking the example of the solid waste management crisis of Bengaluru city, India, we have come up with the overall system architecture and protocol stack to give a IoT based solution to improve the reliability and efficiency of the system. By making use of sensors, we collect data from the garbage bins and send them to a gateway using LoRa technology. The data from various garbage bins are collected by the gateway and sent to the cloud over the Internet using the MQTT (Message Queue Telemetry Transport) protocol. The main advantage of the proposed system is the use of LoRa technology for data communication which enables long distance data transmission along with low power consumption as compared to Wi-Fi, Bluetooth or Zigbee.", "title": "" }, { "docid": "neg:1840077_16", "text": "Business analytics (BA) systems are an important strategic investment for many organisations and can potentially contribute significantly to firm performance. Establishing strong BA capabilities is currently one of the major concerns of chief information officers. This research project aims to develop a BA capability maturity model (BACMM). The BACMM will help organisations to scope and evaluate their BA initiatives. This research-in-progress paper describes the current BACMM, relates it to existing capability maturity models and explains its theoretical base. It also discusses the design science research approach being used to develop the BACMM and provides details of further work within the research project. Finally, the paper concludes with a discussion of how the BACMM might be used in practice.", "title": "" }, { "docid": "neg:1840077_17", "text": "Convolutional Neural Networks (CNNs) exhibit remarkable performance in various machine learning tasks. As sensor-equipped internet of things (IoT) devices permeate into every aspect of modern life, it is increasingly important to run CNN inference, a computationally intensive application, on resource constrained devices. We present a technique for fast and energy-efficient CNN inference on mobile SoC platforms, which are projected to be a major player in the IoT space. 
We propose techniques for efficient parallelization of CNN inference targeting mobile GPUs, and explore the underlying tradeoffs. Experiments with running Squeezenet on three different mobile devices confirm the effectiveness of our approach. For further study, please refer to the project repository available on our GitHub page: https://github.com/mtmd/Mobile ConvNet.", "title": "" }, { "docid": "neg:1840077_18", "text": "This paper presents two different Ku-Band Low-Profile antenna concepts for Mobile Satellite Communications. The antennas are based on low-cost hybrid mechanical-electronic steerable solutions but, while the first one allows a broadband reception of a satellite signal (Receive-only antenna concept), the second one provides transmit and receive functions for a bi-directional communication link between the satellite and the mobile user terminal (Transmit-Receive antenna). Both examples are suitable for integration in land vehicles and aircrafts.", "title": "" }, { "docid": "neg:1840077_19", "text": "OBJECTIVE\nBecause adverse drug events (ADEs) are a serious health problem and a leading cause of death, it is of vital importance to identify them correctly and in a timely manner. With the development of Web 2.0, social media has become a large data source for information on ADEs. The objective of this study is to develop a relation extraction system that uses natural language processing techniques to effectively distinguish between ADEs and non-ADEs in informal text on social media.\n\n\nMETHODS AND MATERIALS\nWe develop a feature-based approach that utilizes various lexical, syntactic, and semantic features. Information-gain-based feature selection is performed to address high-dimensional features. Then, we evaluate the effectiveness of four well-known kernel-based approaches (i.e., subset tree kernel, tree kernel, shortest dependency path kernel, and all-paths graph kernel) and several ensembles that are generated by adopting different combination methods (i.e., majority voting, weighted averaging, and stacked generalization). All of the approaches are tested using three data sets: two health-related discussion forums and one general social networking site (i.e., Twitter).\n\n\nRESULTS\nWhen investigating the contribution of each feature subset, the feature-based approach attains the best area under the receiver operating characteristics curve (AUC) values, which are 78.6%, 72.2%, and 79.2% on the three data sets. When individual methods are used, we attain the best AUC values of 82.1%, 73.2%, and 77.0% using the subset tree kernel, shortest dependency path kernel, and feature-based approach on the three data sets, respectively. When using classifier ensembles, we achieve the best AUC values of 84.5%, 77.3%, and 84.5% on the three data sets, outperforming the baselines.\n\n\nCONCLUSIONS\nOur experimental results indicate that ADE extraction from social media can benefit from feature selection. With respect to the effectiveness of different feature subsets, lexical features and semantic features can enhance the ADE extraction capability. Kernel-based approaches, which can stay away from the feature sparsity issue, are qualified to address the ADE extraction problem. Combining different individual classifiers using suitable combination methods can further enhance the ADE extraction effectiveness.", "title": "" } ]
1840078
Visual Analytics for MOOC Data
[ { "docid": "pos:1840078_0", "text": "Today it is quite common for people to exchange hundreds of comments in online conversations (e.g., blogs). Often, it can be very difficult to analyze and gain insights from such long conversations. To address this problem, we present a visual text analytic system that tightly integrates interactive visualization with novel text mining and summarization techniques to fulfill information needs of users in exploring conversations. At first, we perform a user requirement analysis for the domain of blog conversations to derive a set of design principles. Following these principles, we present an interface that visualizes a combination of various metadata and textual analysis results, supporting the user to interactively explore the blog conversations. We conclude with an informal user evaluation, which provides anecdotal evidence about the effectiveness of our system and directions for further design.", "title": "" } ]
[ { "docid": "neg:1840078_0", "text": "Since the first online demonstration of Neural Machine Translation (NMT) by LISA (Bahdanau et al., 2014), NMT development has recently moved from laboratory to production systems as demonstrated by several entities announcing rollout of NMT engines to replace their existing technologies. NMT systems have a large number of training configurations and the training process of such systems is usually very long, often a few weeks, so role of experimentation is critical and important to share. In this work, we present our approach to production-ready systems simultaneously with release of online demonstrators covering a large variety of languages ( 12 languages, for32 language pairs). We explore different practical choices: an efficient and evolutive open-source framework; data preparation; network architecture; additional implemented features; tuning for production; etc. We discuss about evaluation methodology, present our first findings and we finally outline further work. Our ultimate goal is to share our expertise to build competitive production systems for ”generic” translation. We aim at contributing to set up a collaborative framework to speed-up adoption of the technology, foster further research efforts and enable the delivery and adoption to/by industry of use-case specific engines integrated in real production workflows. Mastering of the technology would allow us to build translation engines suited for particular needs, outperforming current simplest/uniform systems.", "title": "" }, { "docid": "neg:1840078_1", "text": "BACKGROUND\nIn theory, infections that arise after female genital mutilation (FGM) in childhood might ascend to the internal genitalia, causing inflammation and scarring and subsequent tubal-factor infertility. Our aim was to investigate this possible association between FGM and primary infertility.\n\n\nMETHODS\nWe did a hospital-based case-control study in Khartoum, Sudan, to which we enrolled women (n=99) with primary infertility not caused by hormonal or iatrogenic factors (previous abdominal surgery), or the result of male-factor infertility. These women underwent diagnostic laparoscopy. Our controls were primigravidae women (n=180) recruited from antenatal care. We used exact conditional logistic regression, stratifying for age and controlling for socioeconomic status, level of education, gonorrhoea, and chlamydia, to compare these groups with respect to FGM.\n\n\nFINDINGS\nOf the 99 infertile women examined, 48 had adnexal pathology indicative of previous inflammation. After controlling for covariates, these women had a significantly higher risk than controls of having undergone the most extensive form of FGM, involving the labia majora (odds ratio 4.69, 95% CI 1.49-19.7). Among women with primary infertility, both those with tubal pathology and those with normal laparoscopy findings were at a higher risk than controls of extensive FGM, both with borderline significance (p=0.054 and p=0.055, respectively). The anatomical extent of FGM, rather than whether or not the vulva had been sutured or closed, was associated with primary infertility.\n\n\nINTERPRETATION\nOur findings indicate a positive association between the anatomical extent of FGM and primary infertility. Laparoscopic postinflammatory adnexal changes are not the only explanation for this association, since cases without such pathology were also affected. 
The association between FGM and primary infertility is highly relevant for preventive work against this ancient practice.", "title": "" }, { "docid": "neg:1840078_2", "text": "Probabilistic topic models are a suite of algorithms whose aim is to discover the hidden thematic structure in large archives of documents. In this article, we review the main ideas of this field, survey the current state-of-the-art, and describe some promising future directions. We first describe latent Dirichlet allocation (LDA) [8], which is the simplest kind of topic model. We discuss its connections to probabilistic modeling, and describe two kinds of algorithms for topic discovery. We then survey the growing body of research that extends and applies topic models in interesting ways. These extensions have been developed by relaxing some of the statistical assumptions of LDA, incorporating meta-data into the analysis of the documents, and using similar kinds of models on a diversity of data types such as social networks, images and genetics. Finally, we give our thoughts as to some of the important unexplored directions for topic modeling. These include rigorous methods for checking models built for data exploration, new approaches to visualizing text and other high dimensional data, and moving beyond traditional information engineering applications towards using topic models for more scientific ends.", "title": "" }, { "docid": "neg:1840078_3", "text": "The use of Bayesian methods for data analysis is creating a revolution in fields ranging from genetics to marketing. Yet, results of our literature review, including more than 10,000 articles published in 15 journals from January 2001 and December 2010, indicate that Bayesian approaches are essentially absent from the organizational sciences. Our article introduces organizational science researchers to Bayesian methods and describes why and how they should be used. We use multiple linear regression as the framework to offer a step-by-step demonstration, including the use of software, regarding how to implement Bayesian methods. We explain and illustrate how to determine the prior distribution, compute the posterior distribution, possibly accept the null value, and produce a write-up describing the entire Bayesian process, including graphs, results, and their interpretation. We also offer a summary of the advantages of using Bayesian analysis and examples of how specific published research based on frequentist analysis-based approaches failed to benefit from the advantages offered by a Bayesian approach and how using Bayesian analyses would have led to richer and, in some cases, different substantive conclusions. We hope that our article will serve as a catalyst for the adoption of Bayesian methods in organizational science research.", "title": "" }, { "docid": "neg:1840078_4", "text": "Elevated liver enzymes are a common scenario encountered by physicians in clinical practice. For many physicians, however, evaluation of such a problem in patients presenting with no symptoms can be challenging. Evidence supporting a standardized approach to evaluation is lacking. Although alterations of liver enzymes could be a normal physiological phenomenon in certain cases, it may also reflect potential liver injury in others, necessitating its further assessment and management. In this article, we provide a guide to primary care clinicians to interpret abnormal elevation of liver enzymes in asymptomatic patients using a step-wise algorithm. 
Adopting a schematic approach that classifies enzyme alterations on the basis of pattern (hepatocellular, cholestatic and isolated hyperbilirubinemia), we review an approach to abnormal alteration of liver enzymes within each section, the most common causes of enzyme alteration, and suggest initial investigations.", "title": "" }, { "docid": "neg:1840078_5", "text": "The purpose of this paper is to propose MATLAB/Simulink simulators for PV cell/module/array based on the two-diode model of a PV cell. This model is known to have better accuracy at low irradiance levels, which allows for more accurate prediction of PV system performance. To reduce computational time, the input parameters are reduced as the values of Rs and Rp are estimated by an efficient iteration method. Furthermore, all of the inputs to the simulators are information available on a standard PV module datasheet. The present paper first presents a brief introduction to the behavior and functioning of a PV device and writes the basic equations of the two-diode model, without the intention of providing an in-depth analysis of the photovoltaic phenomena and the semiconductor physics. The introduction on PV devices is followed by the modeling and simulation of PV cell/PV module/PV array, which is the main subject of this paper. A MATLAB/Simulink-based simulation study of PV cell/PV module/PV array is carried out and presented. The simulation model makes use of the two-diode model basic circuit equations of the PV solar cell, taking the effect of sunlight irradiance and cell temperature on the output current I-V characteristic and output power P-V characteristic into consideration. A particular typical 50W solar panel was used for model evaluation. The simulation results, compared with points taken directly from the data sheet and curves published by the manufacturers, show excellent correspondence with the model.", "title": "" }, { "docid": "neg:1840078_6", "text": "The successful integration of Information and Communications Technology (ICT) into the teaching and learning of English Language is largely dependent on the level of the teacher’s ICT competence, the actual utilization of ICT in the language classroom and factors that challenge teachers to use it in language teaching. The study therefore assessed the Secondary School English language teachers’ ICT literacy, the extent of ICT utilization in English language teaching and the challenges that prevent language teachers from integrating ICT in teaching. To answer the problems, three sets of survey questionnaires were distributed to 30 English teachers in the 11 schools of Cluster 1 (CarCanMadCarLan). Data gathered were analyzed using descriptive statistics and frequency count. The results revealed that the teachers’ ICT literacy was moderate. The findings provided evidence that there was only a limited use of ICT in language teaching. Feedback gathered from questionnaires shows that teachers faced many challenges that demotivate them from using ICT in language activities. Based on these findings, it is recommended that teachers be provided with intensive ICT-based training to equip them with knowledge of ICT and its utilization in language teaching. School administrators as well as stakeholders may look for interventions to upgrade the school’s ICT-based resources for optimum use in teaching and learning. Most importantly, a larger school-wide ICT development plan may be implemented to ensure coherence of ICT implementation in the teaching-learning activities. 
", "title": "" }, { "docid": "neg:1840078_7", "text": "SwitchKV is a new key-value store system design that combines high-performance cache nodes with resource-constrained backend nodes to provide load balancing in the face of unpredictable workload skew. The cache nodes absorb the hottest queries so that no individual backend node is over-burdened. Compared with previous designs, SwitchKV exploits SDN techniques and deeply optimized switch hardware to enable efficient content-based routing. Programmable network switches keep track of cached keys and route requests to the appropriate nodes at line speed, based on keys encoded in packet headers. A new hybrid caching strategy keeps cache and switch forwarding rules updated with low overhead and ensures that system load is always well-balanced under rapidly changing workloads. Our evaluation results demonstrate that SwitchKV can achieve up to 5× throughput and 3× latency improvements over traditional system designs.", "title": "" }, { "docid": "neg:1840078_8", "text": "Intuitive and efficient retrieval of motion capture data is essential for effective use of motion capture databases. In this paper, we describe a system that allows the user to retrieve a particular sequence by performing an approximation of the motion with an instrumented puppet. This interface is intuitive because both adults and children have experience playacting with puppets and toys to express particular behaviors or to tell stories with style and emotion. The puppet has 17 degrees of freedom and can therefore represent a variety of motions. We develop a novel similarity metric between puppet and human motion by computing the reconstruction errors of the puppet motion in the latent space of the human motion and those of the human motion in the latent space of the puppet motion. This metric works even for relatively large databases. We conducted a user study of the system and subjects could find the desired motion with reasonable accuracy from a database consisting of everyday, exercise, and acrobatic behaviors.", "title": "" }, { "docid": "neg:1840078_9", "text": "Two of the most important outcomes of learning analytics are predicting students’ learning and providing effective feedback. Learning Management Systems (LMS), which are widely used to support online and face-to-face learning, provide extensive research opportunities with detailed records of background data regarding users’ behaviors. The purpose of this study was to investigate the effects of undergraduate students’ LMS learning behaviors on their academic achievements. In line with this purpose, the participating students’ online learning behaviors in LMS were examined by using learning analytics for 14 weeks, and the relationship between students’ behaviors and their academic achievements was analyzed, followed by an analysis of their views about the influence of LMS on their academic achievement. The present study, in which quantitative and qualitative data were collected, was carried out with the explanatory mixed method. A total of 71 undergraduate students participated in the study. The results revealed that the students used LMSs as a support to face-to-face education more intensively on course days (at the beginning of the related lessons and at nights on course days) and that they activated the content elements the most. 
Lastly, almost all the students agreed that LMSs helped increase their academic achievement only when LMSs included such features as effectiveness, interaction, reinforcement, attractive design, social media support, and accessibility.", "title": "" }, { "docid": "neg:1840078_10", "text": "There are several approaches for automated functional web testing and the choice among them depends on a number of factors, including the tools used for web testing and the costs associated with their adoption. In this paper, we present an empirical cost/benefit analysis of two different categories of automated functional web testing approaches: (1) capture-replay web testing (in particular, using Selenium IDE); and, (2) programmable web testing (using Selenium WebDriver). On a set of six web applications, we evaluated the costs of applying these testing approaches both when developing the initial test suites from scratch and when the test suites are maintained, upon the release of a new software version. Results indicate that, on the one hand, the development of the test suites is more expensive in terms of time required (between 32% and 112%) when the programmable web testing approach is adopted, but on the other hand, test suite maintenance is less expensive when this approach is used (with a saving between 16% and 51%). We found that, in the majority of the cases, after a small number of releases (from one to three), the cumulative cost of programmable web testing becomes lower than the cost involved with capture-replay web testing and the cost saving gets amplified over the successive releases.", "title": "" }, { "docid": "neg:1840078_11", "text": "Recent work in machine learning for information extraction has focused on two distinct sub-problems: the conventional problem of filling template slots from natural language text, and the problem of wrapper induction, learning simple extraction procedures (“wrappers”) for highly structured text such as Web pages produced by CGI scripts. For suitably regular domains, existing wrapper induction algorithms can efficiently learn wrappers that are simple and highly accurate, but the regularity bias of these algorithms makes them unsuitable for most conventional information extraction tasks. Boosting is a technique for improving the performance of a simple machine learning algorithm by repeatedly applying it to the training set with different example weightings. We describe an algorithm that learns simple, low-coverage wrapper-like extraction patterns, which we then apply to conventional information extraction problems using boosting. The result is BWI, a trainable information extraction system with a strong precision bias and F1 performance better than state-of-the-art techniques in many domains.", "title": "" }, { "docid": "neg:1840078_12", "text": "Simultaneous Localization And Mapping (SLAM) is a fundamental problem in mobile robotics. While sparse point-based SLAM methods provide accurate camera localization, the generated maps lack semantic information. On the other hand, state of the art object detection methods provide rich information about entities present in the scene from a single image. This work incorporates a real-time deep-learned object detector to the monocular SLAM framework for representing generic objects as quadrics that permit detections to be seamlessly integrated while allowing the real-time performance. 
Finer reconstruction of an object, learned by a CNN network, is also incorporated and provides a shape prior for the quadric leading further refinement. To capture the dominant structure of the scene, additional planar landmarks are detected by a CNN-based plane detector and modelled as landmarks in the map. Experiments show that the introduced plane and object landmarks and the associated constraints, using the proposed monocular plane detector and incorporated object detector, significantly improve camera localization and lead to a richer semantically more meaningful map.", "title": "" }, { "docid": "neg:1840078_13", "text": "The paper presents performance analysis of modified SEPIC dc-dc converter with low input voltage and wide output voltage range. The operational analysis and the design is done for the 380W power output of the modified converter. The simulation results of modified SEPIC converter are obtained with PI controller for the output voltage. The results obtained with the modified converter are compared with the basic SEPIC converter topology for the rise time, peak time, settling time and steady state error of the output response for open loop. Voltage tracking curve is also shown for wide output voltage range. I. Introduction Dc-dc converters are widely used in regulated switched mode dc power supplies and in dc motor drive applications. The input to these converters is often an unregulated dc voltage, which is obtained by rectifying the line voltage and it will therefore fluctuate due to variations of the line voltages. Switched mode dc-dc converters are used to convert this unregulated dc input into a controlled dc output at a desired voltage level. The recent growth of battery powered applications and low voltage storage elements are increasing the demand of efficient step-up dc–dc converters. Typical applications are in adjustable speed drives, switch-mode power supplies, uninterrupted power supplies, and utility interface with nonconventional energy sources, battery energy storage systems, battery charging for electric vehicles, and power supplies for telecommunication systems etc.. These applications demand high step-up static gain, high efficiency and reduced weight, volume and cost. The step-up stage normally is the critical point for the design of high efficiency converters due to the operation with high input current and high output voltage [1]. The boost converter topology is highly effective in these applications but at low line voltage in boost converter, the switching losses are high because the input current has the maximum value and the highest step-up conversion is required. The inductor has to be oversized for the large current at low line input. As a result, a boost converter designed for universal-input applications is heavily oversized compared to a converter designed for a narrow range of input ac line voltage [2]. However, recently new non-isolated dc–dc converter topologies with basic boost are proposed, showing that it is possible to obtain high static gain, low voltage stress and low losses, improving the performance with respect to the classical topologies. Some single stage high power factor rectifiers are presented in [3-6]. A new …", "title": "" }, { "docid": "neg:1840078_14", "text": "This study presents the clinical results of a surgical technique that expands a narrow ridge when its orofacial width precludes the placement of dental implants. 
In 170 people, 329 implants were placed in sites needing ridge enlargement using the edentulous ridge expansion procedure. This technique involves a partial-thickness flap, crestal and vertical intraosseous incisions into the ridge, and buccal displacement of the buccal cortical plate, including a portion of the underlying spongiosa. Implants were placed in the expanded ridge and allowed to heal for 4 to 5 months. When indicated, the implants were exposed during a second-stage surgery to allow visualization of the implant site. Occlusal loading was applied during the following 3 to 5 months by provisional prostheses. The final phase was the placement of the permanent prostheses. The results yielded a success rate of 98.8%.", "title": "" }, { "docid": "neg:1840078_15", "text": "Internet technology is revolutionizing education. Teachers are developing massive open online courses (MOOCs) and using innovative practices such as flipped learning in which students watch lectures at home and engage in hands-on, problem solving activities in class. This work seeks to explore the design space afforded by these novel educational paradigms and to develop technology for improving student learning. Our design, based on the technique of adaptive content review, monitors student attention during educational presentations and determines which lecture topic students might benefit the most from reviewing. An evaluation of our technology within the context of an online art history lesson demonstrated that adaptively reviewing lesson content improved student recall abilities 29% over a baseline system and was able to match recall gains achieved by a full lesson review in less time. Our findings offer guidelines for a novel design space in dynamic educational technology that might support both teachers and online tutoring systems.", "title": "" }, { "docid": "neg:1840078_16", "text": "In this paper a review of architectures suitable for nonlinear real-time audio signal processing is presented. The computational and structural complexity of neural networks (NNs) represents, in fact, the main drawback that can hinder many practical NN multimedia applications. In particular, efficient neural architectures and their learning algorithms for real-time on-line audio processing are discussed. Moreover, applications in the fields of (1) audio signal recovery, (2) speech quality enhancement, (3) nonlinear transducer linearization, (4) learning based pseudo-physical sound synthesis, are briefly presented and discussed.", "title": "" }, { "docid": "neg:1840078_17", "text": "OBJECTIVE\nOptimal mental health care is dependent upon sensitive and early detection of mental health problems. We have introduced a state-of-the-art method for the current study for remote behavioral monitoring that transports assessment out of the clinic and into the environments in which individuals negotiate their daily lives. The objective of this study was to examine whether the information captured with multimodal smartphone sensors can serve as behavioral markers for one's mental health. We hypothesized that (a) unobtrusively collected smartphone sensor data would be associated with individuals' daily levels of stress, and (b) sensor data would be associated with changes in depression, stress, and subjective loneliness over time.\n\n\nMETHOD\nA total of 47 young adults (age range: 19-30 years) were recruited for the study. 
Individuals were enrolled as a single cohort and participated in the study over a 10-week period. Participants were provided with smartphones embedded with a range of sensors and software that enabled continuous tracking of their geospatial activity (using the Global Positioning System and wireless fidelity), kinesthetic activity (using multiaxial accelerometers), sleep duration (modeled using device-usage data, accelerometer inferences, ambient sound features, and ambient light levels), and time spent proximal to human speech (i.e., speech duration using microphone and speech detection algorithms). Participants completed daily ratings of stress, as well as pre- and postmeasures of depression (Patient Health Questionnaire-9; Spitzer, Kroenke, & Williams, 1999), stress (Perceived Stress Scale; Cohen et al., 1983), and loneliness (Revised UCLA Loneliness Scale; Russell, Peplau, & Cutrona, 1980).\n\n\nRESULTS\nMixed-effects linear modeling showed that sensor-derived geospatial activity (p < .05), sleep duration (p < .05), and variability in geospatial activity (p < .05), were associated with daily stress levels. Penalized functional regression showed associations between changes in depression and sensor-derived speech duration (p < .05), geospatial activity (p < .05), and sleep duration (p < .05). Changes in loneliness were associated with sensor-derived kinesthetic activity (p < .01).\n\n\nCONCLUSIONS AND IMPLICATIONS FOR PRACTICE\nSmartphones can be harnessed as instruments for unobtrusive monitoring of several behavioral indicators of mental health. Creative leveraging of smartphone sensing could provide novel opportunities for close-to-invisible psychiatric assessment at a scale and efficiency that far exceeds what is currently feasible with existing assessment technologies.", "title": "" }, { "docid": "neg:1840078_18", "text": "Metric learning has the aim to improve classification accuracy by learning a distance measure which brings data points from the same class closer together and pushes data points from different classes further apart. Recent research has demonstrated that metric learning approaches can also be applied to trees, such as molecular structures, abstract syntax trees of computer programs, or syntax trees of natural language, by learning the cost function of an edit distance, i.e. the costs of replacing, deleting, or inserting nodes in a tree. However, learning such costs directly may yield an edit distance which violates metric axioms, is challenging to interpret, and may not generalize well. In this contribution, we propose a novel metric learning approach for trees which we call embedding edit distance learning (BEDL) and which learns an edit distance indirectly by embedding the tree nodes as vectors, such that the Euclidean distance between those vectors supports class discrimination. We learn such embeddings by reducing the distance to prototypical trees from the same class and increasing the distance to prototypical trees from different classes. In our experiments, we show that BEDL improves upon the state-of-the-art in metric learning for trees on six benchmark data sets, ranging from computer science over biomedical data to a natural-language processing data set containing over 300,000 nodes.", "title": "" }, { "docid": "neg:1840078_19", "text": "As machine learning (ML) technology continues to spread by rapid evolution, the system or service using Machine Learning technology, called ML product, makes big impact on our life, society and economy. 
Meanwhile, Quality Assurance (QA) for ML products is considerably more difficult than for hardware, non-ML software and services, because the performance of ML technology is much better than that of non-ML technology in exchange for characteristics of ML products such as low explainability. We must sustain rapid evolution and reduce the quality risk of ML products simultaneously. In this paper, we show a Quality Assurance Framework for Machine Learning products. The scope of QA in this paper is limited to product evaluation. First, a policy of QA for ML products is proposed. General principles of product evaluation are introduced and applied to ML product evaluation as a part of the policy. They are composed of A-ARAI: Allowability, Achievability, Robustness, Avoidability and Improvability. A strategy of ML product evaluation is constructed as another part of the policy. A Quality Integrity Level for ML products is also modelled. Second, we propose a test architecture for ML product testing. It consists of test levels and fundamental test types of ML product testing, including snapshot testing, learning testing and confrontation testing. Finally, we define QA activity levels for ML products.", "title": "" } ]
1840079
Design of Power-Rail ESD Clamp Circuit With Ultra-Low Standby Leakage Current in Nanoscale
[ { "docid": "pos:1840079_0", "text": "Considering gate-oxide reliability, a new electrostatic discharge (ESD) protection scheme with an on-chip ESD bus (ESD_BUS) and a high-voltage-tolerant ESD clamp circuit for 1.2/2.5 V mixed-voltage I/O interfaces is proposed. The devices used in the high-voltage-tolerant ESD clamp circuit are all 1.2 V low-voltage N- and P-type MOS devices that can be safely operated under the 2.5-V bias conditions without suffering from the gate-oxide reliability issue. The four-mode (positive-to-VSS, negative-to-VSS, positive-to-VDD, and negative-to-VDD) ESD stresses on the mixed-voltage I/O pad and pin-to-pin ESD stresses can be effectively discharged by the proposed ESD protection scheme. The experimental results verified in a 0.13-mum CMOS process have confirmed that the proposed new ESD protection scheme has high human-body model (HBM) and machine-model (MM) ESD robustness with a fast turn-on speed. The proposed new ESD protection scheme, which is designed with only low- voltage devices, is an excellent and cost-efficient solution to protect mixed-voltage I/O interfaces.", "title": "" } ]
[ { "docid": "neg:1840079_0", "text": "For years, business academics and practitioners have operated in the belief that sustained competitive advantage could accrue from a variety of industry-level entry barriers, such as technological supremacy, patent protections, and government regulations. However, technological change and diffusion, rapid innovation, and deregulation have eroded these widely recognized barriers. In today’s environment, which requires flexibility, innovation, and speed-to-market, effectively developing and managing employees’ knowledge, experiences, skills, and expertise—collectively defined as “human capital”—has become a key success factor for sustained organizational performance.", "title": "" }, { "docid": "neg:1840079_1", "text": "Although prior research has examined how individual difference factors are related to relationship initiation and formation over the Internet (e.g., online dating sites, social networking sites), little research has examined how dispositional factors are related to other aspects of online dating. The present research therefore sought to examine the relationship between several dispositional factors, such as Big-Five personality traits, self-esteem, rejection sensitivity, and attachment styles, and the use of online dating sites and online dating behaviors. Rejection sensitivity was the only dispositional variable predictive of use of online dating sites whereby those higher in rejection sensitivity are more likely to use online dating sites than those lower in rejection sensitivity. We also found that those higher in rejection sensitivity, those lower in conscientiousness, and men indicated being more likely to engage in potentially risky behaviors related to meeting an online dating partner face-to-face. Further research is needed to further explore the relationships between these dispositional factors and online dating behaviors.", "title": "" }, { "docid": "neg:1840079_2", "text": "Wine is the product of complex interactions between fungi, yeasts and bacteria that commence in the vineyard and continue throughout the fermentation process until packaging. Although grape cultivar and cultivation provide the foundations of wine flavour, microorganisms, especially yeasts, impact on the subtlety and individuality of the flavour response. Consequently, it is important to identify and understand the ecological interactions that occur between the different microbial groups, species and strains. These interactions encompass yeast-yeast, yeast-filamentous fungi and yeast-bacteria responses. The surface of healthy grapes has a predominance of Aureobasidium pullulans, Metschnikowia, Hanseniaspora (Kloeckera), Cryptococcus and Rhodotorula species depending on stage of maturity. This microflora moderates the growth of spoilage and mycotoxigenic fungi on grapes, the species and strains of yeasts that contribute to alcoholic fermentation, and the bacteria that contribute to malolactic fermentation. Damaged grapes have increased populations of lactic and acetic acid bacteria that impact on yeasts during alcoholic fermentation. Alcoholic fermentation is characterised by the successional growth of various yeast species and strains, where yeast-yeast interactions determine the ecology. Through yeast-bacterial interactions, this ecology can determine progression of the malolactic fermentation, and potential growth of spoilage bacteria in the final product. 
The mechanisms by which one species/strain impacts on another in grape-wine ecosystems include: production of lytic enzymes, ethanol, sulphur dioxide and killer toxin/bacteriocin like peptides; nutrient depletion including removal of oxygen, and production of carbon dioxide; and release of cell autolytic components. Cell-cell communication through quorum sensing molecules needs investigation.", "title": "" }, { "docid": "neg:1840079_3", "text": "Healthcare costs have increased dramatically and the demand for highquality care will only grow in our aging society. At the same time,more event data are being collected about care processes. Healthcare Information Systems (HIS) have hundreds of tables with patient-related event data. Therefore, it is quite natural to exploit these data to improve care processes while reducing costs. Data science techniqueswill play a crucial role in this endeavor. Processmining can be used to improve compliance and performance while reducing costs. The chapter sets the scene for process mining in healthcare, thus serving as an introduction to this SpringerBrief.", "title": "" }, { "docid": "neg:1840079_4", "text": "Traditional background modeling and subtraction methods have a strong assumption that the scenes are of static structures with limited perturbation. These methods will perform poorly in dynamic scenes. In this paper, we present a solution to this problem. We first extend the local binary patterns from spatial domain to spatio-temporal domain, and present a new online dynamic texture extraction operator, named spatio- temporal local binary patterns (STLBP). Then we present a novel and effective method for dynamic background modeling and subtraction using STLBP. In the proposed method, each pixel is modeled as a group of STLBP dynamic texture histograms which combine spatial texture and temporal motion information together. Compared with traditional methods, experimental results show that the proposed method adapts quickly to the changes of the dynamic background. It achieves accurate detection of moving objects and suppresses most of the false detections for dynamic changes of nature scenes.", "title": "" }, { "docid": "neg:1840079_5", "text": "Sentiment analysis deals with identifying polarity orientation embedded in users' comments and reviews. It aims at discriminating positive reviews from negative ones. Sentiment is related to culture and language morphology. In this paper, we investigate the effects of language morphology on sentiment analysis in reviews written in the Arabic language. In particular, we investigate, in details, how negation affects sentiments. We also define a set of rules that capture the morphology of negations in Arabic. These rules are then used to detect sentiment taking care of negated words. Experimentations prove that our suggested approach is superior to several existing methods that deal with sentiment detection in Arabic reviews.", "title": "" }, { "docid": "neg:1840079_6", "text": "Query segmentation is essential to query processing. It aims to tokenize query words into several semantic segments and help the search engine to improve the precision of retrieval. In this paper, we present a novel unsupervised learning approach to query segmentation based on principal eigenspace similarity of queryword-frequency matrix derived from web statistics. Experimental results show that our approach could achieve superior performance of 35.8% and 17.7% in Fmeasure over the two baselines respectively, i.e. 
MI (Mutual Information) approach and EM optimization approach.", "title": "" }, { "docid": "neg:1840079_7", "text": "Language Models (LMs) for Automatic Speech Recognition (ASR) are typically trained on large text corpora from news articles, books and web documents. These types of corpora, however, are unlikely to match the test distribution of ASR systems, which expect spoken utterances. Therefore, the LM is typically adapted to a smaller held-out in-domain dataset that is drawn from the test distribution. We propose three LM adaptation approaches for Deep NN and Long Short-Term Memory (LSTM): (1) Adapting the softmax layer in the Neural Network (NN); (2) Adding a non-linear adaptation layer before the softmax layer that is trained only in the adaptation phase; (3) Training the extra non-linear adaptation layer in pre-training and adaptation phases. Aiming to improve upon a hierarchical Maximum Entropy (MaxEnt) second-pass LM baseline, which factors the model into word-cluster and word models, we build an NN LM that predicts only word clusters. Adapting the LSTM LM by training the adaptation layer in both training and adaptation phases (Approach 3), we reduce the cluster perplexity by 30% on a held-out dataset compared to an unadapted LSTM LM. Initial experiments using a state-of-the-art ASR system show a 2.3% relative reduction in WER on top of an adapted MaxEnt LM.", "title": "" }, { "docid": "neg:1840079_8", "text": "This paper presents a universal tuning system for harmonic operation of series-resonant inverters (SRI), based on a self-oscillating switching method. In the new tuning system, SRI can instantly operate in one of the switching frequency harmonics, e.g., the first, third, or fifth harmonic. Moreover, the new system can utilize pulse density modulation (PDM), phase shift (PS), and power–frequency control methods for each harmonic. Simultaneous combination of PDM and PS control method is also proposed for smoother power regulation. In addition, this paper investigates performance of selected harmonic operation based on phase-locked loop (PLL) circuits. In comparison with the fundamental harmonic operation, PLL circuits suffer from stability problem for the other harmonic operations. The proposed method has been verified using laboratory prototypes with resonant frequencies of 20 up to 75 kHz and output power of about 200 W.", "title": "" }, { "docid": "neg:1840079_9", "text": "Hidden Markov Model (HMM) based applications are common in various areas, but the incorporation of HMM's for anomaly detection is still in its infancy. This paper aims at classifying the TCP network traffic as an attack or normal using HMM. The paper's main objective is to build an anomaly detection system, a predictive model capable of discriminating between normal and abnormal behavior of network traffic. In the training phase, special attention is given to the initialization and model selection issues, which makes the training phase particularly effective. For training HMM, 12.195% features out of the total features (5 features out of 41 features) present in the KDD Cup 1999 data set are used. Result of tests on the KDD Cup 1999 data set shows that the proposed system is able to classify network traffic in proportion to the number of features used for training HMM. We are extending our work on a larger data set for building an anomaly detection system.", "title": "" }, { "docid": "neg:1840079_10", "text": "A unique and miniaturized dual-band coplanar waveguide (CPW)-fed antenna is presented. 
The proposed antenna comprises a rectangular patch that is surrounded by upper and lower ground-plane sections that are interconnected by a high-impedance microstrip line. The proposed antenna structure generates two separate impedance bandwidths to cover frequency bands of GSM and Wi-Fi/WLAN. The antenna realized is relatively small in size $(17\\times 20\\ {\\hbox{mm}}^{2})$ and operates over frequency ranges 1.60–1.85 and 4.95–5.80 GHz, making it suitable for GSM and Wi-Fi/WLAN applications. In addition, the antenna is circularly polarized in the GSM band. Experimental results show the antenna exhibits monopole-like radiation characteristics and a good antenna gain over its operating bands. The measured and simulated results presented show good agreement.", "title": "" }, { "docid": "neg:1840079_11", "text": "Address Space Layout Randomization (ASLR) can increase the cost of exploiting memory corruption vulnerabilities. One major weakness of ASLR is that it assumes the secrecy of memory addresses and is thus ineffective in the face of memory disclosure vulnerabilities. Even fine-grained variants of ASLR are shown to be ineffective against memory disclosures. In this paper we present an approach that synchronizes randomization with potential runtime disclosure. By applying rerandomization to the memory layout of a process every time it generates an output, our approach renders disclosures stale by the time they can be used by attackers to hijack control flow. We have developed a fully functioning prototype for x86_64 C programs by extending the Linux kernel, GCC, and the libc dynamic linker. The prototype operates on C source code and recompiles programs with a set of augmented information required to track pointer locations and support runtime rerandomization. Using this augmented information we dynamically relocate code segments and update code pointer values during runtime. Our evaluation on the SPEC CPU2006 benchmark, along with other applications, show that our technique incurs a very low performance overhead (2.1% on average).", "title": "" }, { "docid": "neg:1840079_12", "text": "Named entity recognition (NER) is a subtask of information extraction that seeks to locate and classify atomic elements in text into predefined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc. We use the JavaNLP repository(http://nlp.stanford.edu/javanlp/ ) for its implementation of a Conditional Random Field(CRF) and a Conditional Markov Model(CMM), also called a Maximum Entropy Markov Model. We have obtained results on majority voting with different labeling schemes, with backward and forward parsing of the CMM, and also some results when we trained a decision tree to take a decision based on the outputs of the different labeling schemes. We have also tried to solve the problem of label inconsistency issue by attempting the naive approach of enforcing hard label-consistency by choosing the majority entity for a sequence of tokens, in the specific test document, as well as the whole test corpus, and managed to get reasonable gains. We also attempted soft label consistency in the following way. We use a portion of the training data to train a CRF to make predictions on the rest of the train data and on the test data. 
We then train a second CRF with the majority label predictions as additional input features.", "title": "" }, { "docid": "neg:1840079_13", "text": "Dynamic time warping (DTW) is used for the comparison and processing of nonlinear signals and constitutes a widely researched field of study. The method has been initially designed for, and applied to, signals representing audio data. Afterwords it has been successfully modified and applied to many other fields of study. In this paper, we present the results of researches on the generalized DTW method designed for use with rotational sets of data parameterized by quaternions. The need to compare and process quaternion time series has been gaining in importance recently. Three-dimensional motion data processing is one of the most important applications here. Specifically, it is applied in the context of motion capture, and in many cases all rotational signals are described in this way. We propose a construction of generalized method called quaternion dynamic time warping (QDTW), which makes use of specific properties of quaternion space. It allows for the creation of a family of algorithms that deal with the higher order features of the rotational trajectory. This paper focuses on the analysis of the properties of this new approach. Numerical results show that the proposed method allows for efficient element assignment. Moreover, when used as the measure of similarity for a clustering task, the method helps to obtain good clustering performance both for synthetic and real datasets.", "title": "" }, { "docid": "neg:1840079_14", "text": "In this paper, we address the problem of automatically extracting disease-symptom relationships from health question-answer forums due to its usefulness for medical question answering system. To cope with the problem, we divide our main task into two subtasks since they exhibit different challenges: (1) disease-symptom extraction across sentences, (2) disease-symptom extraction within a sentence. For both subtasks, we employed machine learning approach leveraging several hand-crafted features, such as syntactic features (i.e., information from part-of-speech tags) and pre-trained word vectors. Furthermore, we basically formulate our problem as a binary classification task, in which we classify the \"indicating\" relation between a pair of Symptom and Disease entity. To evaluate the performance, we also collected and annotated corpus containing 463 pairs of question-answer threads from several Indonesian health consultation websites. Our experiment shows that, as our expected, the first subtask is relatively more difficult than the second subtask. For the first subtask, the extraction of disease-symptom relation only achieved 36% in terms of F1 measure, while the second one was 76%. To the best of our knowledge, this is the first work addressing such relation extraction task for both \"across\" and \"within\" sentence, especially in Indonesia.", "title": "" }, { "docid": "neg:1840079_15", "text": "Deepened periodontal pockets exert a significant pathological burden on the host and its immune system, particularly in a patient with generalized moderate to severe periodontitis. This burden is extensive and longitudinal, occurring over decades of disease development. 
Considerable diagnostic and prognostic successes in this regard have come from efforts to measure the depths of the pockets and their contents, including level of inflammatory mediators, cellular exudates and microbes; however, the current standard of care for measuring these pockets, periodontal probing, is an analog technology in a digital age. Measurements obtained by probing are variable, operator dependent and influenced by site-specific factors. Despite these limitations, manual probing is still the standard of care for periodontal diagnostics globally. However, it is becoming increasingly clear that this technology needs to be updated to be compatible with the digital technologies currently being used to image other orofacial structures, such as maxillary sinuses, alveolar bone, nerve foramina and endodontic canals in 3 dimensions. This review aims to summarize the existing technology, as well as new imaging strategies that could be utilized for accurate evaluation of periodontal pocket dimensions.", "title": "" }, { "docid": "neg:1840079_16", "text": "Learning through experience is time-consuming, inefficient and often bad for your cortisol levels. To address this problem, a number of recently proposed teacherstudent methods have demonstrated the benefits of private tuition, in which a single model learns from an ensemble of more experienced tutors. Unfortunately, the cost of such supervision restricts good representations to a privileged minority. Unsupervised learning can be used to lower tuition fees, but runs the risk of producing networks that require extracurriculum learning to strengthen their CVs and create their own LinkedIn profiles1. Inspired by the logo on a promotional stress ball at a local recruitment fair, we make the following three contributions. First, we propose a novel almost no supervision training algorithm that is effective, yet highly scalable in the number of student networks being supervised, ensuring that education remains affordable. Second, we demonstrate our approach on a typical use case: learning to bake, developing a method that tastily surpasses the current state of the art. Finally, we provide a rigorous quantitive analysis of our method, proving that we have access to a calculator2. Our work calls into question the long-held dogma that life is the best teacher. Give a student a fish and you feed them for a day, teach a student to gatecrash seminars and you feed them until the day they move to Google.", "title": "" }, { "docid": "neg:1840079_17", "text": "The present research aims at gaining a better insight on the psychological barriers to the introduction of social robots in society at large. Based on social psychological research on intergroup distinctiveness, we suggested that concerns toward this technology are related to how we define and defend our human identity. A threat to distinctiveness hypothesis was advanced. We predicted that too much perceived similarity between social robots and humans triggers concerns about the negative impact of this technology on humans, as a group, and their identity more generally because similarity blurs category boundaries, undermining human uniqueness. Focusing on the appearance of robots, in two studies we tested the validity of this hypothesis. 
In both studies, participants were presented with pictures of three types of robots that differed in their anthropomorphic appearance varying from no resemblance to humans (mechanical robots), to some body shape resemblance (biped humanoids) to a perfect copy of human body (androids). Androids raised the highest concerns for the potential damage to humans, followed by humanoids and then mechanical robots. In Study 1, we further demonstrated that robot anthropomorphic appearance (and not the attribution of mind and human nature) was responsible for the perceived damage that the robot could cause. In Study 2, we gained a clearer insight in the processes underlying this effect by showing that androids were also judged as most threatening to the human–robot distinction and that this perception was responsible for the higher perceived damage to humans. Implications of these findings for social robotics are discussed.", "title": "" }, { "docid": "neg:1840079_18", "text": "IMPORTANCE\nHealth care-associated infections (HAIs) account for a large proportion of the harms caused by health care and are associated with high costs. Better evaluation of the costs of these infections could help providers and payers to justify investing in prevention.\n\n\nOBJECTIVE\nTo estimate costs associated with the most significant and targetable HAIs.\n\n\nDATA SOURCES\nFor estimation of attributable costs, we conducted a systematic review of the literature using PubMed for the years 1986 through April 2013. For HAI incidence estimates, we used the National Healthcare Safety Network of the Centers for Disease Control and Prevention (CDC).\n\n\nSTUDY SELECTION\nStudies performed outside the United States were excluded. Inclusion criteria included a robust method of comparison using a matched control group or an appropriate regression strategy, generalizable populations typical of inpatient wards and critical care units, methodologic consistency with CDC definitions, and soundness of handling economic outcomes.\n\n\nDATA EXTRACTION AND SYNTHESIS\nThree review cycles were completed, with the final iteration carried out from July 2011 to April 2013. Selected publications underwent a secondary review by the research team.\n\n\nMAIN OUTCOMES AND MEASURES\nCosts, inflated to 2012 US dollars.\n\n\nRESULTS\nUsing Monte Carlo simulation, we generated point estimates and 95% CIs for attributable costs and length of hospital stay. On a per-case basis, central line-associated bloodstream infections were found to be the most costly HAIs at $45,814 (95% CI, $30,919-$65,245), followed by ventilator-associated pneumonia at $40,144 (95% CI, $36,286-$44,220), surgical site infections at $20,785 (95% CI, $18,902-$22,667), Clostridium difficile infection at $11,285 (95% CI, $9118-$13,574), and catheter-associated urinary tract infections at $896 (95% CI, $603-$1189). 
The total annual costs for the 5 major infections were $9.8 billion (95% CI, $8.3-$11.5 billion), with surgical site infections contributing the most to overall costs (33.7% of the total), followed by ventilator-associated pneumonia (31.6%), central line-associated bloodstream infections (18.9%), C difficile infections (15.4%), and catheter-associated urinary tract infections (<1%).\n\n\nCONCLUSIONS AND RELEVANCE\nWhile quality improvement initiatives have decreased HAI incidence and costs, much more remains to be done. As hospitals realize savings from prevention of these complications under payment reforms, they may be more likely to invest in such strategies.", "title": "" }, { "docid": "neg:1840079_19", "text": "Simulation is the research tool of choice for a majority of the mobile ad hoc network (MANET) community. However, while the use of simulation has increased, the credibility of the simulation results has decreased. To determine the state of MANET simulation studies, we surveyed the 2000-2005 proceedings of the ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc). From our survey, we found significant shortfalls. We present the results of our survey in this paper. We then summarize common simulation study pitfalls found in our survey. Finally, we discuss the tools available that aid the development of rigorous simulation studies. We offer these results to the community with the hope of improving the credibility of MANET simulation-based studies.", "title": "" } ]
1840080
Hierarchical Parsing Net: Semantic Scene Parsing From Global Scene to Objects
[ { "docid": "pos:1840080_0", "text": "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.", "title": "" } ]
[ { "docid": "neg:1840080_0", "text": "We describe the class of convexified convolutional neural networks (CCNNs), which capture the parameter sharing of convolutional neural networks in a convex manner. By representing the nonlinear convolutional filters as vectors in a reproducing kernel Hilbert space, the CNN parameters can be represented in terms of a lowrank matrix, and the rank constraint can be relaxed so as to obtain a convex optimization problem. For learning two-layer convolutional neural networks, we prove that the generalization error obtained by a convexified CNN converges to that of the best possible CNN. For learning deeper networks, we train CCNNs in a layerwise manner. Empirically, we find that CCNNs achieve competitive or better performance than CNNs trained by backpropagation, SVMs, fully-connected neural networks, stacked denoising auto-encoders, and other baseline methods.", "title": "" }, { "docid": "neg:1840080_1", "text": "Visual search is necessary for rapid scene analysis because information processing in the visual system is limited to one or a few regions at one time [3]. To select potential regions or objects of interest rapidly with a task-independent manner, the so-called \"visual saliency\", is important for reducing the complexity of scenes. From the perspective of engineering, modeling visual saliency usually facilitates subsequent higher visual processing, such as image re-targeting [10], image compression [12], object recognition [16], etc. Visual attention model is deeply studied in recent decades. Most of existing models are built on the biologically-inspired architecture based on the famous Feature Integration Theory (FIT) [19, 20]. For instance, Itti et al. proposed a famous saliency model which computes the saliency map with local contrast in multiple feature dimensions, such as color, orientation, etc. [15] [23]. However, FIT-based methods perhaps risk being immersed in local saliency (e.g., object boundaries), because they employ local contrast of features in limited regions and ignore the global information. Visual attention models usually provide location information of the potential objects, but miss some object-related information (e.g., object surfaces) that is necessary for further object detection and recognition. Distinguished from FIT, Guided Search Theory (GST) [3] [24] provides a mechanism to search the regions of interest (ROI) or objects with the guidance from scene layout or top-down sources. The recent version of GST claims that the visual system searches objects of interest along two parallel pathways, i.e., the non-selective pathway and the selective pathway [3]. This new visual search strategy allows observers to extract spatial layout (or gist) information rapidly from entire scene via non-selective pathway. Then, this context information of scene acts as top-down modulation to guide the salient object search along the selective pathway. This two-pathway-based search strategy provides a parallel processing of global and local information for rapid visual search. Referring to the GST, we assume that the non-selective pathway provides \"where\" information and prior of multiple objects for visual search, a counterpart to visual selective saliency, and we use certain simple and fast fixation prediction method to provide an initial estimate of where the objects present. 
At the same time, the bottom-up visual selective pathway extracts fine image features in multiple cue channels, which could be regarded as a counterpart to the \"what\" pathway in visual system for object recognition. When these bottom-up features meet \"where\" information of objects, the visual system …", "title": "" }, { "docid": "neg:1840080_2", "text": "Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data is available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step-forward in the overall goal of achieving a practical user-independent BCI system.", "title": "" }, { "docid": "neg:1840080_3", "text": "BACKGROUND\nAesthetic surgery of female genitalia is an uncommon procedure, and of the techniques available, labia minora reduction can achieve excellent results. Recently, more conservative labia minora reduction techniques have been developed, because the simple isolated strategy of straight amputation does not ensure a favorable outcome. This study was designed to review a series of labia minora reductions using inferior wedge resection and superior pedicle flap reconstruction.\n\n\nMETHODS\nTwenty-one patients underwent inferior wedge resection and superior pedicle flap reconstruction. The mean follow-up was 46 months. Aesthetic results and postoperative outcomes were collected retrospectively and evaluated.\n\n\nRESULTS\nTwenty patients (95.2 percent) underwent bilateral procedures, and 90.4 percent of patients had a congenital labia minora hypertrophy. Five complications occurred in 21 patients (23.8 percent). Wound-healing problems were observed more frequently. The cosmetic result was considered to be good or very good in 85.7 percent of patients, and 95.2 percent were very satisfied with the procedure. All complications except one were observed immediately after the procedure.\n\n\nCONCLUSIONS\nThe results of this study demonstrate that inferior wedge resection and superior pedicle flap reconstruction is a simple and consistent technique and deserves a place among the main procedures available. The complications observed were not unexpected and did not extend hospital stay or interfere with the normal postoperative period. 
The success of the procedure depends on patient selection, careful preoperative planning, and adequate intraoperative management.", "title": "" }, { "docid": "neg:1840080_4", "text": "We apply basic statistical reasoning to signal reconstruction by machine learning – learning to map corrupted observations to clean signals – with a simple and powerful conclusion: it is possible to learn to restore images by only looking at corrupted examples, at performance at and sometimes exceeding training using clean data, without explicit image priors or likelihood models of the corruption. In practice, we show that a single model learns photographic noise removal, denoising synthetic Monte Carlo images, and reconstruction of undersampled MRI scans – all corrupted by different processes – based on noisy data only.", "title": "" }, { "docid": "neg:1840080_5", "text": "Information on the Nuclear Magnetic Resonance Gyro under development by Northrop Grumman Corporation is presented. The basics of Operation are summarized, a review of the completed phases is presented, and the current state of development and progress in phase 4 is discussed. Many details have been left out for the sake of brevity, but the principles are still complete.", "title": "" }, { "docid": "neg:1840080_6", "text": "Disruption of electric power operations can be catastrophic on the national security and economy. Due to the complexity of widely dispersed assets and the interdependency between computer, communication, and power systems, the requirement to meet security and quality compliance on the operations is a challenging issue. In recent years, NERC's cybersecurity standard was initiated to require utilities compliance on cybersecurity in control systems - NERC CIP 1200. This standard identifies several cyber-related vulnerabilities that exist in control systems and recommends several remedial actions (e.g., best practices). This paper is an overview of the cybersecurity issues for electric power control and automation systems, the control architectures, and the possible methodologies for vulnerability assessment of existing systems.", "title": "" }, { "docid": "neg:1840080_7", "text": "Research in signal processing shows that a variety of transforms have been introduced to map the data from the original space into the feature space, in order to efficiently analyze a signal. These techniques differ in their basis functions, that is used for projecting the signal into a higher dimensional space. One of the widely used schemes for quasi-stationary and non-stationary signals is the time-frequency (TF) transforms, characterized by specific kernel functions. This work introduces a novel class of Ramanujan Fourier Transform (RFT) based TF transform functions, constituted by Ramanujan sums (RS) basis. The proposed special class of transforms offer high immunity to noise interference, since the computation is carried out only on co-resonant components, during analysis of signals. Further, we also provide a 2-D formulation of the RFT function. Experimental validation using synthetic examples, indicates that this technique shows potential for obtaining relatively sparse TF-equivalent representation and can be optimized for characterization of certain real-life signals.", "title": "" }, { "docid": "neg:1840080_8", "text": "We present an efficient method for detecting anomalies in videos. 
Recent applications of convolutional neural networks have shown promises of convolutional layers for object detection and recognition, especially in images. However, convolutional neural networks are supervised and require labels as learning signals. We propose a spatiotemporal architecture for anomaly detection in videos including crowded scenes. Our architecture includes two main components, one for spatial feature representation, and one for learning the temporal evolution of the spatial features. Experimental results on Avenue, Subway and UCSD benchmarks confirm that the detection accuracy of our method is comparable to state-of-the-art methods at a considerable speed of up to 140 fps.", "title": "" }, { "docid": "neg:1840080_9", "text": "Cortical circuits in the brain are refined by experience during critical periods early in postnatal life. Critical periods are regulated by the balance of excitatory and inhibitory (E/I) neurotransmission in the brain during development. There is now increasing evidence of E/I imbalance in autism, a complex genetic neurodevelopmental disorder diagnosed by abnormal socialization, impaired communication, and repetitive behaviors or restricted interests. The underlying cause is still largely unknown and there is no fully effective treatment or cure. We propose that alteration of the expression and/or timing of critical period circuit refinement in primary sensory brain areas may significantly contribute to autistic phenotypes, including cognitive and behavioral impairments. Dissection of the cellular and molecular mechanisms governing well-established critical periods represents a powerful tool to identify new potential therapeutic targets to restore normal plasticity and function in affected neuronal circuits.", "title": "" }, { "docid": "neg:1840080_10", "text": "This paper presents an optimization based algorithm for underwater image de-hazing problem. Underwater image de-hazing is the most prominent area in research. Underwater images are corrupted due to absorption and scattering. With the effect of that, underwater images have the limitation of low visibility, low color and poor natural appearance. To avoid the mentioned problems, Enhanced fuzzy intensification method is proposed. For each color channel, enhanced fuzzy membership function is derived. Second, the correction of fuzzy based pixel intensification is carried out for each channel to remove haze and to enhance visibility and color. The post processing of fuzzy histogram equalization is implemented for red channel alone when the captured image is having highest value of red channel pixel values. The proposed method provides better results in terms maximum entropy and PSNR with minimum MSE with very minimum computational time compared to existing methodologies.", "title": "" }, { "docid": "neg:1840080_11", "text": "This paper presents an efficient design approach for band-pass post filters in waveguides, based on mode-matching technique. With this technique, the characteristics of symmetrical cylindrical post arrangements in the cross-section of the considered waveguides can be analyzed accurately and quickly. Importantly, the approach is applicable to post filters in waveguide but can be extended to Substrate Integrated Waveguide (SIW) technologies. The fast computations provide accurate relationships for the K factors as a function of the post radii and the distances between posts, and allow analyzing the influence of machining tolerances on the filter performance. 
The computations are used to choose reasonable posts for designing band-pass filters, while the error analysis helps to judge whether a given machining precision is sufficient. The approach is applied to a Chebyshev band-pass post filter and a band-pass SIW filter with a center frequency of 10.5 GHz and a fractional bandwidth of 9.52% with verification via full-wave simulations using HFSS and measurements on manufactured prototypes.", "title": "" }, { "docid": "neg:1840080_12", "text": "Software developers face a number of challenges when creating applications that attempt to keep important data confidential. Even with diligent attention paid to correct software design and implementation practices, secrets can still be exposed through a single flaw in any of the privileged code on the platform, code which may have been written by thousands of developers from hundreds of organizations throughout the world. Intel is developing innovative security technology which provides the ability for software developers to maintain control of the security of sensitive code and data by creating trusted domains within applications to protect critical information during execution and at rest. This paper will describe how this technology has been effectively used in lab exercises to protect private information in applications including enterprise rights management, video chat, trusted financial transactions, and others. Examples will include both protection of local processing and the establishment of secure communication with cloud services. It will illustrate useful software design patterns that can be followed to create many additional types of trusted software solutions.", "title": "" }, { "docid": "neg:1840080_13", "text": "BACKGROUND\nThe purpose of our study was to evaluate inter-observer reliability of the Three-Column classifications with conventional Schatzker and AO/OTA of Tibial Plateau Fractures.\n\n\nMETHODS\n50 cases involving all kinds of the fracture patterns were collected from 278 consecutive patients with tibial plateau fractures who were internal fixed in department of Orthopedics and Trauma III in Shanghai Sixth People's Hospital. The series were arranged randomly, numbered 1 to 50. Four observers were chosen to classify these cases. Before the research, a classification training session was held to each observer. They were given as much time as they required evaluating the radiographs accurately and independently. The classification choices made at the first viewing were not available during the second viewing. The observers were not provided with any feedback after the first viewing. The kappa statistic was used to analyze the inter-observer reliability of the three fracture classification made by the four observers.\n\n\nRESULTS\nThe mean kappa values for inter-observer reliability regarding Schatzker classification was 0.567 (range: 0.513-0.589), representing \"moderate agreement\". The mean kappa values for inter-observer reliability regarding AO/ASIF classification systems was 0.623 (range: 0.510-0.710) representing \"substantial agreement\". The mean kappa values for inter-observer reliability regarding Three-Column classification systems was 0.766 (range: 0.706-0.890), representing \"substantial agreement\".\n\n\nCONCLUSION\nThree-Column classification, which is dependent on the understanding of the fractures using CT scans as well as the 3D reconstruction can identity the posterior column fracture or fragment. 
It showed \"substantial agreement\" in the assessment of inter-observer reliability, higher than the conventional Schatzker and AO/OTA classifications. We finally conclude that Three-Column classification provides a higher agreement among different surgeons and could be popularized and widely practiced in other clinical centers.", "title": "" }, { "docid": "neg:1840080_14", "text": "The stator permanent magnet (PM) machines have simple and robust rotor structure as well as high torque density. The hybrid excitation topology can realize flux regulation and wide constant power operating capability of the stator PM machines when used in dc power systems. This paper compares and analyzes the electromagnetic performance of different hybrid excitation stator PM machines according to different combination modes of PMs, excitation winding, and iron flux bridge. Then, the control strategies for voltage regulation of dc power systems are discussed based on different critical control variables including the excitation current, the armature current, and the electromagnetic torque. Furthermore, an improved direct torque control (DTC) strategy is investigated to improve system performance. A parallel hybrid excitation flux-switching generator employing the improved DTC which shows excellent dynamic and steady-state performance has been achieved experimentally.", "title": "" }, { "docid": "neg:1840080_15", "text": "This paper presents a novel two-stage low dropout regulator (LDO) that minimizes output noise via a pre-regulator stage and achieves high power supply rejection via a simple subtractor circuit in the power driver stage. The LDO is fabricated with a standard 0.35mum CMOS process and occupies 0.26 mm2 and 0.39mm2 for single and dual output respectively. Measurement showed PSR is 60dB at 10kHz and integrated noise is 21.2uVrms ranging from 1kHz to 100kHz", "title": "" }, { "docid": "neg:1840080_16", "text": "Edith Penrose’s (1959) book, The Theory of the Growth of the Firm, is considered by many scholars in the strategy field to be the seminal work that provided the intellectual foundations for the modern, resource-based theory of the firm. However, the present paper suggests that Penrose’s direct or intended contribution to resource-based thinking has been misinterpreted. Penrose never aimed to provide useful strategy prescriptions for managers to create a sustainable stream of rents; rather, she tried to rigorously describe the processes through which firms grow. In her theory, rents were generally assumed not to occur. If they arose this reflected an inefficient macro-level outcome of an otherwise efficient micro-level growth process. Nevertheless, her ideas have undoubtedly stimulated ‘good conversation’ within the strategy field in the spirit of Mahoney and Pandian (1992); their emerging use by some scholars as building blocks in models that show how sustainable competitive advantage and rents can be achieved is undeniable, although such use was never intended by Edith Penrose herself. Copyright  2002 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "neg:1840080_17", "text": "Predicting panic is of critical importance in many areas of human and animal behavior, notably in the context of economics. The recent financial crisis is a case in point. Panic may be due to a specific external threat or self-generated nervousness. 
Here we show that the recent economic crisis and earlier large single-day panics were preceded by extended periods of high levels of market mimicry--direct evidence of uncertainty and nervousness, and of the comparatively weak influence of external news. High levels of mimicry can be a quite general indicator of the potential for self-organized crises.", "title": "" }, { "docid": "neg:1840080_18", "text": "We present a model for attacking various cryptographic schemes by taking advantage of random hardware faults. The model consists of a black-box containing some cryptographic secret. The box interacts with the outside world by following a cryptographic protocol. The model supposes that from time to time the box is affected by a random hardware fault causing it to output incorrect values. For example, the hardware fault flips an internal register bit at some point during the computation. We show that for many digital signature and identification schemes these incorrect outputs completely expose the secrets stored in the box. We present the following results: (1) The secret signing key used in an implementation of RSA based on the Chinese Remainder Theorem (CRT) is completely exposed from a single erroneous RSA signature, (2) for non-CRT implementations of RSA the secret key is exposed given a large number (e.g. 1000) of erroneous signatures, (3) the secret key used in Fiat—Shamir identification is exposed after a small number (e.g. 10) of faulty executions of the protocol, and (4) the secret key used in Schnorr's identification protocol is exposed after a much larger number (e.g. 10,000) of faulty executions. Our estimates for the number of necessary faults are based on standard security parameters such as a 1024-bit modulus, and a 2 -40 identification error probability. Our results demonstrate the importance of preventing errors in cryptographic computations. We conclude the paper with various methods for preventing these attacks.", "title": "" }, { "docid": "neg:1840080_19", "text": "Whether they are made to entertain you, or to educate you, good video games engage you. Significant research has tried to understand engagement in games by measuring player experience (PX). Traditionally, PX evaluation has focused on the enjoyment of game, or the motivation of players; these factors no doubt contribute to engagement, but do decisions regarding play environment (e.g., the choice of game controller) affect the player more deeply than that? We apply self-determination theory (specifically satisfaction of needs and self-discrepancy represented using the five factors model of personality) to explain PX in an experiment with controller type as the manipulation. Our study shows that there are a number of effects of controller on PX and in-game player personality. These findings provide both a lens with which to view controller effects in games and a guide for controller choice in the design of new games. Our research demonstrates that including self-characteristics assessment in the PX evaluation toolbox is valuable and useful for understanding player experience.", "title": "" } ]
1840081
MLC Toolbox: A MATLAB/OCTAVE Library for Multi-Label Classification
[ { "docid": "pos:1840081_0", "text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.", "title": "" }, { "docid": "pos:1840081_1", "text": "Most classification problems associate a single class to each example or instance. However, there are many classification tasks where each instance can be associated with one or more classes. This group of problems represents an area known as multi-label classification. One typical example of multi-label classification problems is the classification of documents, where each document can be assigned to more than one class. This tutorial presents the most frequently used techniques to deal with these problems in a pedagogical manner, with examples illustrating the main techniques and proposing a taxonomy of multi-label techniques that highlights the similarities and differences between these techniques.", "title": "" }, { "docid": "pos:1840081_2", "text": "The explosion of online content has made the management of such content non-trivial. Web-related tasks such as web page categorization, news filtering, query categorization, tag recommendation, etc. often involve the construction of multi-label categorization systems on a large scale. Existing multi-label classification methods either do not scale or have unsatisfactory performance. In this work, we propose MetaLabeler to automatically determine the relevant set of labels for each instance without intensive human involvement or expensive cross-validation. Extensive experiments conducted on benchmark data show that the MetaLabeler tends to outperform existing methods. Moreover, MetaLabeler scales to millions of multi-labeled instances and can be deployed easily. This enables us to apply the MetaLabeler to a large scale query categorization problem in Yahoo!, yielding a significant improvement in performance.", "title": "" } ]
[ { "docid": "neg:1840081_0", "text": "We study the problem of generating source code in a strongly typed, Java-like programming language, given a label (for example a set of API calls or types) carrying a small amount of information about the code that is desired. The generated programs are expected to respect a “realistic” relationship between programs and labels, as exemplified by a corpus of labeled programs available during training. Two challenges in such conditional program generation are that the generated programs must satisfy a rich set of syntactic and semantic constraints, and that source code contains many low-level features that impede learning. We address these problems by training a neural generator not on code but on program sketches, or models of program syntax that abstract out names and operations that do not generalize across programs. During generation, we infer a posterior distribution over sketches, then concretize samples from this distribution into type-safe programs using combinatorial techniques. We implement our ideas in a system for generating API-heavy Java code, and show that it can often predict the entire body of a method given just a few API calls or data types that appear in the method.", "title": "" }, { "docid": "neg:1840081_1", "text": "The task of end-to-end relation extraction consists of two sub-tasks: i) identifying entity mentions along with their types and ii) recognizing semantic relations among the entity mention pairs. It has been shown that for better performance, it is necessary to address these two sub-tasks jointly [22,13]. We propose an approach for simultaneous extraction of entity mentions and relations in a sentence, by using inference in Markov Logic Networks (MLN) [21]. We learn three different classifiers : i) local entity classifier, ii) local relation classifier and iii) “pipeline” relation classifier which uses predictions of the local entity classifier. Predictions of these classifiers may be inconsistent with each other. We represent these predictions along with some domain knowledge using weighted first-order logic rules in an MLN and perform joint inference over the MLN to obtain a global output with minimum inconsistencies. Experiments on the ACE (Automatic Content Extraction) 2004 dataset demonstrate that our approach of joint extraction using MLNs outperforms the baselines of individual classifiers. Our end-to-end relation extraction performance is better than 2 out of 3 previous results reported on the ACE 2004 dataset.", "title": "" }, { "docid": "neg:1840081_2", "text": "Event cameras offer many advantages over standard frame-based cameras, such as low latency, high temporal resolution, and a high dynamic range. They respond to pixellevel brightness changes and, therefore, provide a sparse output. However, in textured scenes with rapid motion, millions of events are generated per second. Therefore, stateof-the-art event-based algorithms either require massive parallel computation (e.g., a GPU) or depart from the event-based processing paradigm. Inspired by frame-based pre-processing techniques that reduce an image to a set of features, which are typically the input to higher-level algorithms, we propose a method to reduce an event stream to a corner event stream. Our goal is twofold: extract relevant tracking information (corners do not suffer from the aperture problem) and decrease the event rate for later processing stages. 
Our event-based corner detector is very efficient due to its design principle, which consists of working on the Surface of Active Events (a map with the timestamp of the latest event at each pixel) using only comparison operations. Our method asynchronously processes event by event with very low latency. Our implementation is capable of processing millions of events per second on a single core (less than a micro-second per event) and reduces the event rate by a factor of 10 to 20.", "title": "" }, { "docid": "neg:1840081_3", "text": "Neural network based methods have obtained great progress on a variety of natural language processing tasks. However, it is still a challenge task to model long texts, such as sentences and documents. In this paper, we propose a multi-timescale long short-term memory (MT-LSTM) neural network to model long texts. MTLSTM partitions the hidden states of the standard LSTM into several groups. Each group is activated at different time periods. Thus, MT-LSTM can model very long documents as well as short sentences. Experiments on four benchmark datasets show that our model outperforms the other neural models in text classification task.", "title": "" }, { "docid": "neg:1840081_4", "text": "Generative Adversarial Networks (GANs) have recently emerged as powerful generative models. GANs are trained by an adversarial process between a generative network and a discriminative network. It is theoretically guaranteed that, in the nonparametric regime, by arriving at the unique saddle point of a minimax objective function, the generative network generates samples from the data distribution. However, in practice, getting close to this saddle point has proven to be difficult, resulting in the ubiquitous problem of “mode collapse”. The root of the problems in training GANs lies on the unbalanced nature of the game being played. Here, we propose to level the playing field and make the minimax game balanced by “heating” the data distribution. The empirical distribution is frozen at temperature zero; GANs are instead initialized at infinite temperature, where learning is stable. By annealing the heated data distribution, we initialized the network at each temperature with the learnt parameters of the previous higher temperature. We posited a conjecture that learning under continuous annealing in the nonparametric regime is stable, and proposed an algorithm in corollary. In our experiments, the annealed GAN algorithm, dubbed β-GAN, trained with unmodified objective function was stable and did not suffer from mode collapse.", "title": "" }, { "docid": "neg:1840081_5", "text": "Keyphrase extraction is a fundamental technique in natural language processing. It enables documents to be mapped to a concise set of phrases that can be used for indexing, clustering, ontology building, auto-tagging and other information organization schemes. Two major families of unsupervised keyphrase extraction algorithms may be characterized as statistical and graph-based. We present a hybrid statistical-graphical algorithm that capitalizes on the heuristics of both families of algorithms and is able to outperform the state of the art in unsupervised keyphrase extraction on several datasets.", "title": "" }, { "docid": "neg:1840081_6", "text": "Minimally invasive principles should be the driving force behind rehabilitating young individuals affected by severe dental erosion. 
The maxillary anterior teeth of a patient, class ACE IV, has been treated following the most conservatory approach, the Sandwich Approach. These teeth, if restored by conventional dentistry (eg, crowns) would have required elective endodontic therapy and crown lengthening. To preserve the pulp vitality, six palatal resin composite veneers and four facial ceramic veneers were delivered instead with minimal, if any, removal of tooth structure. In this article, the details about the treatment are described.", "title": "" }, { "docid": "neg:1840081_7", "text": "The rates of different ATP-consuming reactions were measured in concanavalin A-stimulated thymocytes, a model system in which more than 80% of the ATP consumption can be accounted for. There was a clear hierarchy of the responses of different energy-consuming reactions to changes in energy supply: pathways of macromolecule biosynthesis (protein synthesis and RNA/DNA synthesis) were most sensitive to energy supply, followed by sodium cycling and then calcium cycling across the plasma membrane. Mitochondrial proton leak was the least sensitive to energy supply. Control analysis was used to quantify the relative control over ATP production exerted by the individual groups of ATP-consuming reactions. Control was widely shared; no block of reactions had more than one-third of the control. A fuller control analysis showed that there appeared to be a hierarchy of control over the flux through ATP: protein synthesis > RNA/DNA synthesis and substrate oxidation > Na+ cycling and Ca2+ cycling > other ATP consumers and mitochondrial proton leak. Control analysis also indicated that there was significant control over the rates of individual ATP consumers by energy supply. Each ATP consumer had strong control over its own rate but very little control over the rates of the other ATP consumers.", "title": "" }, { "docid": "neg:1840081_8", "text": "Theoretical estimates indicate that graphene thin films can be used as transparent electrodes for thin-film devices such as solar cells and organic light-emitting diodes, with an unmatched combination of sheet resistance and transparency. We demonstrate organic light-emitting diodes with solution-processed graphene thin film transparent conductive anodes. The graphene electrodes were deposited on quartz substrates by spin-coating of an aqueous dispersion of functionalized graphene, followed by a vacuum anneal step to reduce the sheet resistance. Small molecular weight organic materials and a metal cathode were directly deposited on the graphene anodes, resulting in devices with a performance comparable to control devices on indium-tin-oxide transparent anodes. The outcoupling efficiency of devices on graphene and indium-tin-oxide is nearly identical, in agreement with model predictions.", "title": "" }, { "docid": "neg:1840081_9", "text": "Return-Oriented Programming (ROP) is the cornerstone of today’s exploits. Yet, building ROP chains is predominantly a manual task, enjoying limited tool support. Many of the available tools contain bugs, are not tailored to the needs of exploit development in the real world and do not offer practical support to analysts, which is why they are seldom used for any tasks beyond gadget discovery. We present PSHAPE (Practical Support for Half-Automated Program Exploitation), a tool which assists analysts in exploit development. It discovers gadgets, chains gadgets together, and ensures that side effects such as register dereferences do not crash the program. 
Furthermore, we introduce the notion of gadget summaries, a compact representation of the effects a gadget or a chain of gadgets has on memory and registers. These semantic summaries enable analysts to quickly determine the usefulness of long, complex gadgets that use a lot of aliasing or involve memory accesses. Case studies on nine real binaries representing 147 MiB of code show PSHAPE’s usefulness: it automatically builds usable ROP chains for nine out of eleven scenarios.", "title": "" }, { "docid": "neg:1840081_10", "text": "We present an optimistic primary-backup (so-called passive replication) mechanism for highly available Internet services on intercloud platforms. Our proposed method aims at providing Internet services despite the occurrence of a large-scale disaster. To this end, each service in our method creates replicas in different data centers and coordinates them with an optimistic consensus algorithm instead of a majority-based consensus algorithm such as Paxos. Although our method allows temporary inconsistencies among replicas, it eventually converges on the desired state without an interruption in services. In particular, the method tolerates simultaneous failure of the majority of nodes and a partitioning of the network. Moreover, through interservice communications, members of the service groups are autonomously reorganized according to the type of failure and changes in system load. This enables both load balancing and power savings, as well as provisioning for the next disaster. We demonstrate the service availability provided by our approach for simulated failure patterns and its adaptation to changes in workload for load balancing and power savings by experiments with a prototype implementation.", "title": "" }, { "docid": "neg:1840081_11", "text": "OBJECTIVE\nCurrent estimates of the prevalence of depression during pregnancy vary widely. A more precise estimate is required to identify the level of disease burden and develop strategies for managing depressive disorders. The objective of this study was to estimate the prevalence of depression during pregnancy by trimester, as detected by validated screening instruments (ie, Beck Depression Inventory, Edinburgh Postnatal Depression Score) and structured interviews, and to compare the rates among instruments.\n\n\nDATA SOURCES\nObservational studies and surveys were searched in MEDLINE from 1966, CINAHL from 1982, EMBASE from 1980, and HealthSTAR from 1975.\n\n\nMETHODS OF STUDY SELECTION\nA validated study selection/data extraction form detailed acceptance criteria. Numbers and percentages of depressed patients, by weeks of gestation or trimester, were reported.\n\n\nTABULATION, INTEGRATION, AND RESULTS\nTwo reviewers independently extracted data; a third party resolved disagreement. Two raters assessed quality by using a 12-point checklist. A random effects meta-analytic model produced point estimates and 95% confidence intervals (CIs). Heterogeneity was examined with the chi(2) test (no systematic bias detected). Funnel plots and Begg-Mazumdar test were used to assess publication bias (none found). Of 714 articles identified, 21 (19,284 patients) met the study criteria. Quality scores averaged 62%. Prevalence rates (95% CIs) were 7.4% (2.2, 12.6), 12.8% (10.7, 14.8), and 12.0% (7.4, 16.7) for the first, second, and third trimesters, respectively. 
Structured interviews found lower rates than the Beck Depression Inventory but not the Edinburgh Postnatal Depression Scale.\n\n\nCONCLUSION\nRates of depression, especially during the second and third trimesters of pregnancy, are substantial. Clinical and economic studies to estimate maternal and fetal consequences are needed.", "title": "" }, { "docid": "neg:1840081_12", "text": "Neural sequence-to-sequence model has achieved great success in abstractive summarization task. However, due to the limit of input length, most of previous works can only utilize lead sentences as the input to generate the abstractive summarization, which ignores crucial information of the document. To alleviate this problem, we propose a novel approach to improve neural sentence summarization by using extractive summarization, which aims at taking full advantage of the document information as much as possible. Furthermore, we present both of streamline strategy and system combination strategy to achieve the fusion of the contents in different views, which can be easily adapted to other domains. Experimental results on CNN/Daily Mail dataset demonstrate both our proposed strategies can significantly improve the performance of neural sentence summarization.", "title": "" }, { "docid": "neg:1840081_13", "text": "Digital archiving creates a vast store of knowledge that can be accessed only through digital tools. Users of this information will need fluency in the tools of digital access, exploration, visualization, analysis, and collaboration. This paper proposes that this fluency represents a new form of literacy, which must become fundamental for humanities scholars. Tools influence both the creation and the analysis of information. Whether using pen and paper, Microsoft Office, or Web 2.0, scholars base their process, production, and questions on the capabilities their tools offer them. Digital archiving and the interconnectivity of the Web provide new challenges in terms of quantity and quality of information. They create a new medium for presentation as well as a foundation for collaboration that is independent of physical location. Challenges for digital humanities include: • developing new genres for complex information presentation that can be shared, analyzed, and compared; • creating a literacy in information analysis and visualization that has the same rigor and richness as current scholarship; and • expanding classically text-based pedagogy to include simulation, animation, and spatial and geographic representation.", "title": "" }, { "docid": "neg:1840081_14", "text": "Is narcissism related to observer-rated attractiveness? Two views imply that narcissism is unrelated to attractiveness: positive illusions theory and Feingold’s (1992) attractiveness theory (i.e., attractiveness is unrelated to personality in general). In contrast, two other views imply that narcissism is positively related to attractiveness: an evolutionary perspective on narcissism (i.e., selection pressures in shortterm mating contexts shaped the evolution of narcissism, including greater selection for attractiveness in short-term versus long-term mating contexts) and, secondly, the self-regulatory processing model of narcissism (narcissists groom themselves to bolster grandiose self-images). A meta-analysis (N > 1000) reveals a small but reliable positive narcissism–attractiveness correlation that approaches the largest known personality–attractiveness correlations. 
The finding supports the evolutionary and self-regulatory views of narcissism. 2009 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "neg:1840081_15", "text": "This paper discusses the conception and development of a ball-on-plate balancing system based on mechatronic design principles. Realization of the design is achieved with the simultaneous consideration towards constraints like cost, performance, functionality, extendibility, and educational merit. A complete dynamic system investigation for the ball-on-plate system is presented in this paper. This includes hardware design, sensor and actuator selection, system modeling, parameter identification, controller design and experimental testing. The system was designed and built by students as part of the course Mechatronics System Design at Rensselaer. 1. MECHATRONICS AT RENSSELAER Mechatronics is the synergistic combination of mechanical engineering, electronics, control systems and computers. The key element in mechatronics is the integration of these areas through the design process. The essential characteristic of a mechatronics engineer and the key to success in mechatronics is a balance between two sets of skills: modeling / analysis skills and experimentation / hardware implementation skills. Synergism and integration in design set a mechatronic system apart from a traditional, multidisciplinary system. Mechanical engineers are expected to design with synergy and integration and professors must now teach design accordingly. In the Department of Mechanical Engineering, Aeronautical Engineering & Mechanics (ME, AE & M) at Rensselaer there are presently two seniorelective courses in the field of mechatronics, which are also open to graduate students: Mechatronics, offered in the fall semester, and Mechatronic System Design, offered in the spring semester. In both courses, emphasis is placed on a balance between physical understanding and mathematical formalities. The key areas of study covered in both courses are: 1. Mechatronic system design principles 2. Modeling, analysis, and control of dynamic physical systems 3. Selection and interfacing of sensors, actuators, and microcontrollers 4. Analog and digital control electronics 5. Real-time programming for control Mechatronics covers the fundamentals in these areas through integrated lectures and laboratory exercises, while Mechatronic System Design focuses on the application and extension of the fundamentals through a design, build, and test experience. Throughout the coverage, the focus is kept on the role of the key mechatronic areas of study in the overall design process and how these key areas are integrated into a successful mechatronic system design. In mechatronics, balance is paramount. The essential characteristic of a mechatronics engineer and the key to success in mechatronics is a balance between two skill sets: 1. Modeling (physical and mathematical), analysis (closed-form and numerical simulation), and control design (analog and digital) of dynamic physical systems; and 2. Experimental validation of models and analysis (for computer simulation without experimental verification is at best questionable, and at worst useless), and an understanding of the key issues in hardware implementation of designs. Figure 1 shows a diagram of the procedure for a dynamic system investigation which emphasizes this balance. This diagram serves as a guide for the study of the various mechatronic hardware systems in the courses taught at Rensselaer. 
When students perform a complete dynamic system investigation of a mechatronic system, they develop modeling / analysis skills and obtain knowledge of and experience with a wide variety of analog and digital sensors and actuators that will be indispensable as mechatronic design engineers in future years. This fundamental process of dynamic system investigation shall be followed in this paper. 2. INTRODUCTION: BALL ON PLATE SYSTEM The ball-on-plate balancing system, due to its inherent complexity, presents a challenging design problem. In the context of such an unconventional problem, the relevance of mechatronics design methodology becomes apparent. This paper describes the design and development of a ball-on-plate balancing system that was built from an initial design concept by a team of primarily undergraduate students as part of the course Mechatronics System Design at Rensselaer. Other ball-on-plate balancing systems have been designed in the past and some are also commercially available (TecQuipment). The existing systems are, to some extent, bulky and non-portable, and prohibitively expensive for educational purposes. The objective of this design exercise, as is typical of mechatronics design, was to make the ball-on-plate balancing system ‘better, cheaper, quicker’, i.e., to build a compact and affordable ball-on-plate system within a single semester. These objectives were met extremely well by the design that will be presented in this paper. The system described here is unique for its innovativeness in terms of the sensing and actuation schemes, which are the two most critical issues in this design. The first major challenge was to sense the ball position, accurately, reliably, and in a noncumbersome, yet inexpensive way. The various options that were considered are listed below. The relative merits and demerits are also indicated. 1. Some sort of touch sensing scheme: not enough information available, maybe hard to implement. 2. Overhead digital camera with image grabbing and processing software: expensive, requires the use of additional software, requires the use of a super-structure to mount the camera. 3. Resistive grid on the plate (a two dimensional potentiometer): limited resolution, excessive and cumbersome wiring needed. 4. Grid of infrared sensors: inexpensive, limited resolution, cumbersome, excessive wiring needed. Figure 1. Dynamic System Investigation chart. 5. 3D-motion tracking of the ball by means of an infrared-ultrasonic transponder attached to the ball, which exchanges signals with 3 remotely located towers (V-scope by Lipman Electronic Engineering Ltd.): very accurate and clean measurements, requires an additional apparatus altogether, very expensive, special attachment to the ball has to be made Based on the above listed merits and demerits associated with each choice, it was decided to pursue the option of using a touch-screen. It offered the most compact, reliable, and affordable solution. 
This decision was followed by extensive research pertaining to the selection and implementation of an appropriate touch-sensor. The next major challenge was to design an actuation mechanism for the plate. The plate has to rotate about its two planer body axes, to be able to balance the ball. For this design, the following options were considered: 1. Two linear actuators connected to two corners on the base of the plate that is supported by a ball and socket joint in the center, thus providing the two necessary degrees of motion: very expensive 2. Mount the plate on a gimbal ring. One motor turns the gimbal providing one degree of rotation; the other motor turns the plate relative to the ring thus providing a second degree of rotation: a non-symmetric set-up because one motor has to move the entire gimbal along with the plate thus experiencing a much higher load inertia as compared to the other motor. 3. Use of cable and pulley arrangement to turn the plate using two motors (DC or Stepper): good idea, has been used earlier 4. Use a spatial linkage mechanism to turn the plate using two motors (DC or Stepper): This comprises two four-bar parallelogram linkages, each driving one axis of rotation of the plate: an innovative method never tried before, design has to verified. Figure 2 Ball-on-plate System Assembly In this case, the final choice was selected for its uniqueness as a design never tried before. Figure 2 shows an assembly view of the entire system including the spatial linkage mechanism and the touch-screen mounted on the plate. 3. PHYSICAL SYSTEM DESCRIPTION The physical system consists of an acrylic plate, an actuation mechanism for tilting the plate about two axes, a ball position sensor, instrumentation for signal processing, and real-time control software/hardware. The entire system is mounted on an aluminium base plate and is supported by four vertical aluminium beams. The beams provide shape and support to the system and also provide mountings for the two motors. 3.1 Actuation mechanism Figure 3. The spatial linkage mechanism used for actuating the plate. Each motor (O 1 and O2) drives one axis of the plate-rotation angle and is connected to the plate by a spatial linkage mechanism (Figure 3). Referring to the schematic in Figure 5, each side of the spatial linkage mechanism (O 1-P1-A-O and O2-P2-B-O) is a four-bar parallelogram linkage. This ensures that for small motions around the equilibrium, the plate angles (q1 and q2, defined later) are equal to the corresponding motor angles (θm1 and θm2). The plate is connected to ground by means of a U-joint at O. Ball joints (at points P1, P2, A and B) connecting linkages and rods provide enough freedom of motion to ensure that the system does not bind. The motor angles are measured by highresolution optical encoders mounted on the motor shafts. A dual-axis inclinometer is mounted on the plate to measure the plate angles directly. As shall be shown later, for small motions, the motor angles correspond to the plate angles due to the kinematic constraints imposed by the parallelogram linkages. The motors used for driving the l", "title": "" }, { "docid": "neg:1840081_16", "text": "Mobile IP is the current standard for supporting macromobility of mobile hosts. However, in the case of micromobility support, there are several competing proposals. In this paper, we present the design, implementation, and performance evaluation of HAWAII, a domain-based approach for supporting mobility. 
HAWAII uses specialized path setup schemes which install host-based forwarding entries in specific routers to support intra-domain micromobility. These path setup schemes deliver excellent performance by reducing mobility related disruption to user applications. Also, mobile hosts retain their network address while moving within the domain, simplifying quality-of-service (QoS) support. Furthermore, reliability is achieved through maintaining soft-state forwarding entries for the mobile hosts and leveraging fault detection mechanisms built in existing intra-domain routing protocols. HAWAII defaults to using Mobile IP for macromobility, thus providing a comprehensive solution for mobility support in wide-area wireless networks.", "title": "" }, { "docid": "neg:1840081_17", "text": "CONTEXT\nMedical schools are known to be stressful environments for students and hence medical students have been believed to experience greater incidences of depression than others. We evaluated the global prevalence of depression amongst medical students, as well as epidemiological, psychological, educational and social factors in order to identify high-risk groups that may require targeted interventions.\n\n\nMETHODS\nA systematic search was conducted in online databases for cross-sectional studies examining prevalences of depression among medical students. Studies were included only if they had used standardised and validated questionnaires to evaluate the prevalence of depression in a group of medical students. Random-effects models were used to calculate the aggregate prevalence and pooled odds ratios (ORs). Meta-regression was carried out when heterogeneity was high.\n\n\nRESULTS\nFindings for a total of 62 728 medical students and 1845 non-medical students were pooled across 77 studies and examined. Our analyses demonstrated a global prevalence of depression amongst medical students of 28.0% (95% confidence interval [CI] 24.2-32.1%). Female, Year 1, postgraduate and Middle Eastern medical students were more likely to be depressed, but the differences were not statistically significant. By year of study, Year 1 students had the highest rates of depression at 33.5% (95% CI 25.2-43.1%); rates of depression then gradually decreased to reach 20.5% (95% CI 13.2-30.5%) at Year 5. This trend represented a significant decline (B = - 0.324, p = 0.005). There was no significant difference in prevalences of depression between medical and non-medical students. The overall mean frequency of suicide ideation was 5.8% (95% CI 4.0-8.3%), but the mean proportion of depressed medical students who sought treatment was only 12.9% (95% CI 8.1-19.8%).\n\n\nCONCLUSIONS\nDepression affects almost one-third of medical students globally but treatment rates are relatively low. The current findings suggest that medical schools and health authorities should offer early detection and prevention programmes, and interventions for depression amongst medical students before graduation.", "title": "" } ]
1840082
Robust 2D/3D face mask presentation attack detection scheme by exploring multiple features and comparison score level fusion
[ { "docid": "pos:1840082_0", "text": "This paper proposes a method for constructing local image descriptors which efficiently encode texture information and are suitable for histogram based representation of image regions. The method computes a binary code for each pixel by linearly projecting local image patches onto a subspace, whose basis vectors are learnt from natural images via independent component analysis, and by binarizing the coordinates in this basis via thresholding. The length of the binary code string is determined by the number of basis vectors. Image regions can be conveniently represented by histograms of pixels' binary codes. Our method is inspired by other descriptors which produce binary codes, such as local binary pattern and local phase quantization. However, instead of heuristic code constructions, the proposed approach is based on statistics of natural images and this improves its modeling capacity. The experimental results show that our method improves accuracy in texture recognition tasks compared to the state-of-the-art.", "title": "" }, { "docid": "pos:1840082_1", "text": "As a crucial security problem, anti-spoofing in biometrics, and particularly for the face modality, has achieved great progress in the recent years. Still, new threats arrive in the form of better, more realistic and more sophisticated spoofing attacks. The objective of the 2nd Competition on Counter Measures to 2D Face Spoofing Attacks is to challenge researchers to create counter measures effectively detecting a variety of attacks. The submitted propositions are evaluated on the Replay-Attack database and the achieved results are presented in this paper.", "title": "" } ]
[ { "docid": "neg:1840082_0", "text": "Twitter - a microblogging service that enables users to post messages (\"tweets\") of up to 140 characters - supports a variety of communicative practices; participants use Twitter to converse with individuals, groups, and the public at large, so when conversations emerge, they are often experienced by broader audiences than just the interlocutors. This paper examines the practice of retweeting as a way by which participants can be \"in a conversation.\" While retweeting has become a convention inside Twitter, participants retweet using different styles and for diverse reasons. We highlight how authorship, attribution, and communicative fidelity are negotiated in diverse ways. Using a series of case studies and empirical data, this paper maps out retweeting as a conversational practice.", "title": "" }, { "docid": "neg:1840082_1", "text": "We introduce a light-weight, power efficient, and general purpose convolutional neural network, ESPNetv2 , for modeling visual and sequential data. Our network uses group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters. The performance of our network is evaluated on three different tasks: (1) object classification, (2) semantic segmentation, and (3) language modeling. Experiments on these tasks, including image classification on the ImageNet and language modeling on the PenTree bank dataset, demonstrate the superior performance of our method over the state-ofthe-art methods. Our network has better generalization properties than ShuffleNetv2 when tested on the MSCOCO multi-object classification task and the Cityscapes urban scene semantic segmentation task. Our experiments show that ESPNetv2 is much more power efficient than existing state-of-the-art efficient methods including ShuffleNets and MobileNets. Our code is open-source and available at https://github.com/sacmehta/ESPNetv2.", "title": "" }, { "docid": "neg:1840082_2", "text": "An algebra for geometric reasoning is developed that is amenable to software implementation. The features of the algebra are chosen to support geometric programming of the variety found in computer graphics and computer aided geometric design applications. The implementation of the algebra in C++ is described, and several examples illustrating the use of this software are given.", "title": "" }, { "docid": "neg:1840082_3", "text": "The term ‘vulnerability’ is used in many different ways by various scholarly communities. The resulting disagreement about the appropriate definition of vulnerability is a frequent cause for misunderstanding in interdisciplinary research on climate change and a challenge for attempts to develop formal models of vulnerability. Earlier attempts at reconciling the various conceptualizations of vulnerability were, at best, partly successful. This paper presents a generally applicable conceptual framework of vulnerability that combines a nomenclature of vulnerable situations and a terminology of vulnerability concepts based on the distinction of four fundamental groups of vulnerability factors. This conceptual framework is applied to characterize the vulnerability concepts employed by the main schools of vulnerability research and to review earlier attempts at classifying vulnerability concepts. None of these onedimensional classification schemes reflects the diversity of vulnerability concepts identified in this review. 
The wide range of policy responses available to address the risks from global climate change suggests that climate impact, vulnerability, and adaptation assessments will continue to apply a variety of vulnerability concepts. The framework presented here provides the much-needed conceptual clarity and facilitates bridging the various approaches to researching vulnerability to climate change. © 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840082_4", "text": "Machine learning requires access to all the data used for training. Recently, Google Research proposed Federated Learning as an alternative, where the training data is distributed over a federation of clients that each only access their own training data; the partially trained model is updated in a distributed fashion to maintain a situation where the data from all participating clients remains unknown. In this research we construct different distributions of the DMOZ dataset over the clients in the network and compare the resulting performance of Federated Averaging when learning a classifier. We find that the difference in spread of topics for each client has a strong correlation with the performance of the Federated Averaging algorithm.", "title": "" }, { "docid": "neg:1840082_5", "text": "A full-bridge converter which employs a coupled inductor to achieve zero-voltage switching of the primary switches in the entire line and load range is described. Because the coupled inductor does not appear as a series inductance in the load current path, it does not cause a loss of duty cycle or severe voltage ringing across the output rectifier. The operation and performance of the proposed converter is verified on a 670-W prototype.", "title": "" }, { "docid": "neg:1840082_6", "text": "Self-regulatory strategies of goal setting and goal striving are analyzed in three experiments. Experiment 1 uses fantasy realization theory (Oettingen, in: J. Brandstätter, R.M. Lerner (Eds.), Action and Self Development: Theory and Research through the Life Span, Sage Publications Inc, Thousand Oaks, CA, 1999, pp. 315-342) to analyze the self-regulatory processes of turning free fantasies about a desired future into binding goals. School children 8-12 years of age who had to mentally elaborate a desired academic future as well as present reality standing in its way, formed stronger goal commitments than participants solely indulging in the desired future or merely dwelling on present reality (Experiment 1). Effective implementation of set goals is addressed in the second and third experiments (Gollwitzer, Am. Psychol. 54 (1999) 493-503). Adolescents who had to furnish a set educational goal with relevant implementation intentions (specifying where, when, and how they would start goal pursuit) were comparatively more successful in meeting the goal (Experiment 2). Linking anticipated situations with goal-directed behaviors (i.e., if-then plans) rather than the mere thinking about good opportunities to act makes implementation intentions facilitate action initiation (Experiment 3). ©2001 Elsevier Science Ltd. All rights reserved. Successful goal attainment demands completing two different tasks. People have to first turn their desires into binding goals, and second they have to attain the set goal. Both tasks benefit from self-regulatory strategies. 
In this article we describe a series of experiments with children, adolescents, and young adults that investigate self-regulatory processes facilitating effective goal setting and successful goal striving. The experimental studies investigate (1) different routes to goal setting depending on how", "title": "" }, { "docid": "neg:1840082_7", "text": "How we teach and learn is undergoing a revolution, due to changes in technology and connectivity. Education may be one of the best application areas for advanced NLP techniques, and NLP researchers have much to contribute to this problem, especially in the areas of learning to write, mastery learning, and peer learning. In this paper I consider what happens when we convert natural language processors into natural language coaches. 1 Why Should You Care, NLP Researcher? There is a revolution in learning underway. Students are taking Massive Open Online Courses as well as online tutorials and paid online courses. Technology and connectivity makes it possible for students to learn from anywhere in the world, at any time, to fit their schedules. And in today’s knowledge-based economy, going to school only in one’s early years is no longer enough; in future most people are going to need continuous, lifelong education. Students are changing too — they expect to interact with information and technology. Fortunately, pedagogical research shows significant benefits of active learning over passive methods. The modern view of teaching means students work actively in class, talk with peers, and are coached more than graded by their instructors. In this new world of education, there is a great need for NLP research to step in and help. I hope in this paper to excite colleagues about the possibilities and suggest a few new ways of looking at them. I do not attempt to cover the field of language and learning comprehensively, nor do I claim there is no work in the field. In fact there is quite a bit, such as a recent special issue on language learning resources (Sharoff et al., 2014), the long running ACL workshops on Building Educational Applications using NLP (Tetreault et al., 2015), and a recent shared task competition on grammatical error detection for second language learners (Ng et al., 2014). But I hope I am casting a few interesting thoughts in this direction for those colleagues who are not focused on this particular topic.", "title": "" }, { "docid": "neg:1840082_8", "text": "Light scattered from multiple surfaces can be used to retrieve information of hidden environments. However, full three-dimensional retrieval of an object hidden from view by a wall has only been achieved with scanning systems and requires intensive computational processing of the retrieved data. Here we use a non-scanning, single-photon single-pixel detector in combination with a deep convolutional artificial neural network: this allows us to locate the position and to also simultaneously provide the actual identity of a hidden person, chosen from a database of people (N = 3). Artificial neural networks applied to specific computational imaging problems can therefore enable novel imaging capabilities with hugely simplified hardware and processing times.", "title": "" }, { "docid": "neg:1840082_9", "text": "Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. 
Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition.", "title": "" }, { "docid": "neg:1840082_10", "text": "While recommendation approaches exploiting different input sources have started to proliferate in the literature, an explicit study of the effect of the combination of heterogeneous inputs is still missing. On the other hand, in this context there are sides to recommendation quality requiring further characterisation and methodological research –a gap that is acknowledged in the field. We present a comparative study on the influence that different types of information available in social systems have on item recommendation. Aiming to identify which sources of user interest evidence –tags, social contacts, and user-item interaction data– are more effective to achieve useful recommendations, and in what aspect, we evaluate a number of content-based, collaborative filtering, and social recommenders on three datasets obtained from Delicious, Last.fm, and MovieLens. Aiming to determine whether and how combining such information sources may enhance over individual recommendation approaches, we extend the common accuracy-oriented evaluation practice with various metrics to measure further recommendation quality dimensions, namely coverage, diversity, novelty, overlap, and relative diversity between ranked item recommendations. We report empiric observations showing that exploiting tagging information by content-based recommenders provides high coverage and novelty, and combining social networking and collaborative filtering information by hybrid recommenders results in high accuracy and diversity. This, along with the fact that recommendation lists from the evaluated approaches had low overlap and relative diversity values between them, gives insights that meta-hybrid recommenders combining the above strategies may provide valuable, balanced item suggestions in terms of performance and non-performance metrics.", "title": "" }, { "docid": "neg:1840082_11", "text": "In the cognitive neuroscience literature on the distinction between categorical and coordinate spatial relations, it has often been observed that categorical spatial relations are referred to linguistically by words like English prepositions, many of which specify binary oppositions-e.g., above/below, left/right, on/off, in/out. However, the actual semantic content of English prepositions, and of comparable word classes in other languages, has not been carefully considered. This paper has three aims. The first and most important aim is to inform cognitive neuroscientists interested in spatial representation about relevant research on the kinds of categorical spatial relations that are encoded in the 6000+ languages of the world. Emphasis is placed on cross-linguistic similarities and differences involving deictic relations, topological relations, and projective relations, the last of which are organized around three distinct frames of reference--intrinsic, relative, and absolute. 
The second aim is to review what is currently known about the neuroanatomical correlates of linguistically encoded categorical spatial relations, with special focus on the left supramarginal and angular gyri, and to suggest ways in which cross-linguistic data can help guide future research in this area of inquiry. The third aim is to explore the interface between language and other mental systems, specifically by summarizing studies which suggest that although linguistic and perceptual/cognitive representations of space are at least partially distinct, language nevertheless has the power to bring about not only modifications of perceptual sensitivities but also adjustments of cognitive styles.", "title": "" }, { "docid": "neg:1840082_12", "text": "We present a method for reinforcement learning of closely related skills that are parameterized via a skill embedding space. We learn such skills by taking advantage of latent variables and exploiting a connection between reinforcement learning and variational inference. The main contribution of our work is an entropyregularized policy gradient formulation for hierarchical policies, and an associated, data-efficient and robust off-policy gradient algorithm based on stochastic value gradients. We demonstrate the effectiveness of our method on several simulated robotic manipulation tasks. We find that our method allows for discovery of multiple solutions and is capable of learning the minimum number of distinct skills that are necessary to solve a given set of tasks. In addition, our results indicate that the hereby proposed technique can interpolate and/or sequence previously learned skills in order to accomplish more complex tasks, even in the presence of sparse rewards.", "title": "" }, { "docid": "neg:1840082_13", "text": "This paper reviews the first challenge on efficient perceptual image enhancement with the focus on deploying deep learning models on smartphones. The challenge consisted of two tracks. In the first one, participants were solving the classical image super-resolution problem with a bicubic downscaling factor of 4. The second track was aimed at real-world photo enhancement, and the goal was to map low-quality photos from the iPhone 3GS device to the same photos captured with a DSLR camera. The target metric used in this challenge combined the runtime, PSNR scores and solutions’ perceptual results measured in the user study. To ensure the efficiency of the submitted models, we additionally measured their runtime and memory requirements on Android smartphones. The proposed solutions significantly improved baseline results defining the state-of-the-art for image enhancement on smartphones.", "title": "" }, { "docid": "neg:1840082_14", "text": "Quantum scrambling is the dispersal of local information into many-body quantum entanglements and correlations distributed throughout an entire system. This concept accompanies the dynamics of thermalization in closed quantum systems, and has recently emerged as a powerful tool for characterizing chaos in black holes1–4. However, the direct experimental measurement of quantum scrambling is difficult, owing to the exponential complexity of ergodic many-body entangled states. One way to characterize quantum scrambling is to measure an out-of-time-ordered correlation function (OTOC); however, because scrambling leads to their decay, OTOCs do not generally discriminate between quantum scrambling and ordinary decoherence. 
Here we implement a quantum circuit that provides a positive test for the scrambling features of a given unitary process5,6. This approach conditionally teleports a quantum state through the circuit, providing an unambiguous test for whether scrambling has occurred, while simultaneously measuring an OTOC. We engineer quantum scrambling processes through a tunable three-qubit unitary operation as part of a seven-qubit circuit on an ion trap quantum computer. Measured teleportation fidelities are typically about 80 per cent, and enable us to experimentally bound the scrambling-induced decay of the corresponding OTOC measurement. A quantum circuit in an ion-trap quantum computer provides a positive test for the scrambling features of a given unitary process.", "title": "" }, { "docid": "neg:1840082_15", "text": "Propelled by a fast evolving landscape of techniques and datasets, data science is growing rapidly. Against this background, topological data analysis (TDA) has carved itself a niche for the analysis of datasets that present complex interactions and rich structures. Its distinctive feature, topology, allows TDA to detect, quantify and compare the mesoscopic structures of data, while also providing a language able to encode interactions beyond networks. Here we briefly present the TDA paradigm and some applications, in order to highlight its relevance to the data science community.", "title": "" }, { "docid": "neg:1840082_16", "text": "PURPOSE\nTo review applications of Ajzen's theory of planned behavior in the domain of health and to verify the efficiency of the theory to explain and predict health-related behaviors.\n\n\nMETHODS\nMost material has been drawn from Current Contents (Social and Behavioral Sciences and Clinical Medicine) from 1985 to date, together with all peer-reviewed articles cited in the publications thus identified.\n\n\nFINDINGS\nThe results indicated that the theory performs very well for the explanation of intention; an averaged R2 of .41 was observed. Attitude toward the action and perceived behavioral control were most often the significant variables responsible for this explained variation in intention. The prediction of behavior yielded an averaged R2 of .34. Intention remained the most important predictor, but in half of the studies reviewed perceived behavioral control significantly added to the prediction.\n\n\nCONCLUSIONS\nThe efficiency of the model seems to be quite good for explaining intention, perceived behavioral control being as important as attitude across health-related behavior categories. The efficiency of the theory, however, varies between health-related behavior categories.", "title": "" }, { "docid": "neg:1840082_17", "text": "Existing measures of peer pressure and conformity may not be suitable for screening large numbers of adolescents efficiently, and few studies have differentiated peer pressure from theoretically related constructs, such as conformity or wanting to be popular. We developed and validated short measures of peer pressure, peer conformity, and popularity in a sample ( n= 148) of adolescent boys and girls in grades 11 to 13. Results showed that all measures constructed for the study were internally consistent. Although all measures of peer pressure, conformity, and popularity were intercorrelated, peer pressure and peer conformity were stronger predictors of risk behaviors than measures assessing popularity, general conformity, or dysphoria. 
Despite a simplified scoring format, peer conformity vignettes were equal to if not better than the peer pressure measures in predicting risk behavior. Findings suggest that peer pressure and peer conformity are potentially greater risk factors than a need to be popular, and that both peer pressure and peer conformity can be measured with short scales suitable for large-scale testing.", "title": "" }, { "docid": "neg:1840082_18", "text": "The complete nucleotide sequence of tomato infectious chlorosis virus (TICV) was determined and compared with those of other members of the genus Crinivirus. RNA 1 is 8,271 nucleotides long with three open reading frames and encodes proteins involved in replication. RNA 2 is 7,913 nucleotides long and encodes eight proteins common within the genus Crinivirus that are involved in genome protection, movement and other functions yet to be identified. Similarity between TICV and other criniviruses varies throughout the genome but TICV is related more closely to lettuce infectious yellows virus than to any other crinivirus, thus identifying a third group within the genus.", "title": "" } ]
1840083
Deep automatic license plate recognition system
[ { "docid": "pos:1840083_0", "text": "Automatic license plate recognition (ALPR) is one of the most important aspects of applying computer techniques towards intelligent transportation systems. In order to recognize a license plate efficiently, however, the location of the license plate, in most cases, must be detected in the first place. Due to this reason, detecting the accurate location of a license plate from a vehicle image is considered to be the most crucial step of an ALPR system, which greatly affects the recognition rate and speed of the whole system. In this paper, a region-based license plate detection method is proposed. In this method, firstly, mean shift is used to filter and segment a color vehicle image in order to get candidate regions. These candidate regions are then analyzed and classified in order to decide whether a candidate region contains a license plate. Unlike other existing license plate detection methods, the proposed method focuses on regions, which demonstrates to be more robust to interference characters and more accurate when compared with other methods.", "title": "" }, { "docid": "pos:1840083_1", "text": "In this work we present a framework for the recognition of natural scene text. Our framework does not require any human-labelled data, and performs word recognition on the whole image holistically, departing from the character based recognition systems of the past. The deep neural network models at the centre of this framework are trained solely on data produced by a synthetic text generation engine – synthetic data that is highly realistic and sufficient to replace real data, giving us infinite amounts of training data. This excess of data exposes new possibilities for word recognition models, and here we consider three models, each one “reading” words in a different way: via 90k-way dictionary encoding, character sequence encoding, and bag-of-N-grams encoding. In the scenarios of language based and completely unconstrained text recognition we greatly improve upon state-of-the-art performance on standard datasets, using our fast, simple machinery and requiring zero data-acquisition costs.", "title": "" }, { "docid": "pos:1840083_2", "text": "A simple approach to learning invariances in image classification consists in augmenting the training set with transformed versions of the original images. However, given a large set of possible transformations, selecting a compact subset is challenging. Indeed, all transformations are not equally informative and adding uninformative transformations increases training time with no gain in accuracy. We propose a principled algorithm -- Image Transformation Pursuit (ITP) -- for the automatic selection of a compact set of transformations. ITP works in a greedy fashion, by selecting at each iteration the one that yields the highest accuracy gain. ITP also allows to efficiently explore complex transformations, that combine basic transformations. We report results on two public benchmarks: the CUB dataset of bird images and the ImageNet 2010 challenge. Using Fisher Vector representations, we achieve an improvement from 28.2% to 45.2% in top-1 accuracy on CUB, and an improvement from 70.1% to 74.9% in top-5 accuracy on ImageNet. We also show significant improvements for deep convnet features: from 47.3% to 55.4% on CUB and from 77.9% to 81.4% on ImageNet.", "title": "" } ]
[ { "docid": "neg:1840083_0", "text": "This paper describes a new method for recognizing overtraced strokes to 2D geometric primitives, which are further interpreted as 2D line drawings. This method can support rapid grouping and fitting of overtraced polylines or conic curves based on the classified characteristics of each stroke during its preprocessing stage. The orientation and its endpoints of a classified stroke are used in the stroke grouping process. The grouped strokes are then fitted with 2D geometry. This method can deal with overtraced sketch strokes in both solid and dash linestyles, fit grouped polylines as a whole polyline and simply fit conic strokes without computing the direction of a stroke. It avoids losing joint information due to segmentation of a polyline into line-segments. The proposed method has been tested with our freehand sketch recognition system (FSR), which is robust and easier to use by removing some limitations embedded with most existing sketching systems which only accept non-overtraced stroke drawing. The test results showed that the proposed method can support freehand sketching based conceptual design with no limitations on drawing sequence, directions and overtraced cases while achieving a satisfactory interpretation rate.", "title": "" }, { "docid": "neg:1840083_1", "text": "We derive upper and lower limits on the majority vote accuracy with respect to individual accuracy p, the number of classifiers in the pool (L), and the pairwise dependence between classifiers, measured by Yule’s Q statistic. Independence between individual classifiers is typically viewed as an asset in classifier fusion. We show that the majority vote with dependent classifiers can potentially offer a dramatic improvement both over independent classifiers and over an individual classifier with accuracy p. A functional relationship between the limits and the pairwise dependence Q is derived. Two patterns of the joint distribution for classifier outputs (correct/incorrect) are identified to derive the limits: the pattern of success and the pattern of failure. The results support the intuition that negative pairwise dependence is beneficial although not straightforwardly related to the accuracy. The pattern of success showed that for the highest improvement over p, all pairs of classifiers in the pool should have the same negative dependence.", "title": "" }, { "docid": "neg:1840083_2", "text": "In order to advance action generation and creation in robots beyond simple learned schemas we need computational tools that allow us to automatically interpret and represent human actions. This paper presents a system that learns manipulation action plans by processing unconstrained videos from the World Wide Web. Its goal is to robustly generate the sequence of atomic actions of seen longer actions in video in order to acquire knowledge for robots. The lower level of the system consists of two convolutional neural network (CNN) based recognition modules, one for classifying the hand grasp type and the other for object recognition. The higher level is a probabilistic manipulation action grammar based parsing module that aims at generating visual sentences for robot manipulation. 
Experiments conducted on a publicly available unconstrained video dataset show that the system is able to learn manipulation actions by “watching” unconstrained videos with high accuracy.", "title": "" }, { "docid": "neg:1840083_3", "text": "This study presents the first results of an analysis primarily based on semi-structured interviews with government officials and managers who are responsible for smart city initiatives in four North American cities—Philadelphia and Seattle in the United States, Quebec City in Canada, and Mexico City in Mexico. With the reference to the Smart City Initiatives Framework that we suggested in our previous research, this study aims to build a new understanding of smart city initiatives. Main findings are categorized into eight aspects including technology, management and organization, policy context, governance, people and communities, economy, built infrastructure, and natural environ-", "title": "" }, { "docid": "neg:1840083_4", "text": "Sound is a medium that conveys functional and emotional information in a form of multilayered streams. With the use of such advantage, robot sound design can open a way for being more efficient communication in human-robot interaction. As the first step of research, we examined how individuals perceived the functional and emotional intention of robot sounds and whether the perceived information from sound is associated with their previous experience with science fiction movies. The sound clips were selected based on the context of the movie scene (i.e., Wall-E, R2-D2, BB8, Transformer) and classified as functional (i.e., platform, monitoring, alerting, feedback) and emotional (i.e., positive, neutral, negative). A total of 12 participants were asked to identify the perceived properties for each of the 30 items. We found that the perceived emotional and functional messages varied from those originally intended and differed by previous experience.", "title": "" }, { "docid": "neg:1840083_5", "text": "Skip connections made the training of very deep neural networks possible and have become an indispendable component in a variety of neural architectures. A satisfactory explanation for their success remains elusive. Here, we present an explanation for the benefits of skip connections in training very deep neural networks. We argue that skip connections help break symmetries inherent in the loss landscapes of deep networks, leading to drastically simplified landscapes. In particular, skip connections between adjacent layers in a multilayer network break the permutation symmetry of nodes in a given layer, and the recently proposed DenseNet architecture, where each layer projects skip connections to every layer above it, also breaks the rescaling symmetry of connectivity matrices between different layers. This hypothesis is supported by evidence from a toy model with binary weights and from experiments with fully-connected networks suggesting (i) that skip connections do not necessarily improve training unless they help break symmetries and (ii) that alternative ways of breaking the symmetries also lead to significant performance improvements in training deep networks, hence there is nothing special about skip connections in this respect. 
We find, however, that skip connections confer additional benefits over and above symmetry-breaking, such as the ability to deal effectively with the vanishing gradients problem.", "title": "" }, { "docid": "neg:1840083_6", "text": "Cryptocurrency, and its underlying technologies, has been gaining popularity for transaction management beyond financial transactions. Transaction information is maintained in the blockchain, which can be used to audit the integrity of the transaction. The focus on this paper is the potential availability of block-chain technology of other transactional uses. Block-chain is one of the most stable open ledgers that preserves transaction information, and is difficult to forge. Since the information stored in block-chain is not related to personally identifiable information, it has the characteristics of anonymity. Also, the block-chain allows for transparent transaction verification since all information in the block-chain is open to the public. These characteristics are the same as the requirements for a voting system. That is, strong robustness, anonymity, and transparency. In this paper, we propose an electronic voting system as an application of blockchain, and describe block-chain based voting at a national level through examples.", "title": "" }, { "docid": "neg:1840083_7", "text": "Knowledge bases of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge bases are typically incomplete, it is useful to be able to perform link prediction, i.e., predict whether a relationship not in the knowledge base is likely to be true. This paper combines insights from several previous link prediction models into a new embedding model STransE that represents each entity as a lowdimensional vector, and each relation by two matrices and a translation vector. STransE is a simple combination of the SE and TransE models, but it obtains better link prediction performance on two benchmark datasets than previous embedding models. Thus, STransE can serve as a new baseline for the more complex models in the link prediction task.", "title": "" }, { "docid": "neg:1840083_8", "text": "Following Ebbinghaus (1885/1964), a number of procedures have been devised to measure short-term memory using immediate serial recall: digit span, Knox's (1913) cube imitation test and Corsi's (1972) blocks task. Understanding the cognitive processes involved in these tasks was obstructed initially by the lack of a coherent concept of short-term memory and later by the mistaken assumption that short-term and long-term memory reflected distinct processes as well as different kinds of experimental task. Despite its apparent conceptual simplicity, a variety of cognitive mechanisms are responsible for short-term memory, and contemporary theories of working memory have helped to clarify these. Contrary to the earliest writings on the subject, measures of short-term memory do not provide a simple measure of mental capacity, but they do provide a way of understanding some of the key mechanisms underlying human cognition.", "title": "" }, { "docid": "neg:1840083_9", "text": "Nowadays, the number of layers and of neurons in each layer of a deep network are typically set manually. While very deep and wide networks have proven effective in general, they come at a high memory and computation cost, thus making them impractical for constrained platforms. 
These networks, however, are known to have many redundant parameters, and could thus, in principle, be replaced by more compact architectures. In this paper, we introduce an approach to automatically determining the number of neurons in each layer of a deep network during learning. To this end, we propose to make use of a group sparsity regularizer on the parameters of the network, where each group is defined to act on a single neuron. Starting from an overcomplete network, we show that our approach can reduce the number of parameters by up to 80% while retaining or even improving the network accuracy.", "title": "" }, { "docid": "neg:1840083_10", "text": "This paper presents a novel approach for search engine results clustering that relies on the semantics of the retrieved documents rather than the terms in those documents. The proposed approach takes into consideration both lexical and semantics similarities among documents and applies activation spreading technique in order to generate semantically meaningful clusters. This approach allows documents that are semantically similar to be clustered together rather than clustering documents based on similar terms. A prototype is implemented and several experiments are conducted to test the proposed solution. The result of the experiment confirmed that the proposed solution achieves remarkable results in terms of precision.", "title": "" }, { "docid": "neg:1840083_11", "text": "A novel method for simultaneous keyphrase extraction and generic text summarization is proposed by modeling text documents as weighted undirected and weighted bipartite graphs. Spectral graph clustering algorithms are used for partitioning sentences of the documents into topical groups with sentence link priors being exploited to enhance clustering quality. Within each topical group, saliency scores for keyphrases and sentences are generated based on a mutual reinforcement principle. The keyphrases and sentences are then ranked according to their saliency scores and selected for inclusion in the top keyphrase list and summaries of the document. The idea of building a hierarchy of summaries for documents capturing different levels of granularity is also briefly discussed. Our method is illustrated using several examples from news articles, news broadcast transcripts and web documents.", "title": "" }, { "docid": "neg:1840083_12", "text": "The sharing economy is a new online community that has important implications for offline behavior. This study evaluates whether engagement in the sharing economy is associated with an actor’s aversion to risk. Using a web-based survey and a field experiment, we apply an adaptation of Holt and Laury’s (2002) risk lottery game to a representative sample of sharing economy participants. We find that frequency of activity in the sharing economy predicts risk aversion, but only in interaction with satisfaction. While greater satisfaction with sharing economy websites is associated with a decrease in risk aversion, greater frequency of usage is associated with greater risk aversion. This analysis shows the limitations of a static perspective on how risk attitudes relate to participation in the sharing economy.", "title": "" }, { "docid": "neg:1840083_13", "text": "The conceptualization of the notion of a system in systems engineering, as exemplified in, for instance, the engineering standard IEEE Std 1220-1998, is problematic when applied to the design of socio-technical systems. 
This is argued using Intelligent Transportation Systems as an example. A preliminary conceptualization of socio-technical systems is introduced which includes technical and social elements and actors, as well as four kinds of relations. Current systems engineering practice incorporates technical elements and actors in the system but sees social elements exclusively as contextual. When designing socio-technical systems, however, social elements and the corresponding relations must also be considered as belonging to the system.", "title": "" }, { "docid": "neg:1840083_14", "text": "We address offensive tactic recognition in broadcast basketball videos. As a crucial component towards basketball video content understanding, tactic recognition is quite challenging because it involves multiple independent players, each of which has respective spatial and temporal variations. Motivated by the observation that most intra-class variations are caused by non-key players, we present an approach that integrates key player detection into tactic recognition. To save the annotation cost, our approach can work on training data with only video-level tactic annotation, instead of key players labeling. Specifically, this task is formulated as an MIL (multiple instance learning) problem where a video is treated as a bag with its instances corresponding to subsets of the five players. We also propose a representation to encode the spatio-temporal interaction among multiple players. It turns out that our approach not only effectively recognizes the tactics but also precisely detects the key players.", "title": "" }, { "docid": "neg:1840083_15", "text": "Energy consumption management has become an essential concept in cloud computing. In this paper, we propose a new power aware load balancing, named Bee-MMT (artificial bee colony algorithm-Minimal migration time), to decline power consumption in cloud computing; as a result of this decline, CO2 production and operational cost will be decreased. According to this purpose, an algorithm based on artificial bee colony algorithm (ABC) has been proposed to detect over utilized hosts and then migrate one or more VMs from them to reduce their utilization; following that we detect underutilized hosts and, if it is possible, migrate all VMs which have been allocated to these hosts and then switch them to the sleep mode. However, there is a trade-off between energy consumption and providing high quality of service to the customers. Consequently, we consider SLA Violation as a metric to qualify the QOS that require to satisfy the customers. The results show that the proposed method can achieve greater power consumption saving than other methods like LR-MMT (local regression-Minimal migration time), DVFS (Dynamic Voltage Frequency Scaling), IQR-MMT (Interquartile Range-MMT), MAD-MMT (Median Absolute Deviation) and non-power aware.", "title": "" }, { "docid": "neg:1840083_16", "text": "Autonomous walking bipedal machines, possibly useful for rehabilitation and entertainment purposes, need a high energy efficiency, offered by the concept of ‘Passive Dynamic Walking’ (exploitation of the natural dynamics of the robot). 2D passive dynamic bipeds have been shown to be inherently stable, but in the third dimension two problematic degrees of freedom are introduced: yaw and roll. 
We propose a design for a 3D biped with a pelvic body as a passive dynamic compensator, which will compensate for the undesired yaw and roll motion, and allow the rest of the robot to move as if it were a 2D machine. To test our design, we perform numerical simulations on a multibody model of the robot. With limit cycle analysis we calculate the stability of the robot when walking at its natural speed. The simulation shows that the compensator, indeed, effectively compensates for both the yaw and the roll motion, and that the walker is stable.", "title": "" }, { "docid": "neg:1840083_17", "text": "We started investigating the collection of HTML tables on the Web and developed the WebTables system a few years ago [4]. Since then, our work has been motivated by applying WebTables in a broad set of applications at Google, resulting in several product launches. In this paper, we describe the challenges faced, lessons learned, and new insights that we gained from our efforts. The main challenges we faced in our efforts were (1) identifying tables that are likely to contain high-quality data (as opposed to tables used for navigation, layout, or formatting), and (2) recovering the semantics of these tables or signals that hint at their semantics. The result is a semantically enriched table corpus that we used to develop several services. First, we created a search engine for structured data whose index includes over a hundred million HTML tables. Second, we enabled users of Google Docs (through its Research Panel) to find relevant data tables and to insert such data into their documents as needed. Most recently, we brought WebTables to a much broader audience by using the table corpus to provide richer tabular snippets for fact-seeking web search queries on Google.com.", "title": "" }, { "docid": "neg:1840083_18", "text": "Ultrasound imaging makes use of backscattering of waves during their interaction with scatterers present in biological tissues. Simulation of synthetic ultrasound images is a challenging problem on account of inability to accurately model various factors of which some include intra-/inter scanline interference, transducer to surface coupling, artifacts on transducer elements, inhomogeneous shadowing and nonlinear attenuation. Current approaches typically solve wave space equations making them computationally expensive and slow to operate. We propose a generative adversarial network (GAN) inspired approach for fast simulation of patho-realistic ultrasound images. We apply the framework to intravascular ultrasound (IVUS) simulation. A stage 0 simulation performed using pseudo B-mode ultrasound image simulator yields speckle mapping of a digitally defined phantom. The stage I GAN subsequently refines them to preserve tissue specific speckle intensities. The stage II GAN further refines them to generate high resolution images with patho-realistic speckle profiles. We evaluate patho-realism of simulated images with a visual Turing test indicating an equivocal confusion in discriminating simulated from real. We also quantify the shift in tissue specific intensity distributions of the real and simulated images to prove their similarity.", "title": "" }, { "docid": "neg:1840083_19", "text": "• MacArthur Fellowship, 2010 • Guggenheim Fellowship, 2010 • Li Ka Shing Foundation Women in Science Distinguished Lectu re Series Award, 2010 • MIT Technology Review TR-35 Award (recognizing the world’s top innovators under the age of 35), 2009. • Okawa Foundation Research Award, 2008. 
• Sloan Research Fellow, 2007. • Best Paper Award, 2007 USENIX Security Symposium. • George Tallman Ladd Research Award, Carnegie Mellon University, 2007. • Highest ranked paper, 2006 IEEE Security and Privacy Symposium; paper invited to a special issue of the IEEE Transactions on Dependable and Secure Computing. • NSF CAREER Award on “Exterminating Large Scale Internet Attacks”, 2005. • IBM Faculty Award, 2005. • Highest ranked paper, 1999 IEEE Computer Security Foundation Workshop; paper invited to a special issue of Journal of Computer Security.", "title": "" } ]
1840084
A Discourse-Driven Content Model for Summarising Scientific Articles Evaluated in a Complex Question Answering Task
[ { "docid": "pos:1840084_0", "text": "Identifying background (context) information in scientific articles can help scholars understand major contributions in their research area more easily. In this paper, we propose a general framework based on probabilistic inference to extract such context information from scientific papers. We model the sentences in an article and their lexical similarities as a Markov Random Field tuned to detect the patterns that context data create, and employ a Belief Propagation mechanism to detect likely context sentences. We also address the problem of generating surveys of scientific papers. Our experiments show greater pyramid scores for surveys generated using such context information rather than citation sentences alone.", "title": "" } ]
[ { "docid": "neg:1840084_0", "text": "Stochastic discrimination is a general methodology for constructing classifiers appropriate for pattern recognition. It is based on combining arbitrary numbers of very weak components, which are usually generated by some pseudorandom process, and it has the property that the very complex and accurate classifiers produced in this way retain the ability, characteristic of their weak component pieces, to generalize to new data. In fact, it is often observed, in practice, that classifier performance on test sets continues to rise as more weak components are added, even after performance on training sets seems to have reached a maximum. This is predicted by the underlying theory, for even though the formal error rate on the training set may have reached a minimum, more sophisticated measures intrinsic to this method indicate that classifier performance on both training and test sets continues to improve as complexity increases. In this paper, we begin with a review of the method of stochastic discrimination as applied to pattern recognition. Through a progression of examples keyed to various theoretical issues, we discuss considerations involved with its algorithmic implementation. We then take such an algorithmic implementation and compare its performance, on a large set of standardized pattern recognition problems from the University of California Irvine, and Statlog collections, to many other techniques reported on in the literature, including boosting and bagging. In doing these studies, we compare our results to those reported in the literature by the various authors for the other methods, using the same data and study paradigms used by them. Included in this paper is an outline of the underlying mathematical theory of stochastic discrimination and a remark concerning boosting, which provides a theoretical justification for properties of that method observed in practice, including its ability to generalize. Index Terms—Pattern recognition, classification algorithms, stochastic discrimination, SD.", "title": "" }, { "docid": "neg:1840084_1", "text": "News topics, which are constructed from news stories using the techniques of Topic Detection and Tracking (TDT), bring convenience to users who intend to see what is going on through the Internet. However, it is almost impossible to view all the generated topics, because of the large amount. So it will be helpful if all topics are ranked and the top ones, which are both timely and important, can be viewed with high priority. Generally, topic ranking is determined by two primary factors. One is how frequently and recently a topic is reported by the media; the other is how much attention users pay to it. Both media focus and user attention varies as time goes on, so the effect of time on topic ranking has already been included. However, inconsistency exists between both factors. In this paper, an automatic online news topic ranking algorithm is proposed based on inconsistency analysis between media focus and user attention. News stories are organized into topics, which are ranked in terms of both media focus and user attention. Experiments performed on practical Web datasets show that the topic ranking result reflects the influence of time, the media and users. The main contributions of this paper are as follows. 
First, we present the quantitative measure of the inconsistency between media focus and user attention, which provides a basis for topic ranking and an experimental evidence to show that there is a gap between what the media provide and what users view. Second, to the best of our knowledge, it is the first attempt to synthesize the two factors into one algorithm for automatic online topic ranking.", "title": "" }, { "docid": "neg:1840084_2", "text": "Standard model-free deep reinforcement learning (RL) algorithms sample a new initial state for each trial, allowing them to optimize policies that can perform well even in highly stochastic environments. However, problems that exhibit considerable initial state variation typically produce high-variance gradient estimates for model-free RL, making direct policy or value function optimization challenging. In this paper, we develop a novel algorithm that instead partitions the initial state space into “slices”, and optimizes an ensemble of policies, each on a different slice. The ensemble is gradually unified into a single policy that can succeed on the whole state space. This approach, which we term divide-and-conquer RL, is able to solve complex tasks where conventional deep RL methods are ineffective. Our results show that divide-and-conquer RL greatly outperforms conventional policy gradient methods on challenging grasping, manipulation, and locomotion tasks, and exceeds the performance of a variety of prior methods. Videos of policies learned by our algorithm can be viewed at https://sites.google.com/view/dnc-rl/.", "title": "" }, { "docid": "neg:1840084_3", "text": "In this survey we overview graph-based clustering and its applications in computational linguistics. We summarize graph-based clustering as a five-part story: hypothesis, modeling, measure, algorithm and evaluation. We then survey three typical NLP problems in which graph-based clustering approaches have been successfully applied. Finally, we comment on the strengths and weaknesses of graph-based clustering and envision that graph-based clustering is a promising solution for some emerging NLP problems.", "title": "" }, { "docid": "neg:1840084_4", "text": "To date, there is little information on the impact of more aggressive treatment regimen such as BEACOPP (bleomycin, etoposide, doxorubicin, cyclophosphamide, vincristine, procarbazine, and prednisone) on the fertility of male patients with Hodgkin lymphoma (HL). We evaluated the impact of BEACOPP regimen on fertility status in 38 male patients with advanced-stage HL enrolled into trials of the German Hodgkin Study Group (GHSG). Before treatment, 6 (23%) patients had normozoospermia and 20 (77%) patients had dysspermia. After treatment, 34 (89%) patients had azoospermia, 4 (11%) had other dysspermia, and no patients had normozoospermia. There was no difference in azoospermia rate between patients treated with BEACOPP baseline and those given BEACOPP escalated (93% vs 87%, respectively; P > .999). After treatment, most of patients (93%) had abnormal values of follicle-stimulating hormone, whereas the number of patients with abnormal levels of testosterone and luteinizing hormone was less pronounced-57% and 21%, respectively. In univariate analysis, none of the evaluated risk factors (ie, age, clinical stage, elevated erythrocyte sedimentation rate, B symptoms, large mediastinal mass, extranodal disease, and 3 or more lymph nodes) was statistically significant. 
Male patients with HL are at high risk of infertility after treatment with BEACOPP.", "title": "" }, { "docid": "neg:1840084_5", "text": "Over the past decade, machine learning techniques and in particular predictive modeling and pattern recognition in biomedical sciences, from drug delivery systems to medical imaging, have become one of the most important methods of assisting researchers in gaining a deeper understanding of issues in their entirety and solving complex medical problems. Deep learning is a powerful machine learning algorithm in classification that extracts low-to high-level features. In this paper, we employ a convolutional neural network to distinguish an Alzheimers brain from a normal, healthy brain. The importance of classifying this type of medical data lies in its potential to develop a predictive model or system in order to recognize the symptoms of Alzheimers disease when compared with normal subjects and to estimate the stages of the disease. Classification of clinical data for medical conditions such as Alzheimers disease has always been challenging, and the most problematic aspect has always been selecting the strongest discriminative features. Using the Convolutional Neural Network (CNN) and the famous architecture LeNet-5, we successfully classified functional MRI data of Alzheimers subjects from normal controls, where the accuracy of testing data reached 96.85%. This experiment suggests that the shift and scale invariant features extracted by CNN followed by deep learning classification represents the most powerful method of distinguishing clinical data from healthy data in fMRI. This approach also allows for expansion of the methodology to predict more complicated systems.", "title": "" }, { "docid": "neg:1840084_6", "text": "3 Relating modules to external clinical traits 2 3.a Quantifying module–trait associations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 3.b Gene relationship to trait and important modules: Gene Significance and Module Membership . . . . 2 3.c Intramodular analysis: identifying genes with high GS and MM . . . . . . . . . . . . . . . . . . . . . . 3 3.d Summary output of network analysis results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4", "title": "" }, { "docid": "neg:1840084_7", "text": "For multi-copter unmanned aerial vehicles (UAVs) sensing of the actual altitude is an important task. Many functions providing increased flight safety and easy maneuverability rely on altitude data. Commonly used sensors provide the altitude only relative to the starting position, or are limited in range and/or resolution. With the 77 GHz FMCW radar-based altimeter presented in this paper not only the actual altitude over ground but also obstacles such as trees and bushes can be detected. The capability of this solution is verified by measurements over different terrain and vegetation.", "title": "" }, { "docid": "neg:1840084_8", "text": "The ability of a normal human listener to recognize objects in the environment from only the sounds they produce is extraordinarily robust with regard to characteristics of the acoustic environment and of other competing sound sources. In contrast, computer systems designed to recognize sound sources function precariously, breaking down whenever the target sound is degraded by reverberation, noise, or competing sounds. 
Robust listening requires extensive contextual knowledge, but the potential contribution of sound-source recognition to the process of auditory scene analysis has largely been neglected by researchers building computational models of the scene analysis process. This thesis proposes a theory of sound-source recognition, casting recognition as a process of gathering information to enable the listener to make inferences about objects in the environment or to predict their behavior. In order to explore the process, attention is restricted to isolated sounds produced by a small class of sound sources, the non-percussive orchestral musical instruments. Previous research on the perception and production of orchestral instrument sounds is reviewed from a vantage point based on the excitation and resonance structure of the sound-production process, revealing a set of perceptually salient acoustic features. A computer model of the recognition process is developed that is capable of “listening” to a recording of a musical instrument and classifying the instrument as one of 25 possibilities. The model is based on current models of signal processing in the human auditory system. It explicitly extracts salient acoustic features and uses a novel improvisational taxonomic architecture (based on simple statistical pattern-recognition techniques) to classify the sound source. The performance of the model is compared directly to that of skilled human listeners, using", "title": "" }, { "docid": "neg:1840084_9", "text": "To date, the majority of ad hoc routing protocol research has been done using simulation only. One of the most motivating reasons to use simulation is the difficulty of creating a real implementation. In a simulator, the code is contained within a single logical component, which is clearly defined and accessible. On the other hand, creating an implementation requires use of a system with many components, including many that have little or no documentation. The implementation developer must understand not only the routing protocol, but all the system components and their complex interactions. Further, since ad hoc routing protocols are significantly different from traditional routing protocols, a new set of features must be introduced to support the routing protocol. In this paper we describe the event triggers required for AODV operation, the design possibilities and the decisions for our ad hoc on-demand distance vector (AODV) routing protocol implementation, AODV-UCSB. This paper is meant to aid researchers in developing their own on-demand ad hoc routing protocols and assist users in determining the implementation design that best fits their needs.", "title": "" }, { "docid": "neg:1840084_10", "text": "The pathogenesis underlining many neurodegenerative diseases remains incompletely understood. The lack of effective biomarkers and disease preventative medicine demands the development of new techniques to efficiently probe the mechanisms of disease and to detect early biomarkers predictive of disease onset. Raman spectroscopy is an established technique that allows the label-free fingerprinting and imaging of molecules based on their chemical constitution and structure. While analysis of isolated biological molecules has been widespread in the chemical community, applications of Raman spectroscopy to study clinically relevant biological species, disease pathogenesis, and diagnosis have been rapidly increasing since the past decade. 
The growing number of biomedical applications has shown the potential of Raman spectroscopy for detection of novel biomarkers that could enable the rapid and accurate screening of disease susceptibility and onset. Here we provide an overview of Raman spectroscopy and related techniques and their application to neurodegenerative diseases. We further discuss their potential utility in research, biomarker detection, and diagnosis. Challenges to routine use of Raman spectroscopy in the context of neuroscience research are also presented.", "title": "" }, { "docid": "neg:1840084_11", "text": "Recent findings suggest that consolidation of emotional memories is influenced by menstrual phase in women. In contrast to other phases, in the mid-luteal phase when progesterone levels are elevated, cortisol levels are increased and correlated with emotional memory. This study examined the impact of progesterone on cortisol and memory consolidation of threatening stimuli under stressful conditions. Thirty women were recruited for the high progesterone group (in the mid-luteal phase) and 26 for the low progesterone group (in non-luteal phases of the menstrual cycle). Women were shown a series of 20 neutral or threatening images followed immediately by either a stressor (cold pressor task) or control condition. Participants returned two days later for a surprise free recall test of the images and salivary cortisol responses were monitored. High progesterone levels were associated with higher baseline and stress-evoked cortisol levels, and enhanced memory of negative images when stress was received. A positive correlation was found between stress-induced cortisol levels and memory recall of threatening images. These findings suggest that progesterone mediates cortisol responses to stress and subsequently predicts memory recall for emotionally arousing stimuli.", "title": "" }, { "docid": "neg:1840084_12", "text": "The goal of Word Sense Disambiguation (WSD) is to identify the correct meaning of a word in the particular context. Traditional supervised methods only use labeled data (context), while missing rich lexical knowledge such as the gloss which defines the meaning of a word sense. Recent studies have shown that incorporating glosses into neural networks for WSD has made significant improvement. However, the previous models usually build the context representation and gloss representation separately. In this paper, we find that the learning for the context and gloss representation can benefit from each other. Gloss can help to highlight the important words in the context, thus building a better context representation. Context can also help to locate the key words in the gloss of the correct word sense. Therefore, we introduce a co-attention mechanism to generate co-dependent representations for the context and gloss. Furthermore, in order to capture both word-level and sentence-level information, we extend the attention mechanism in a hierarchical fashion. Experimental results show that our model achieves the state-of-the-art results on several standard English all-words WSD test datasets.", "title": "" }, { "docid": "neg:1840084_13", "text": "By combining patch-clamp methods with two-photon microscopy, it is possible to target recordings to specific classes of neurons in vivo. Here we describe methods for imaging and recording from the soma and dendrites of neurons identified using genetically encoded probes such as green fluorescent protein (GFP) or functional indicators such as Oregon Green BAPTA-1. 
Two-photon targeted patching can also be adapted for use with wild-type brains by perfusing the extracellular space with a membrane-impermeable dye to visualize the cells by their negative image and target them for electrical recordings, a technique termed \"shadowpatching.\" We discuss how these approaches can be adapted for single-cell electroporation to manipulate specific cells genetically. These approaches thus permit the recording and manipulation of rare genetically, morphologically, and functionally distinct subsets of neurons in the intact nervous system.", "title": "" }, { "docid": "neg:1840084_14", "text": "The statistics of professional sports, including players and teams, provide numerous opportunities for research. Cricket is one of the most popular team sports, with billions of fans all over the world. In this thesis, we address two problems related to the One Day International (ODI) format of the game. First, we propose a novel method to predict the winner of ODI cricket matches using a team-composition based approach at the start of the match. Second, we present a method to quantitatively assess the performances of individual players in a match of ODI cricket which incorporates the game situations under which the players performed. The player performances are further used to predict the player of the match award. Players are the fundamental unit of a team. Players of one team work against the players of the opponent team in order to win a match. The strengths and abilities of the players of a team play a key role in deciding the outcome of a match. However, a team changes its composition depending on the match conditions, venue, and opponent team, etc. Therefore, we propose a novel dynamic approach which takes into account the varying strengths of the individual players and reflects the changes in player combinations over time. Our work suggests that the relative team strength between the competing teams forms a distinctive feature for predicting the winner. Modeling the team strength boils down to modeling individual players’ batting and bowling performances, forming the basis of our approach. We use career statistics as well as the recent performances of a player to model him. Using the relative strength of one team versus the other, along with two player-independent features, namely, the toss outcome and the venue of the match, we evaluate multiple supervised machine learning algorithms to predict the winner of the match. We show that, for our approach, the k-Nearest Neighbor (kNN) algorithm yields better results as compared to other classifiers. Players have multiple roles in a game of cricket, predominantly as batsmen and bowlers. Over the generations, statistics such as batting and bowling averages, and strike and economy rates have been used to judge the performance of individual players. These measures, however, do not take into consideration the context of the game in which a player performed across the course of a match. Further, these types of statistics are incapable of comparing the performance of players across different roles. Therefore, we present an approach to quantitatively assess the performances of individual players in a single match of ODI cricket. We have developed a new measure, called the Work Index, which represents the amount of work that is yet to be done by a team to achieve its target. Our approach incorporates game situations and the team strengths to measure the player contributions. 
This not only helps us in", "title": "" }, { "docid": "neg:1840084_15", "text": "Introduction Since the introduction of a new UK Ethics Committee Authority (UKECA) in 2004 and the setting up of the Central Office for Research Ethics Committees (COREC), research proposals have come under greater scrutiny than ever before. The era of self-regulation in UK research ethics has ended (Kerrison and Pollock, 2005). The UKECA recognise various committees throughout the UK that can approve proposals for research in NHS facilities (National Patient Safety Agency, 2007), and the scope of research for which approval must be sought is defined by the National Research Ethics Service, which has superceded COREC. Guidance on sample size (Central Office for Research Ethics Committees, 2007: 23) requires that 'the number should be sufficient to achieve worthwhile results, but should not be so high as to involve unnecessary recruitment and burdens for participants'. It also suggests that formal sample estimation size should be based on the primary outcome, and that if there is more than one outcome then the largest sample size should be chosen. Sample size is a function of three factors – the alpha level, beta level and magnitude of the difference (effect size) hypothesised. Referring to the expected size of effect, COREC (2007: 23) guidance states that 'it is important that the difference is not unrealistically high, as this could lead to an underestimate of the required sample size'. In this paper, issues of alpha, beta and effect size will be considered from a practical perspective. A freely-available statistical software package called GPower (Buchner et al, 1997) will be used to illustrate concepts and provide practical assistance to novitiate researchers and members of research ethics committees. There are a wide range of freely available statistical software packages, such as PS (Dupont and Plummer, 1997) and STPLAN (Brown et al, 2000). Each has features worth exploring, but GPower was chosen because of its ease of use and the wide range of study designs for which it caters. Using GPower, sample size and power can be estimated or checked by those with relatively little technical knowledge of statistics. Alpha and beta errors and power Researchers begin with a research hypothesis – a 'hunch' about the way that the world might be. For example, that treatment A is better than treatment B. There are logical reasons why this can never be demonstrated as absolutely true, but evidence that it may or may not be true can be obtained by …", "title": "" }, { "docid": "neg:1840084_16", "text": "Mosquitoes represent the major arthropod vectors of human disease worldwide transmitting malaria, lymphatic filariasis, and arboviruses such as dengue virus and Zika virus. Unfortunately, no treatment (in the form of vaccines or drugs) is available for most of these diseases andvectorcontrolisstillthemainformofprevention. Thelimitationsoftraditionalinsecticide-based strategies, particularly the development of insecticide resistance, have resulted in significant efforts to develop alternative eco-friendly methods. Biocontrol strategies aim to be sustainable and target a range of different mosquito species to reduce the current reliance on insecticide-based mosquito control. In thisreview, weoutline non-insecticide basedstrategiesthat havebeenimplemented orare currently being tested. 
We also highlight the use of mosquito behavioural knowledge that can be exploited for control strategies.", "title": "" }, { "docid": "neg:1840084_17", "text": "The reflectarray antenna is a substitution of reflector antennas by making use of planar phased array techniques [1]. The array elements are specially designed, providing proper phase compensations to the spatial feed through various techniques [2–4]. The bandwidth limitation due to microstrip structures has led to various multi-band designs [5–6]. In these designs, the multi-band performance is realized through multi-layer structures, causing additional volume requirement and fabrication cost. An alternative approach is provided in [7–8], where single-layer structures are adopted. The former [7] implements a dual-band linearly polarized reflectarray whereas the latter [8] establishes a single-layer tri-band concept with circular polarization (CP). In this paper, a prototype based on the conceptual structure in [8] is designed, fabricated, and measured. The prototype is composed of three sub-arrays on a single layer. They have pencil beam patterns at 32 GHz (Ka-band), 8.4 GHz (X-band), and 7.1 GHz (C-band), respectively. Considering the limited area, two phase compensation techniques are adopted by these sub-arrays. The varied element size (VES) technique is applied to the C-band, whereas the element rotation (ER) technique is used in both X-band and Ka-band.", "title": "" }, { "docid": "neg:1840084_18", "text": "This survey presents recent progress on Affective Computing (AC) using mobile devices. AC has been one of the most active research topics for decades. The primary limitation of traditional AC research refers to as impermeable emotions. This criticism is prominent when emotions are investigated outside social contexts. It is problematic because some emotions are directed at other people and arise from interactions with them. The development of smart mobile wearable devices (e.g., Apple Watch, Google Glass, iPhone, Fitbit) enables the wild and natural study for AC in the aspect of computer science. This survey emphasizes the AC study and system using smart wearable devices. Various models, methodologies and systems are discussed in order to examine the state of the art. Finally, we discuss remaining challenges and future works.", "title": "" }, { "docid": "neg:1840084_19", "text": "The balanced business scorecard is a widely-used management framework for optimal measurement of organizational performance. Explains that the scorecard originated in an attempt to address the problem of systems apparently not working. However, the problem proved to be less the information systems than the broader organizational systems, specifically business performance measurement. Discusses the fundamental points to cover in implementation of the scorecard. Presents ten “golden rules” developed as a means of bringing the framework closer to practical application. The Nolan Norton Institute developed the balanced business scorecard in 1990, resulting in the much-referenced Harvard Business Review article, “Measuring performance in the organization of the future”, by Robert Kaplan and David Norton. The balanced scorecard supplemented traditional financial measures with three additional perspectives: customers, internal business processes and learning and growth. Currently, the balanced business scorecard is a powerful and widely-accepted framework for defining performance measures and communicating objectives and vision to the organization. 
Many companies around the world have worked with the balanced business scorecard but experiences vary. Based on practical experiences of clients of Nolan, Norton & Co. and KPMG in putting the balanced business scorecard to work, the following ten golden rules for its implementation have been determined: 1 There are no standard solutions: all businesses differ. 2 Top management support is essential. 3 Strategy is the starting point. 4 Determine a limited and balanced number of objectives and measures. 5 No in-depth analyses up front, but refine and learn by doing. 6 Take a bottom-up and top-down approach. 7 It is not a systems issue, but systems are an issue. 8 Consider delivery systems at the start. 9 Consider the effect of performance indicators on behaviour. 10 Not all measures can be quantified.", "title": "" } ]
1840085
Emotional Human Machine Conversation Generation Based on SeqGAN
[ { "docid": "pos:1840085_0", "text": "With the rise in popularity of artificial intelligence, the technology of verbal communication between man and machine has received an increasing amount of attention, but generating a good conversation remains a difficult task. The key factor in human-machine conversation is whether the machine can give good responses that are appropriate not only at the content level (relevant and grammatical) but also at the emotion level (consistent emotional expression). In our paper, we propose a new model based on long short-term memory, which is used to achieve an encoder-decoder framework, and we address the emotional factor of conversation generation by changing the model’s input using a series of input transformations: a sequence without an emotional category, a sequence with an emotional category for the input sentence, and a sequence with an emotional category for the output responses. We perform a comparison between our work and related work and find that we can obtain slightly better results with respect to emotion consistency. Although in terms of content coherence our result is lower than those of related work, in the present stage of research, our method can generally generate emotional responses in order to control and improve the user’s emotion. Our experiment shows that through the introduction of emotional intelligence, our model can generate responses appropriate not only in content but also in emotion.", "title": "" }, { "docid": "pos:1840085_1", "text": "As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines. Introduction Generating sequential synthetic data that mimics the real one is an important problem in unsupervised learning. Recently, recurrent neural networks (RNNs) with long shortterm memory (LSTM) cells (Hochreiter and Schmidhuber 1997) have shown excellent performance ranging from natural language generation to handwriting generation (Wen et al. 2015; Graves 2013). The most common approach to training an RNN is to maximize the log predictive likelihood of each true token in the training sequence given the previous observed tokens (Salakhutdinov 2009). However, as argued in (Bengio et al. 
2015), the maximum likelihood approaches suffer from so-called exposure bias in the inference stage: the model generates a sequence iteratively and predicts next token conditioned on its previously predicted ones that may be never observed in the training data. Such a discrepancy between training and inference can incur accumulatively along with the sequence and will become prominent as the length of sequence increases. To address this problem, (Bengio et al. 2015) proposed a training strategy called scheduled sampling (SS), where the generative model is partially fed with its own synthetic data as prefix (observed tokens) rather than the true data when deciding the next token in the training stage. Nevertheless, (Huszár 2015) showed that SS is an inconsistent training strategy and fails to address the problem fundamentally. Another possible solution of the training/inference discrepancy problem is to build the loss function on the entire generated sequence instead of each transition. For instance, in the application of machine translation, a task specific sequence score/loss, bilingual evaluation understudy (BLEU) (Papineni et al. 2002), can be adopted to guide the sequence generation. However, in many other practical applications, such as poem generation (Zhang and Lapata 2014) and chatbot (Hingston 2009), a task specific loss may not be directly available to score a generated sequence accurately. Generative adversarial net (GAN) proposed by (Goodfellow and others 2014) is a promising framework for alleviating the above problem. Specifically, in GAN a discriminative net D learns to distinguish whether a given data instance is real or not, and a generative net G learns to confuse D by generating high quality data. This approach has been successful and been mostly applied in computer vision tasks of generating samples of natural images (Denton et al. 2015). Unfortunately, applying GAN to generating sequences has two problems. Firstly, GAN is designed for generating real-valued, continuous data but has difficulties in directly generating sequences of discrete tokens, such as texts (Huszár 2015). The reason is that in GANs, the generator starts with random sampling first and then a deterministic transform, governed by the model parameters. As such, the gradient of the loss from D w.r.t. the outputs by G is used to guide the generative model G (parameters) to slightly change the generated value to make it more realistic. If the generated data is based on discrete tokens, the “slight change” guidance from the discriminative net makes little sense because there is probably no corresponding token for such slight change in the limited dictionary space (Goodfellow 2016). Secondly, GAN can only give the score/loss for an entire sequence when it has been generated; for a partially generated sequence, it is non-trivial to balance how good as it is now and the future score as the entire sequence. In this paper, to address the above two issues, we follow (Bachman and Precup 2015; Bahdanau et al. 2016) and consider the sequence generation procedure as a sequential decision making process. The generative model is treated as an agent of reinforcement learning (RL); the state is the generated tokens so far and the action is the next token to be generated. Unlike the work in (Bahdanau et al. 
2016) that requires a task-specific sequence score, such as BLEU in machine translation, to give the reward, we employ a discriminator to evaluate the sequence and feedback the evaluation to guide the learning of the generative model. To solve the problem that the gradient cannot pass back to the generative model when the output is discrete, we regard the generative model as a stochastic parametrized policy. In our policy gradient, we employ Monte Carlo (MC) search to approximate the state-action value. We directly train the policy (generative model) via policy gradient (Sutton et al. 1999), which naturally avoids the differentiation difficulty for discrete data in a conventional GAN. Extensive experiments based on synthetic and real data are conducted to investigate the efficacy and properties of the proposed SeqGAN. In our synthetic data environment, SeqGAN significantly outperforms the maximum likelihood methods, scheduled sampling and PG-BLEU. In three realworld tasks, i.e. poem generation, speech language generation and music generation, SeqGAN significantly outperforms the compared baselines in various metrics including human expert judgement. Related Work Deep generative models have recently drawn significant attention, and the ability of learning over large (unlabeled) data endows them with more potential and vitality (Salakhutdinov 2009; Bengio et al. 2013). (Hinton, Osindero, and Teh 2006) first proposed to use the contrastive divergence algorithm to efficiently training deep belief nets (DBN). (Bengio et al. 2013) proposed denoising autoencoder (DAE) that learns the data distribution in a supervised learning fashion. Both DBN and DAE learn a low dimensional representation (encoding) for each data instance and generate it from a decoding network. Recently, variational autoencoder (VAE) that combines deep learning with statistical inference intended to represent a data instance in a latent hidden space (Kingma and Welling 2014), while still utilizing (deep) neural networks for non-linear mapping. The inference is done via variational methods. All these generative models are trained by maximizing (the lower bound of) training data likelihood, which, as mentioned by (Goodfellow and others 2014), suffers from the difficulty of approximating intractable probabilistic computations. (Goodfellow and others 2014) proposed an alternative training methodology to generative models, i.e. GANs, where the training procedure is a minimax game between a generative model and a discriminative model. This framework bypasses the difficulty of maximum likelihood learning and has gained striking successes in natural image generation (Denton et al. 2015). However, little progress has been made in applying GANs to sequence discrete data generation problems, e.g. natural language generation (Huszár 2015). This is due to the generator network in GAN is designed to be able to adjust the output continuously, which does not work on discrete data generation (Goodfellow 2016). On the other hand, a lot of efforts have been made to generate structured sequences. Recurrent neural networks can be trained to produce sequences of tokens in many applications such as machine translation (Sutskever, Vinyals, and Le 2014; Bahdanau, Cho, and Bengio 2014). The most popular way of training RNNs is to maximize the likelihood of each token in the training data whereas (Bengio et al. 
2015) pointed out that the discrepancy between training and generating makes the maximum likelihood estimation suboptimal and proposed scheduled sampling strategy (SS). Later (Huszár 2015) theorized that the objective function underneath SS is improper and explained the reason why GANs tend to generate natural-looking samples in theory. Consequently, the GANs have great potential but are not practically feasible to discrete probabilistic models currently. As pointed out by (Bachman and Precup 2015), the sequence data generation can be formulated as a sequential decision making process, which can be potentially be solved by reinforcement learning techniques. Modeling the sequence generator as a policy of picking the next token, policy gradient methods (Sutton et al. 1999) can be adopted to optimize the generator once there is an (implicit) reward function to guide the policy. For most practical sequence generation tasks, e.g. machine translation (Sutskever, Vinyals, and Le 2014), the reward signal is meaningful only for the entire sequence, for instance in the game of Go (Silver et al. 2016), the reward signal is only set at the end of the game. In", "title": "" }, { "docid": "pos:1840085_2", "text": "In this paper, we carry out two experiments on the TIMIT speech corpus with bidirectional and unidirectional Long Short Term Memory (LSTM) networks. In the first experiment (framewise phoneme classification) we find that bidirectional LSTM outperforms both unidirectional LSTM and conventional Recurrent Neural Networks (RNNs). In the second (phoneme recognition) we find that a hybrid BLSTM-HMM system improves on an equivalent traditional HMM system, as well as unidirectional LSTM-HMM.", "title": "" } ]
[ { "docid": "neg:1840085_0", "text": "The convergence properties of a nearest neighbor rule that uses an editing procedure to reduce the number of preclassified samples and to improve the performance of the rule are developed. Editing of the preclassified samples using the three-nearest neighbor rule followed by classification using the single-nearest neighbor rule with the remaining preclassified samples appears to produce a decision procedure whose risk approaches the Bayes' risk quite closely in many problems with only a few preclassified samples. The asymptotic risk of the nearest neighbor rules and the nearest neighbor rules using edited preclassified samples is calculated for several problems.", "title": "" }, { "docid": "neg:1840085_1", "text": "ion for Falsification Thomas Ball , Orna Kupferman , and Greta Yorsh 3 1 Microsoft Research, Redmond, WA, USA. Email: tball@microsoft.com, URL: research.microsoft.com/ ∼tball 2 Hebrew University, School of Eng. and Comp. Sci., Jerusalem 91904, Israel. Email: orna@cs.huji.ac.il, URL: www.cs.huji.ac.il/ ∼orna 3 Tel-Aviv University, School of Comp. Sci., Tel-Aviv 69978, Israel. Email:gretay@post.tau.ac.il, URL: www.math.tau.ac.il/ ∼gretay Microsoft Research Technical Report MSR-TR-2005-50 Abstract. Abstraction is traditionally used in the process of verification. There, an abstraction of a concrete system is sound if properties of the abstract system also hold in the conAbstraction is traditionally used in the process of verification. There, an abstraction of a concrete system is sound if properties of the abstract system also hold in the concrete system. Specifically, if an abstract state satisfies a property ψ thenall the concrete states that correspond to a satisfyψ too. Since the ideal goal of proving a system correct involves many obstacles, the primary use of formal methods nowadays is fal ification. There, as intesting, the goal is to detect errors, rather than to prove correctness. In the falsification setting, we can say that an abstraction is sound if errors of the abstract system exist also in the concrete system. Specifically, if an abstract state a violates a propertyψ, thenthere existsa concrete state that corresponds to a and violatesψ too. An abstraction that is sound for falsification need not be sound for verification. This suggests that existing frameworks for abstraction for verification may be too restrictive when used for falsification, and that a new framework is needed in order to take advantage of the weaker definition of soundness in the falsification setting. We present such a framework, show that it is indeed stronger (than other abstraction frameworks designed for verification), demonstrate that it can be made even stronger by parameterizing its transitions by predicates, and describe how it can be used for falsification of branching-time and linear-time temporal properties, as well as for generating testing goals for a concrete system by reasoning about its abstraction.", "title": "" }, { "docid": "neg:1840085_2", "text": "Honeypots are more and more used to collect data on malicious activities on the Internet and to better understand the strategies and techniques used by attackers to compromise target systems. Analysis and modeling methodologies are needed to support the characterization of attack processes based on the data collected from the honeypots. 
This paper presents some empirical analyses based on the data collected from the Leurré.com honeypot platforms deployed on the Internet and presents some preliminary modeling studies aimed at fulfilling such objectives.", "title": "" }, { "docid": "neg:1840085_3", "text": "We propose a framework for solving combinatorial optimization problems of which the output can be represented as a sequence of input elements. As an alternative to the Pointer Network, we parameterize a policy by a model based entirely on (graph) attention layers, and train it efficiently using REINFORCE with a simple and robust baseline based on a deterministic (greedy) rollout of the best policy found during training. We significantly improve over state-of-the-art results for learning algorithms for the 2D Euclidean TSP, reducing the optimality gap for a single tour construction by more than 75% (to 0.33%) and 50% (to 2.28%) for instances with 20 and 50 nodes respectively.", "title": "" }, { "docid": "neg:1840085_4", "text": "To ensure high quality software, it is crucial that non‐functional requirements (NFRs) are well specified and thoroughly tested in parallel with functional requirements (FRs). Nevertheless, in requirement specification the focus is mainly on FRs, even though NFRs have a critical role in the success of software projects. This study presents a systematic literature review of the NFR specification in order to identify the current state of the art and needs for future research. The systematic review summarizes the 51 relevant papers found and discusses them within seven major sub categories with “combination of other approaches” being the one with most prior results.", "title": "" }, { "docid": "neg:1840085_5", "text": "Existing text generation methods tend to produce repeated and “boring” expressions. To tackle this problem, we propose a new text generation model, called Diversity-Promoting Generative Adversarial Network (DP-GAN). The proposed model assigns low reward for repeatedly generated text and high reward for “novel” and fluent text, encouraging the generator to produce diverse and informative text. Moreover, we propose a novel languagemodel based discriminator, which can better distinguish novel text from repeated text without the saturation problem compared with existing classifier-based discriminators. The experimental results on review generation and dialogue generation tasks demonstrate that our model can generate substantially more diverse and informative text than existing baselines.1", "title": "" }, { "docid": "neg:1840085_6", "text": "BACKGROUND/AIMS\nEnd-stage liver disease accounts for one in forty deaths worldwide. Chronic infections with hepatitis B virus (HBV) and hepatitis C virus (HCV) are well-recognized risk factors for cirrhosis and liver cancer, but estimates of their contributions to worldwide disease burden have been lacking.\n\n\nMETHODS\nThe prevalence of serologic markers of HBV and HCV infections among patients diagnosed with cirrhosis or hepatocellular carcinoma (HCC) was obtained from representative samples of published reports. Attributable fractions of cirrhosis and HCC due to these infections were estimated for 11 WHO-based regions.\n\n\nRESULTS\nGlobally, 57% of cirrhosis was attributable to either HBV (30%) or HCV (27%) and 78% of HCC was attributable to HBV (53%) or HCV (25%). Regionally, these infections usually accounted for >50% of HCC and cirrhosis. 
Applied to 2002 worldwide mortality estimates, these fractions represent 929,000 deaths due to chronic HBV and HCV infections, including 446,000 cirrhosis deaths (HBV: n=235,000; HCV: n=211,000) and 483,000 liver cancer deaths (HBV: n=328,000; HCV: n=155,000).\n\n\nCONCLUSIONS\nHBV and HCV infections account for the majority of cirrhosis and primary liver cancer throughout most of the world, highlighting the need for programs to prevent new infections and provide medical management and treatment for those already infected.", "title": "" }, { "docid": "neg:1840085_7", "text": "Active learning (AL) is an increasingly popular strategy for mitigating the amount of labeled data required to train classifiers, thereby reducing annotator effort. We describe a real-world, deployed application of AL to the problem of biomedical citation screening for systematic reviews at the Tufts Medical Center's Evidence-based Practice Center. We propose a novel active learning strategy that exploits a priori domain knowledge provided by the expert (specifically, labeled features)and extend this model via a Linear Programming algorithm for situations where the expert can provide ranked labeled features. Our methods outperform existing AL strategies on three real-world systematic review datasets. We argue that evaluation must be specific to the scenario under consideration. To this end, we propose a new evaluation framework for finite-pool scenarios, wherein the primary aim is to label a fixed set of examples rather than to simply induce a good predictive model. We use a method from medical decision theory for eliciting the relative costs of false positives and false negatives from the domain expert, constructing a utility measure of classification performance that integrates the expert preferences. Our findings suggest that the expert can, and should, provide more information than instance labels alone. In addition to achieving strong empirical results on the citation screening problem, this work outlines many important steps for moving away from simulated active learning and toward deploying AL for real-world applications.", "title": "" }, { "docid": "neg:1840085_8", "text": "Nootropics or smart drugs are well-known compounds or supplements that enhance the cognitive performance. They work by increasing the mental function such as memory, creativity, motivation, and attention. Recent researches were focused on establishing a new potential nootropic derived from synthetic and natural products. The influence of nootropic in the brain has been studied widely. The nootropic affects the brain performances through number of mechanisms or pathways, for example, dopaminergic pathway. Previous researches have reported the influence of nootropics on treating memory disorders, such as Alzheimer's, Parkinson's, and Huntington's diseases. Those disorders are observed to impair the same pathways of the nootropics. Thus, recent established nootropics are designed sensitively and effectively towards the pathways. Natural nootropics such as Ginkgo biloba have been widely studied to support the beneficial effects of the compounds. 
Present review is concentrated on the main pathways, namely, dopaminergic and cholinergic system, and the involvement of amyloid precursor protein and secondary messenger in improving the cognitive performance.", "title": "" }, { "docid": "neg:1840085_9", "text": "Recurrent neural networks have recently shown significant potential in different language applications, ranging from natural language processing to language modelling. This paper introduces a research effort to use such networks to develop and evaluate natural language acquisition on a humanoid robot. Here, the problem is twofold. First, the focus will be put on using the gesture-word combination stage observed in infants to transition from single to multi-word utterances. Secondly, research will be carried out in the domain of connecting action learning with language learning. In the former, the long-short term memory architecture will be implemented, whilst in the latter multiple time-scale recurrent neural networks will be used. This will allow for comparison between the two architectures, whilst highlighting the strengths and shortcomings of both with respect to the language learning problem. Here, the main research efforts, challenges and expected outcomes are described.", "title": "" }, { "docid": "neg:1840085_10", "text": "We measure the value of promotional activities and referrals by content creators to an online platform of user-generated content. To do so, we develop a modeling approach that explains individual-level choices of visiting the platform, creating, and purchasing content, as a function of consumer characteristics and marketing activities, allowing for the possibility of interdependence of decisions within and across users. Empirically, we apply our model to Hewlett-Packard’s (HP) print-on-demand service of user-created magazines, named MagCloud. We use two distinct data sets to show the applicability of our approach: an aggregate-level data set from Google Analytics, which is a widely available source of data to managers, and an individual-level data set from HP. Our results compare content creator activities, which include referrals and word-ofmouth efforts, with firm-based actions, such as price promotions and public relations. We show that price promotions have strong effects, but limited to the purchase decisions, while content creator referrals and public relations have broader effects which impact all consumer decisions at the platform. We provide recommendations to the level of the firm’s investments when “free” promotional activities by content creators exist. These “free” marketing campaigns are likely to have a substantial presence in most online services of user-generated content.", "title": "" }, { "docid": "neg:1840085_11", "text": "The annihilating filter-based low-rank Hanel matrix approach (ALOHA) is one of the state-of-the-art compressed sensing approaches that directly interpolates the missing k-space data using low-rank Hankel matrix completion. Inspired by the recent mathematical discovery that links deep neural networks to Hankel matrix decomposition using data-driven framelet basis, here we propose a fully data-driven deep learning algorithm for k-space interpolation. Our network can be also easily applied to non-Cartesian k-space trajectories by simply adding an additional re-gridding layer. 
Extensive numerical experiments show that the proposed deep learning method significantly outperforms the existing image-domain deep learning approaches.", "title": "" }, { "docid": "neg:1840085_12", "text": "We address action temporal localization in untrimmed long videos. This is important because videos in real applications are usually unconstrained and contain multiple action instances plus video content of background scenes or other activities. To address this challenging issue, we exploit the effectiveness of deep networks in action temporal localization via multi-stage segment-based 3D ConvNets: (1) a proposal stage identifies candidate segments in a long video that may contain actions; (2) a classification stage learns one-vs-all action classification model to serve as initialization for the localization stage; and (3) a localization stage fine-tunes on the model learnt in the classification stage to localize each action instance. We propose a novel loss function for the localization stage to explicitly consider temporal overlap and therefore achieve high temporal localization accuracy. On two large-scale benchmarks, our approach achieves significantly superior performances compared with other state-of-the-art systems: mAP increases from 1.7% to 7.4% on MEXaction2 and increased from 15.0% to 19.0% on THUMOS 2014, when the overlap threshold for evaluation is set to 0.5.", "title": "" }, { "docid": "neg:1840085_13", "text": "Software engineers are frequently faced with tasks that can be expressed as optimization problems. To support them with automation, search-based model-driven engineering combines the abstraction power of models with the versatility of meta-heuristic search algorithms. While current approaches in this area use genetic algorithms with fixed mutation operators to explore the solution space, the efficiency of these operators may heavily depend on the problem at hand. In this work, we propose FitnessStudio, a technique for generating efficient problem-tailored mutation operators automatically based on a two-tier framework. The lower tier is a regular meta-heuristic search whose mutation operator is trained by an upper-tier search using a higher-order model transformation. We implemented this framework using the Henshin transformation language and evaluated it in a benchmark case, where the generated mutation operators enabled an improvement to the state of the art in terms of result quality, without sacrificing performance.", "title": "" }, { "docid": "neg:1840085_14", "text": "Responding to a 2015 MISQ call for research on service innovation, this study develops a conceptual model of service innovation in higher education academic libraries. Digital technologies have drastically altered the delivery of information services in the past decade, raising questions about critical resources, their interaction with digital technologies, and the value of new services and their measurement. Based on new product development (NPD) and new service development (NSD) processes and the service-dominant logic (SDL) perspective, this research-in-progress presents a conceptual model that theorizes interactions between critical resources and digital technologies in an iterative process for delivery of service innovation in academic libraries. 
The study also suggests future research paths to confirm, expand, and validate the new service innovation model.", "title": "" }, { "docid": "neg:1840085_15", "text": "This paper studies a class of adaptive gradient based momentum algorithms that update the search directions and learning rates simultaneously using past gradients. This class, which we refer to as the “Adam-type”, includes the popular algorithms such as Adam (Kingma & Ba, 2014) , AMSGrad (Reddi et al., 2018) , AdaGrad (Duchi et al., 2011). Despite their popularity in training deep neural networks (DNNs), the convergence of these algorithms for solving non-convex problems remains an open question. In this paper, we develop an analysis framework and a set of mild sufficient conditions that guarantee the convergence of the Adam-type methods, with a convergence rate of order O(log T/ √ T ) for non-convex stochastic optimization. Our convergence analysis applies to a new algorithm called AdaFom (AdaGrad with First Order Momentum). We show that the conditions are essential, by identifying concrete examples in which violating the conditions makes an algorithm diverge. Besides providing one of the first comprehensive analysis for Adam-type methods in the non-convex setting, our results can also help the practitioners to easily monitor the progress of algorithms and determine their convergence behavior.", "title": "" }, { "docid": "neg:1840085_16", "text": "People called night owls habitually have late bedtimes and late times of arising, sometimes suffering a heritable circadian disturbance called delayed sleep phase syndrome (DSPS). Those with DSPS, those with more severe progressively-late non-24-hour sleep-wake cycles, and those with bipolar disorder may share genetic tendencies for slowed or delayed circadian cycles. We searched for polymorphisms associated with DSPS in a case-control study of DSPS research participants and a separate study of Sleep Center patients undergoing polysomnography. In 45 participants, we resequenced portions of 15 circadian genes to identify unknown polymorphisms that might be associated with DSPS, non-24-hour rhythms, or bipolar comorbidities. We then genotyped single nucleotide polymorphisms (SNPs) in both larger samples, using Illumina Golden Gate assays. Associations of SNPs with the DSPS phenotype and with the morningness-eveningness parametric phenotype were computed for both samples, then combined for meta-analyses. Delayed sleep and \"eveningness\" were inversely associated with loci in circadian genes NFIL3 (rs2482705) and RORC (rs3828057). A group of haplotypes overlapping BHLHE40 was associated with non-24-hour sleep-wake cycles, and less robustly, with delayed sleep and bipolar disorder (e.g., rs34883305, rs34870629, rs74439275, and rs3750275 were associated with n=37, p=4.58E-09, Bonferroni p=2.95E-06). Bright light and melatonin can palliate circadian disorders, and genetics may clarify the underlying circadian photoperiodic mechanisms. After further replication and identification of the causal polymorphisms, these findings may point to future treatments for DSPS, non-24-hour rhythms, and possibly bipolar disorder or depression.", "title": "" }, { "docid": "neg:1840085_17", "text": "We describe an extensible approach to generating questions for the purpose of reading comprehension assessment and practice. 
Our framework for question generation composes general-purpose rules to transform declarative sentences into questions, is modular in that existing NLP tools can be leveraged, and includes a statistical component for scoring questions based on features of the input, output, and transformations performed. In an evaluation in which humans rated questions according to several criteria, we found that our implementation achieves 43.3% precisionat-10 and generates approximately 6.8 acceptable questions per 250 words of source text.", "title": "" }, { "docid": "neg:1840085_18", "text": "Enterprise Resource Planning (ERP) systems hold great promise for integrating business processes and have proven their worth in a variety of organizations. Yet the gains that they have enabled in terms of increased productivity and cost savings are often achieved in the face of daunting usability problems. While one frequently hears anecdotes about the difficulties involved in using ERP systems, there is little documentation of the types of problems typically faced by users. The purpose of this study is to begin addressing this gap by categorizing and describing the usability issues encountered by one division of a Fortune 500 company in the first years of its large-scale ERP implementation. This study also demonstrates the promise of using collaboration theory to evaluate usability characteristics of existing systems and to design new systems. Given the impressive results already achieved by some corporations with these systems, imagine how much more would be possible if understanding how to use them weren’t such an", "title": "" }, { "docid": "neg:1840085_19", "text": "Toxic epidermal necrolysis (TEN) is one of the most threatening adverse reactions to various drugs. No case of concomitant occurrence TEN and severe granulocytopenia following the treatment with cefuroxime has been reported to date. Herein we present a case of TEN that developed eighteen days of the initiation of cefuroxime axetil therapy for urinary tract infection in a 73-year-old woman with chronic renal failure and no previous history of allergic diathesis. The condition was associated with severe granulocytopenia and followed by gastrointestinal hemorrhage, severe sepsis and multiple organ failure syndrome development. Despite intensive medical treatment the patient died. The present report underlines the potential of cefuroxime to simultaneously induce life threatening adverse effects such as TEN and severe granulocytopenia. Further on, because the patient was also taking furosemide for chronic renal failure, the possible unfavorable interactions between the two drugs could be hypothesized. Therefore, awareness of the possible drug interaction is necessary, especially when given in conditions of their altered pharmacokinetics as in case of chronic renal failure.", "title": "" } ]
1840086
How to Build a CC System
[ { "docid": "pos:1840086_0", "text": "The computational creativity community (rightfully) takes a dim view of supposedly creative systems that operate by mere generation. However, what exactly this means has never been adequately defined, and therefore the idea of requiring systems to exceed this standard is problematic. Here, we revisit the question of mere generation and attempt to qualitatively identify what constitutes exceeding this threshold. This exercise leads to the conclusion that the question is likely no longer relevant for the field and that a failure to recognize this is likely detrimental to its future health.", "title": "" } ]
[ { "docid": "neg:1840086_0", "text": "Logic and Philosophy of Science Research Group, Hokkaido University, Japan Jan 7, 2015 Abstract In this paper we provide an analysis and overview of some notable definitions, works and thoughts concerning discrete physics (digital philosophy) that mainly suggest a finite and discrete characteristic for the physical world, as well as, of the cellular automaton, which could serve as the basis of a (or the only) perfect mathematical deterministic model for the physical reality.", "title": "" }, { "docid": "neg:1840086_1", "text": "Asserts have long been a strongly recommended (if non-functional) adjunct to programs. They certainly don't add any user-evident feature value; and it can take quite some skill and effort to devise and add useful asserts. However, they are believed to add considerable value to the developer. Certainly, they can help with automated verification; but even in the absence of that, claimed advantages include improved understandability, maintainability, easier fault localization and diagnosis, all eventually leading to better software quality. We focus on this latter claim, and use a large dataset of asserts in C and C++ programs to explore the connection between asserts and defect occurrence. Our data suggests a connection: functions with asserts do have significantly fewer defects. This indicates that asserts do play an important role in software quality; we therefore explored further the factors that play a role in assertion placement: specifically, process factors (such as developer experience and ownership) and product factors, particularly interprocedural factors, exploring how the placement of assertions in functions are influenced by local and global network properties of the callgraph. Finally, we also conduct a differential analysis of assertion use across different application domains.", "title": "" }, { "docid": "neg:1840086_2", "text": "Reliable continuous core temperature measurement is of major importance for monitoring patients. The zero heat flux method (ZHF) can potentially fulfil the requirements of non-invasiveness, reliability and short delay time that current measurement methods lack. The purpose of this study was to determine the performance of a new ZHF device on the forehead regarding these issues. Seven healthy subjects performed a protocol of 10 min rest, 30 min submaximal exercise (average temperature increase about 1.5 °C) and 10 min passive recovery in ambient conditions of 35 °C and 50% relative humidity. ZHF temperature (T(zhf)) was compared to oesophageal (T(es)) and rectal (T(re)) temperature. ΔT(zhf)-T(es) had an average bias ± standard deviation of 0.17 ± 0.19 °C in rest, -0.05 ± 0.18 °C during exercise and -0.01 ± 0.20 °C during recovery, the latter two being not significant. The 95% limits of agreement ranged from -0.40 to 0.40 °C and T(zhf) had hardly any delay compared to T(es). T(re) showed a substantial delay and deviation from T(es) when core temperature changed rapidly. Results indicate that the studied ZHF sensor tracks T(es) very well in hot and stable ambient conditions and may be a promising alternative for reliable non-invasive continuous core temperature measurement in hospital.", "title": "" }, { "docid": "neg:1840086_3", "text": "Insight into the growth (or shrinkage) of “knowledge communities” of authors that build on each other's work can be gained by studying the evolution over time of clusters of documents. 
We cluster documents based on the documents they cite in common using the Streemer clustering method, which finds cohesive foreground clusters (the knowledge communities) embedded in a diffuse background. We build predictive models with features based on the citation structure, the vocabulary of the papers, and the affiliations and prestige of the authors and use these models to study the drivers of community growth and the predictors of how widely a paper will be cited. We find that scientific knowledge communities tend to grow more rapidly if their publications build on diverse information and use narrow vocabulary and that papers that lie on the periphery of a community have the highest impact, while those not in any community have the lowest impact.", "title": "" }, { "docid": "neg:1840086_4", "text": "Anopheles mosquitoes, sp is the main vector of malaria disease that is widespread in many parts of the world including in Papua Province. There are four speciesof Anopheles mosquitoes, sp, in Papua namely: An.farauti, An.koliensis, An. subpictus, and An.punctulatus. Larviciding synthetic cause resistance. This study aims to analyze the potential of papaya leaf and seeds extracts (Carica papaya) as larvicides against the mosquitoes Anopheles sp. The experiment was conducted at the Laboratory of Health Research and Development in Jayapura Papua province. The method used is an experimental post only control group design. Sampling was done randomly on the larvae of Anopheles sp of breeding places in Kampung Kehiran Jayapura Sentani District, 1,500 larvae. Analysis of data using statistical analysis to test the log probit mortality regression dosage, Kruskall Wallis and Mann Whitney. The results showed that papaya leaf extract effective in killing larvae of Anopheles sp, value Lethal Concentration (LC50) were 422.311 ppm, 1399.577 ppm LC90, Lethal Time (LT50) 13.579 hours, LT90 23.478 hours. Papaya seed extract is effective in killing mosquito larvae Anopheles sp, with 21.983 ppm LC50, LC90 ppm 137.862, 13.269 hours LT50, LT90 26.885 hours. Papaya seed extract is more effective in killing larvae of Anopheles sp. The mixture of papaya leaf extract and seeds are effective in killing mosquito larvae Anopheles sp, indicated by the percentage of larval mortality, the observation hours to 12, the highest larval mortality in comparison 0,05:0,1 extract, 52%, ratio 0.1 : 0.1 by 48 %, on a 24 hour observation, larval mortality in both groups reached 100 %.", "title": "" }, { "docid": "neg:1840086_5", "text": "WiFi offloading is envisioned as a promising solution to the mobile data explosion problem in cellular networks. WiFi offloading for moving vehicles, however, poses unique characteristics and challenges, due to high mobility, fluctuating mobile channels, etc. In this paper, we focus on the problem of WiFi offloading in vehicular communication environments. Specifically, we discuss the challenges and identify the research issues related to drive-thru Internet access and effectiveness of vehicular WiFi offloading. Moreover, we review the state-of-the-art offloading solutions, in which advanced vehicular communications can be employed. We also shed some lights on the path for future research on this topic.", "title": "" }, { "docid": "neg:1840086_6", "text": "Staging and response criteria were initially developed for Hodgkin lymphoma (HL) over 60 years ago, but not until 1999 were response criteria published for non-HL (NHL). 
Revisions to these criteria for both NHL and HL were published in 2007 by an international working group, incorporating PET for response assessment, and were widely adopted. After years of experience with these criteria, a workshop including representatives of most major international lymphoma cooperative groups and cancer centers was held at the 11(th) International Conference on Malignant Lymphoma (ICML) in June, 2011 to determine what changes were needed. An Imaging Task Force was created to update the relevance of existing imaging for staging, reassess the role of interim PET-CT, standardize PET-CT reporting, and to evaluate the potential prognostic value of quantitative analyses using PET and CT. A clinical task force was charged with assessing the potential of PET-CT to modify initial staging. A subsequent workshop was help at ICML-12, June 2013. Conclusions included: PET-CT should now be used to stage FDG-avid lymphomas; for others, CT will define stage. Whereas Ann Arbor classification will still be used for disease localization, patients should be treated as limited disease [I (E), II (E)], or extensive disease [III-IV (E)], directed by prognostic and risk factors. Since symptom designation A and B are frequently neither recorded nor accurate, and are not prognostic in most widely used prognostic indices for HL or the various types of NHL, these designations need only be applied to the limited clinical situations where they impact treatment decisions (e.g., stage II HL). PET-CT can replace the bone marrow biopsy (BMBx) for HL. A positive PET of bone or bone marrow is adequate to designate advanced stage in DLBCL. However, BMBx can be considered in DLBCL with no PET evidence of BM involvement, if identification of discordant histology is relevant for patient management, or if the results would alter treatment. BMBx remains recommended for staging of other histologies, primarily if it will impact therapy. PET-CT will be used to assess response in FDG-avid histologies using the 5-point scale, and included in new PET-based response criteria, but CT should be used in non-avid histologies. The definition of PD can be based on a single node, but must consider the potential for flare reactions seen early in treatment with newer targeted agents which can mimic disease progression. Routine surveillance scans are strongly discouraged, and the number of scans should be minimized in practice and in clinical trials, when not a direct study question. Hopefully, these recommendations will improve the conduct of clinical trials and patient management.", "title": "" }, { "docid": "neg:1840086_7", "text": "The multigram model assumes that language can be described as the output of a memoryless source that emits variable-length sequences of words. The estimation of the model parameters can be formulated as a Maximum Likelihood estimation problem from incomplete data. We show that estimates of the model parameters can be computed through an iterative Expectation-Maximization algorithm and we describe a forward-backward procedure for its implementation. We report the results of a systematical evaluation of multi-grams for language modeling on the ATIS database. The objective performance measure is the test set perplexity. Our results show that multigrams outperform conventional n-grams for this task.", "title": "" }, { "docid": "neg:1840086_8", "text": "With the rise of e-commerce, people are accustomed to writing their reviews after receiving the goods. 
These comments are so important that a bad review can have a direct impact on others buying. Besides, the abundant information within user reviews is very useful for extracting user preferences and item properties. In this paper, we investigate the approach to effectively utilize review information for recommender systems. The proposed model is named LSTM-Topic matrix factorization (LTMF) which integrates both LSTM and Topic Modeling for review understanding. In the experiments on popular review dataset Amazon , our LTMF model outperforms previous proposed HFT model and ConvMF model in rating prediction. Furthermore, LTMF shows the better ability on making topic clustering than traditional topic model based method, which implies integrating the information from deep learning and topic modeling is a meaningful approach to make a better understanding of reviews.", "title": "" }, { "docid": "neg:1840086_9", "text": "The deterioration of cancellous bone structure due to aging and disease is characterized by a conversion from plate elements to rod elements. Consequently the terms \"rod-like\" and \"plate-like\" are frequently used for a subjective classification of cancellous bone. In this work a new morphometric parameter called Structure Model Index (SMI) is introduced, which makes it possible to quantify the characteristic form of a three-dimensionally described structure in terms of the amount of plates and rod composing the structure. The SMI is calculated by means of three-dimensional image analysis based on a differential analysis of the triangulated bone surface. For an ideal plate and rod structure the SMI value is 0 and 3, respectively, independent of the physical dimensions. For a structure with both plates and rods of equal thickness the value lies between 0 and 3, depending on the volume ratio of rods and plates. The SMI parameter is evaluated by examining bone biopsies from different skeletal sites. The bone samples were measured three-dimensionally with a micro-CT system. Samples with the same volume density but varying trabecular architecture can uniquely be characterized with the SMI. Furthermore the SMI values were found to correspond well with the perceived structure type.", "title": "" }, { "docid": "neg:1840086_10", "text": "Extraction-Transformation-Loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization and insertion into a data warehouse. In this paper, we delve into the logical design of ETL scenarios and provide a generic and customizable framework in order to support the DW designer in his task. First, we present a metamodel particularly customized for the definition of ETL activities. We follow a workflow-like approach, where the output of a certain activity can either be stored persistently or passed to a subsequent activity. Also, we employ a declarative database programming language, LDL, to define the semantics of each activity. The metamodel is generic enough to capture any possible ETL activity. Nevertheless, in the pursuit of higher reusability and flexibility, we specialize the set of our generic metamodel constructs with a palette of frequently-used ETL activities, which we call templates. Moreover, in order to achieve a uniform extensibility mechanism for this library of built-ins, we have to deal with specific language issues. Therefore, we also discuss the mechanics of template instantiation to concrete activities. 
The design concepts that we introduce have been implemented in a tool, ARKTOS II, which is also presented.", "title": "" }, { "docid": "neg:1840086_11", "text": "This thesis addresses the problem of scheduling multiple, concurrent, adaptively parallel jobs on a multiprogrammed shared-memory multiprocessor. Adaptively parallel jobs are jobs for which the number of processors that can be used without waste varies during execution. We focus on the specific case of parallel jobs that are scheduled using a randomized work-stealing algorithm, as is used in the Cilk multithreaded language. We begin by developing a theoretical model for two-level scheduling systems, or those in which the operating system allocates processors to jobs, and the jobs schedule their threads on the processors. To analyze the performance of a job scheduling algorithm, we model the operating system as an adversary. We show that a greedy scheduler achieves an execution time that is within a factor of 2 of optimal under these conditions. Guided by our model, we present a randomized work-stealing algorithm for adaptively parallel jobs, algorithm WSAP, which takes a unique approach to estimating the processor desire of a job. We show that attempts to directly measure a job’s instantaneous parallelism are inherently misleading. We also describe a dynamic processor-allocation algorithm, algorithm DP, that allocates processors to jobs in a fair and efficient way. Using these two algorithms, we present the design and implementation of Cilk-AP, a two-level scheduling system for adaptively parallel workstealing jobs. Cilk-AP is implemented by extending the runtime system of Cilk. We tested the Cilk-AP system on a shared-memory symmetric multiprocessor (SMP) with 16 processors. Our experiments show that, relative to the original Cilk system, Cilk-AP incurs negligible overhead and provides up to 37% improvement in throughput and 30% improvement in response time in typical multiprogramming scenarios. This thesis represents joint work with Charles Leiserson and Kunal Agrawal of the Supercomputing Technologies Group at MIT’s Computer Science and Artificial Intelligence Laboratory. Thesis Supervisor: Charles E. Leiserson Title: Professor", "title": "" }, { "docid": "neg:1840086_12", "text": "Several end-to-end deep learning approaches have been recently presented which extract either audio or visual features from the input images or audio signals and perform speech recognition. However, research on end-to-end audiovisual models is very limited. In this work, we present an end-to-end audiovisual model based on residual networks and Bidirectional Gated Recurrent Units (BGRUs). To the best of our knowledge, this is the first audiovisual fusion model which simultaneously learns to extract features directly from the image pixels and audio waveforms and performs within-context word recognition on a large publicly available dataset (LRW). The model consists of two streams, one for each modality, which extract features directly from mouth regions and raw waveforms. The temporal dynamics in each stream/modality are modeled by a 2-layer BGRU and the fusion of multiple streams/modalities takes place via another 2-layer BGRU. A slight improvement in the classification rate over an end-to-end audio-only and MFCC-based model is reported in clean audio conditions and low levels of noise. 
In presence of high levels of noise, the end-to-end audiovisual model significantly outperforms both audio-only models.", "title": "" }, { "docid": "neg:1840086_13", "text": "JULY 2005, GSA TODAY ABSTRACT The subduction factory processes raw materials such as oceanic sediments and oceanic crust and manufactures magmas and continental crust as products. Aqueous fluids, which are extracted from oceanic raw materials via dehydration reactions during subduction, dissolve particular elements and overprint such elements onto the mantle wedge to generate chemically distinct arc basalt magmas. The production of calc-alkalic andesites typifies magmatism in subduction zones. One of the principal mechanisms of modern-day, calc-alkalic andesite production is thought to be mixing of two endmember magmas, a mantle-derived basaltic magma and an arc crust-derived felsic magma. This process may also have contributed greatly to continental crust formation, as the bulk continental crust possesses compositions similar to calc-alkalic andesites. If so, then the mafic melting residue after extraction of felsic melts should be removed and delaminated from the initial basaltic arc crust in order to form “andesitic” crust compositions. The waste materials from the factory, such as chemically modified oceanic materials and delaminated mafic lower crust materials, are transported down to the deep mantle and recycled as mantle plumes. The subduction factory has played a central role in the evolution of the solid Earth through creating continental crust and deep mantle geochemical reservoirs.", "title": "" }, { "docid": "neg:1840086_14", "text": "This study examines the effects of body shape (women’s waist-to-hip ratio and men’s waist-to-shoulder ratio) on desirability of a potential romantic partner. In judging desirability, we expected male participants to place more emphasis on female body shape, whereas females would focus more on personality characteristics. Further, we expected that relationship type would moderate the extent to which physical characteristics were valued over personality. Specifically, physical characteristics were expected to be most valued in short-term sexual encounters when compared with long-term relationships. Two hundred and thirty-nine participants (134 females, 105 males; 86% Caucasian) rated the desirability of an opposite-sex target for a date, a one-time sexual encounter, and a serious relationship. All key hypotheses were supported by the data.", "title": "" }, { "docid": "neg:1840086_15", "text": "This paper describes the way of Market Basket Analysis implementation to Six Sigma methodology. Data Mining methods provide a lot of opportunities in the market sector. Basket Market Analysis is one of them. Six Sigma methodology uses several statistical methods. With implementation of Market Basket Analysis (as a part of Data Mining) to Six Sigma (to one of its phase), we can improve the results and change the Sigma performance level of the process. In our research we used GRI (General Rule Induction) algorithm to produce association rules between products in the market basket. These associations show a variety between the products. To show the dependence between the products we used a Web plot. The last algorithm in analysis was C5.0. 
This algorithm was used to build rule-based profiles.", "title": "" }, { "docid": "neg:1840086_16", "text": "Softmax is the most commonly used output function for multiclass problems and is widely used in areas such as vision, natural language processing, and recommendation. A softmax model has linear costs in the number of classes which makes it too expensive for many real-world problems. A common approach to speed up training involves sampling only some of the classes at each training step. It is known that this method is biased and that the bias increases the more the sampling distribution deviates from the output distribution. Nevertheless, almost all recent work uses simple sampling distributions that require a large sample size to mitigate the bias. In this work, we propose a new class of kernel based sampling methods and develop an efficient sampling algorithm. Kernel based sampling adapts to the model as it is trained, thus resulting in low bias. It can also be easily applied to many models because it relies only on the model’s last hidden layer. We empirically study the trade-off of bias, sampling distribution and sample size and show that kernel based sampling results in low bias with few samples.", "title": "" }, { "docid": "neg:1840086_17", "text": "In this paper, we propose a novel tensor graph convolutional neural network (TGCNN) to conduct convolution on factorizable graphs, for which here two types of problems are focused, one is sequential dynamic graphs and the other is cross-attribute graphs. Especially, we propose a graph preserving layer to memorize salient nodes of those factorized subgraphs, i.e. cross graph convolution and graph pooling. For cross graph convolution, a parameterized Kronecker sum operation is proposed to generate a conjunctive adjacency matrix characterizing the relationship between every pair of nodes across two subgraphs. Taking this operation, then general graph convolution may be efficiently performed followed by the composition of small matrices, which thus reduces high memory and computational burden. Encapsuling sequence graphs into a recursive learning, the dynamics of graphs can be efficiently encoded as well as the spatial layout of graphs. To validate the proposed TGCNN, experiments are conducted on skeleton action datasets as well as matrix completion dataset. The experiment results demonstrate that our method can achieve more competitive performance with the state-of-the-art methods.", "title": "" }, { "docid": "neg:1840086_18", "text": "BACKGROUND AND OBJECTIVES\nTo assess the influence of risk factors on the rates and kinetics of peripheral vein phlebitis (PVP) development and its theoretical influence in absolute PVP reduction after catheter replacement.\n\n\nMETHODS\nAll peripheral short intravenous catheters inserted during one month were included (1201 catheters and 967 patients). PVP risk factors were assessed by a Cox proportional hazard model. Cumulative probability, conditional failure of PVP and theoretical estimation of the benefit from replacement at different intervals were performed.\n\n\nRESULTS\nFemale gender, catheter insertion at the emergency or medical-surgical wards, forearm site, amoxicillin-clavulamate or aminoglycosides were independent predictors of PVP with hazard ratios (95 confidence interval) of 1.46 (1.09-2.15), 1.94 (1.01-3.73), 2.51 (1.29-4.88), 1.93 (1.20-3.01), 2.15 (1.45-3.20) and 2.10 (1.01-4.63), respectively. 
Maximum phlebitis incidence was reached sooner in patients with ≥2 risk factors (days 3-4) than in those with <2 (days 4-5). Conditional failure increased from 0.08 phlebitis/one catheter-day for devices with ≤1 risk factors to 0.26 for those with ≥3. The greatest benefit of routine catheter exchange was obtained by replacement every 60h. However, this benefit differed according to the number of risk factors: 24.8% reduction with ≥3, 13.1% with 2, and 9.2% with ≤1.\n\n\nCONCLUSIONS\nPVP dynamics is highly influenced by identifiable risk factors which may be used to refine the strategy of catheter management. Routine replacement every 72h seems to be strictly necessary only in high-risk catheters.", "title": "" }, { "docid": "neg:1840086_19", "text": "OBJECTIVE\nTo issue a recommendation on the types and amounts of physical activity needed to improve and maintain health in older adults.\n\n\nPARTICIPANTS\nA panel of scientists with expertise in public health, behavioral science, epidemiology, exercise science, medicine, and gerontology.\n\n\nEVIDENCE\nThe expert panel reviewed existing consensus statements and relevant evidence from primary research articles and reviews of the literature.\n\n\nPROCESS\nAfter drafting a recommendation for the older adult population and reviewing drafts of the Updated Recommendation from the American College of Sports Medicine (ACSM) and the American Heart Association (AHA) for Adults, the panel issued a final recommendation on physical activity for older adults.\n\n\nSUMMARY\nThe recommendation for older adults is similar to the updated ACSM/AHA recommendation for adults, but has several important differences including: the recommended intensity of aerobic activity takes into account the older adult's aerobic fitness; activities that maintain or increase flexibility are recommended; and balance exercises are recommended for older adults at risk of falls. In addition, older adults should have an activity plan for achieving recommended physical activity that integrates preventive and therapeutic recommendations. The promotion of physical activity in older adults should emphasize moderate-intensity aerobic activity, muscle-strengthening activity, reducing sedentary behavior, and risk management.", "title": "" } ]
1840087
Refining faster-RCNN for accurate object detection
[ { "docid": "pos:1840087_0", "text": "The visual cues from multiple support regions of different sizes and resolutions are complementary in classifying a candidate box in object detection. How to effectively integrate local and contextual visual cues from these regions has become a fundamental problem in object detection. Most existing works simply concatenated features or scores obtained from support regions. In this paper, we proposal a novel gated bi-directional CNN (GBD-Net) to pass messages between features from different support regions during both feature learning and feature extraction. Such message passing can be implemented through convolution in two directions and can be conducted in various layers. Therefore, local and contextual visual patterns can validate the existence of each other by learning their nonlinear relationships and their close iterations are modeled in a much more complex way. It is also shown that message passing is not always helpful depending on individual samples. Gated functions are further introduced to control message transmission and their on-and-off is controlled by extra visual evidence from the input sample. GBD-Net is implemented under the Fast RCNN detection framework. Its effectiveness is shown through experiments on three object detection datasets, ImageNet, Pascal VOC2007 and Microsoft COCO.", "title": "" } ]
[ { "docid": "neg:1840087_0", "text": "In this paper, the unknown parameters of the photovoltaic (PV) module are determined using Genetic Algorithm (GA) method. This algorithm based on minimizing the absolute difference between the maximum powers obtained from module datasheet and the maximum power obtained from the mathematical model of the PV module, at different operating conditions. This method does not need to initial values, so these parameters of the PV module are easily obtained with high accuracy. To validate the proposed method, the results obtained from it are compared with the experimental results obtained from the PV module datasheet for different operating conditions. The results obtained from the proposed model are found to be very close compared to the results given in the datasheet of the PV module.", "title": "" }, { "docid": "neg:1840087_1", "text": "Computational stereo is one of the classical problems in computer vision. Numerous algorithms and solutions have been reported in recent years focusing on developing methods for computing similarity, aggregating it to obtain spatial support and finally optimizing an energy function to find the final disparity. In this paper, we focus on the feature extraction component of stereo matching architecture and we show standard CNNs operation can be used to improve the quality of the features used to find point correspondences. Furthermore, we propose a simple space aggregation that hugely simplifies the correlation learning problem. Our results on benchmark data are compelling and show promising potential even without refining the solution.", "title": "" }, { "docid": "neg:1840087_2", "text": "On the Mathematical Foundations of Theoretical Statistics. Author(s): R. A. Fisher. Source: Philosophical Transactions of the Royal Society of London. Series A Solutions to Exercises. 325. Bibliography. 347. Index Discrete mathematics is an essential part of the foundations of (theoretical) computer science, statistics . 2) Statistical Methods by S.P.Gupta. 3) Mathematical Statistics by Saxena & Kapoor. 4) Statistics by Sancheti & Kapoor. 5) Introduction to Mathematical Statistics Fundamentals of Mathematical statistics by Guptha, S.C &Kapoor, V.K (Sulthan chand. &sons). 2. Introduction to Mathematical statistics by Hogg.R.V and and .", "title": "" }, { "docid": "neg:1840087_3", "text": "Model-free reinforcement learning (RL) has become a promising technique for designing a robust dynamic power management (DPM) framework that can cope with variations and uncertainties that emanate from hardware and application characteristics. Moreover, the potentially significant benefit of performing application-level scheduling as part of the system-level power management should be harnessed. This paper presents an architecture for hierarchical DPM in an embedded system composed of a processor chip and connected I/O devices (which are called system components.) The goal is to facilitate saving in the system component power consumption, which tends to dominate the total power consumption. The proposed (online) adaptive DPM technique consists of two layers: an RL-based component-level local power manager (LPM) and a system-level global power manager (GPM). The LPM performs component power and latency optimization. It employs temporal difference learning on semi-Markov decision process (SMDP) for model-free RL, and it is specifically optimized for an environment in which multiple (heterogeneous) types of applications can run in the embedded system. 
The GPM interacts with the CPU scheduler to perform effective application-level scheduling, thereby, enabling the LPM to do even more component power optimizations. In this hierarchical DPM framework, power and latency tradeoffs of each type of application can be precisely controlled based on a user-defined parameter. Experiments show that the amount of average power saving is up to 31.1% compared to existing approaches.", "title": "" }, { "docid": "neg:1840087_4", "text": "Rapidly progressive renal failure (RPRF) is an initial clinical diagnosis in patients who present with progressive renal impairment of short duration. The underlying etiology may be a primary renal disease or a systemic disorder. Important differential diagnoses include vasculitis (systemic or renal-limited), systemic lupus erythematosus, multiple myeloma, thrombotic microangiopathy and acute interstitial nephritis. Good history taking, clinical examination and relevant investigations including serology and ultimately kidney biopsy are helpful in clinching the diagnosis. Early definitive diagnosis of RPRF is essential to reverse the otherwise relentless progression to end-stage kidney disease.", "title": "" }, { "docid": "neg:1840087_5", "text": "Real-time eye and iris tracking is important for handsoff gaze-based password entry, instrument control by paraplegic patients, Internet user studies, as well as homeland security applications. In this project, a smart camera, LabVIEW and vision software tools are utilized to generate eye detection and tracking algorithms. The algorithms are uploaded to the smart camera for on-board image processing. Eye detection refers to finding eye features in a single frame. Eye tracking is achieved by detecting the same eye features across multiple image frames and correlating them to a particular eye. The algorithms are tested for eye detection and tracking under different conditions including different angles of the face, head motion speed, and eye occlusions to determine their usability for the proposed applications. This paper presents the implemented algorithms and performance results of these algorithms on the smart camera.", "title": "" }, { "docid": "neg:1840087_6", "text": "While there has been a success in 2D human pose estimation with convolutional neural networks (CNNs), 3D human pose estimation has not been thoroughly studied. In this paper, we tackle the 3D human pose estimation task with end-to-end learning using CNNs. Relative 3D positions between one joint and the other joints are learned via CNNs. The proposed method improves the performance of CNN with two novel ideas. First, we added 2D pose information to estimate a 3D pose from an image by concatenating 2D pose estimation result with the features from an image. Second, we have found that more accurate 3D poses are obtained by combining information on relative positions with respect to multiple joints, instead of just one root joint. Experimental results show that the proposed method achieves comparable performance to the state-of-the-art methods on Human 3.6m dataset.", "title": "" }, { "docid": "neg:1840087_7", "text": "Researchers have designed a variety of systems that promote wellness. However, little work has been done to examine how casual mobile games can help adults learn how to live healthfully. To explore this design space, we created OrderUP!, a game in which players learn how to make healthier meal choices. Through our field study, we found that playing OrderUP! 
helped participants engage in four processes of change identified by a well-established health behavior theory, the Transtheoretical Model: they improved their understanding of how to eat healthfully and engaged in nutrition-related analytical thinking, reevaluated the healthiness of their real life habits, formed helping relationships by discussing nutrition with others and started replacing unhealthy meals with more nutritious foods. Our research shows the promise of using casual mobile games to encourage adults to live healthier lifestyles.", "title": "" }, { "docid": "neg:1840087_8", "text": "Abnormality detection in biomedical images is a one-class classification problem, where methods learn a statistical model to characterize the inlier class using training data solely from the inlier class. Typical methods (i) need well-curated training data and (ii) have formulations that are unable to utilize expert feedback through (a small amount of) labeled outliers. In contrast, we propose a novel deep neural network framework that (i) is robust to corruption and outliers in the training data, which are inevitable in real-world deployment, and (ii) can leverage expert feedback through high-quality labeled data. We introduce an autoencoder formulation that (i) gives robustness through a non-convex loss and a heavy-tailed distribution model on the residuals and (ii) enables semi-supervised learning with labeled outliers. Results on three large medical datasets show that our method outperforms the state of the art in abnormality-detection accuracy.", "title": "" }, { "docid": "neg:1840087_9", "text": "While the RGB2GRAY conversion with fixed parameters is a classical and widely used tool for image decolorization, recent studies showed that adapting weighting parameters in a two-order multivariance polynomial model has great potential to improve the conversion ability. In this paper, by viewing the two-order model as the sum of three subspaces, it is observed that the first subspace in the two-order model has the dominating importance and the second and the third subspace can be seen as refinement. Therefore, we present a semiparametric strategy to take advantage of both the RGB2GRAY and the two-order models. In the proposed method, the RGB2GRAY result on the first subspace is treated as an immediate grayed image, and then the parameters in the second and the third subspace are optimized. Experimental results show that the proposed approach is comparable to other state-of-the-art algorithms in both quantitative evaluation and visual quality, especially for images with abundant colors and patterns. This algorithm also exhibits good resistance to noise. In addition, instead of the color contrast preserving ratio using the first-order gradient for decolorization quality metric, the color contrast correlation preserving ratio utilizing the second-order gradient is calculated as a new perceptual quality metric.", "title": "" }, { "docid": "neg:1840087_10", "text": "The task in the multi-agent path finding problem (MAPF) is to find paths for multiple agents, each with a different start and goal position, such that agents do not collide. It is possible to solve this problem optimally with algorithms that are based on the A* algorithm. Recently, we proposed an alternative algorithm called Conflict-Based Search (CBS) (Sharon et al. 2012), which was shown to outperform the A*-based algorithms in some cases. CBS is a two-level algorithm. 
At the high level, a search is performed on a tree based on conflicts between agents. At the low level, a search is performed only for a single agent at a time. While in some cases CBS is very efficient, in other cases it is worse than A*-based algorithms. This paper focuses on the latter case by generalizing CBS to Meta-Agent CBS (MA-CBS). The main idea is to couple groups of agents into meta-agents if the number of internal conflicts between them exceeds a given bound. MACBS acts as a framework that can run on top of any complete MAPF solver. We analyze our new approach and provide experimental results demonstrating that it outperforms basic CBS and other A*-based optimal solvers in many cases. Introduction and Background In the multi-agent path finding (MAPF) problem, we are given a graph, G(V,E), and a set of k agents labeled a1 . . . ak. Each agent ai has a start position si ∈ V and goal position gi ∈ V . At each time step an agent can either move to a neighboring location or can wait in its current location. The task is to return the least-cost set of actions for all agents that will move each of the agents to its goal without conflicting with other agents (i.e., without being in the same location at the same time or crossing the same edge simultaneously in opposite directions). MAPF has practical applications in robotics, video games, vehicle routing, and other domains (Silver 2005; Dresner & Stone 2008). In its general form, MAPF is NPcomplete, because it is a generalization of the sliding tile puzzle, which is NP-complete (Ratner & Warrnuth 1986). There are many variants to the MAPF problem. In this paper we consider the following common setting. The cumulative cost function to minimize is the sum over all agents of the number of time steps required to reach the goal location (Standley 2010; Sharon et al. 2011a). Both move Copyright c © 2012, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. and wait actions cost one. A centralized computing setting with a single CPU that controls all the agents is assumed. Note that a centralized computing setting is logically equivalent to a decentralized setting where each agent has its own computing power but agents are fully cooperative with full knowledge sharing and free communication. There are two main approaches for solving the MAPF in the centralized computing setting: the coupled and the decoupled approaches. In the decoupled approach, paths are planned for each agent separately. Algorithms from the decoupled approach run relatively fast, but optimality and even completeness are not always guaranteed (Silver 2005; Wang & Botea 2008; Jansen & Sturtevant 2008). New complete (but not optimal) decoupled algorithms were recently introduced for trees (Khorshid, Holte, & Sturtevant 2011) and for general graphs (Luna & Bekris 2011). Our aim is to solve the MAPF problem optimally and therefore the focus of this paper is on the coupled approach. In this approach MAPF is formalized as a global, singleagent search problem. One can activate an A*-based algorithm that searches a state space that includes all the different ways to permute the k agents into |V | locations. Consequently, the state space that is searched by the A*-based algorithms grow exponentially with the number of agents. Hence, finding the optimal solutions with A*-based algorithms requires significant computational expense. Previous optimal solvers dealt with this large search space in several ways. 
Ryan (2008; 2010) abstracted the problem into pre-defined structures such as cliques, halls and rings. He then modeled and solved the problem as a CSP problem. Note that the algorithm Ryan proposed does not necessarily returns the optimal solutions. Standley (2010; 2011) partitioned the given problem into smaller independent problems, if possible. Sharon et. al. (2011a; 2011b) suggested the increasing cost search tree (ICTS) a two-level framework where the high-level phase searches a tree with exact path costs for each of the agents and the low-level phase aims to verify whether there is a solution of this cost. In this paper we focus on the new Conflict Based Search algorithm (CBS) (Sharon et al. 2012) which optimally solves MAPF. CBS is a two-level algorithm where the highlevel search is performed on a constraint tree (CT) whose nodes include constraints on time and locations of a single agent. At each node in the constraint tree a low-level search is performed to find individual paths for all agents under the constraints given by the high-level node. Sharon et al. (2011a; 2011b; 2012) showed that the behavior of optimal MAPF algorithms can be very sensitive to characteristics of the given problem instance such as the topology and size of the graph, the number of agents, the branching factor etc. There is no universally dominant algorithm; different algorithms work well in different circumstances. In particular, experimental results have shown that CBS can significantly outperform all existing optimal MAPF algorithms on some domains (Sharon et al. 2012). However, Sharon et al. (2012) also identified cases where the CBS algorithm performs poorly. In such cases, CBS may even perform exponentially worse than A*. In this paper we aim at mitigating the worst-case performance of CBS by generalizing CBS into a new algorithm called Meta-agent CBS (MA-CBS). In MA-CBS the number of conflicts allowed at the high-level phase between any pair of agents is bounded by a predefined parameter B. When the number of conflicts exceed B, the conflicting agents are merged into a meta-agent and then treated as a joint composite agent by the low-level solver. By bounding the number of conflicts between any pair of agents, we prevent the exponential worst-case of basic CBS. This results in an new MAPF solver that significantly outperforms existing algorithms in a variety of domains. We present both theoretical and empirical support for this claim. In the low-level search, MA-CBS can use any complete MAPF solver. Thus, MA-CBS can be viewed as a solving framework and future MAPF algorithms could also be used by MA-CBS to improve its performance. Furthermore, we show that the original CBS algorithm corresponds to the extreme cases where B = ∞ (never merge agents), and the Independence Dependence (ID) framework (Standley 2010) is the other extreme case where B = 0 (always merge agents when conflicts occur). Thus, MA-CBS allows a continuum between CBS and ID, by setting different values of B between these two extremes. The Conflict Based Search Algorithm (CBS) The MA-CBS algorithm presented in this paper is based on the CBS algorithm (Sharon et al. 2012). We thus first describe the CBS algorithm in detail. Definitions for CBS We use the term path only in the context of a single agent and use the term solution to denote a set of k paths for the given set of k agents. 
A constraint for a given agent ai is a tuple (ai, v, t) where agent ai is prohibited from occupying vertex v at time step t.1 During the course of the algorithm, agents are associated with constraints. A consistent path for agent ai is a path that satisfies all its constraints. Likewise, a consistent solution is a solution that is made up from paths, such that the path for agent ai is consistent with the constraints of ai. A conflict is a tuple (ai, aj , v, t) where agent ai and agent aj occupy vertex v at time point t. A solution (of k paths) is valid if all its A conflict (as well as a constraint) may apply also to an edge when two agents traverse the same edge in opposite directions. paths have no conflicts. A consistent solution can be invalid if, despite the fact that the paths are consistent with their individual agent constraints, these paths still have conflicts. The key idea of CBS is to grow a set of constraints for each of the agents and find paths that are consistent with these constraints. If these paths have conflicts, and are thus invalid, the conflicts are resolved by adding new constraints. CBS works in two levels. At the high-level phase conflicts are found and constraints are added. At the low-level phase, the paths of the agents are updated to be consistent with the new constraints. We now describe each part of this process. High-level: Search the Constraint Tree (CT) At the high-level, CBS searches a constraint tree (CT). A CT is a binary tree. Each node N in the CT contains the following fields of data: 1. A set of constraints (N.constraints). The root of the CT contains an empty set of constraints. The child of a node in the CT inherits the constraints of the parent and adds one new constraint for one agent. 2. A solution (N.solution). A set of k paths, one path for each agent. The path for agent ai must be consistent with the constraints of ai. Such paths are found by the lowlevel search algorithm. 3. The total cost (N.cost). The cost of the current solution (summation over all the single-agent path costs). We denote this cost the f -value of the node. Node N in the CT is a goal node when N.solution is valid, i.e., the set of paths for all agents have no conflicts. The high-level phase performs a best-first search on the CT where nodes are ordered by their costs. Processing a node in the CT Given the list of constraints for a node N of the CT, the low-level search is invoked. This search returns one shortest path for each agent, ai, that is consistent with all the constraints associated with ai in node N . Once a consistent path has be", "title": "" }, { "docid": "neg:1840087_11", "text": "The networks of intelligent building are usually consist of a great number of smart devices. Since many smart devices only support on-site configuration and upgrade, and communication between devices could be observed and even altered by attackers, efficiency and security are two key concerns in maintaining and managing the devices used in intelligent building networks. In this paper, the authors apply the technology of software defined networking to satisfy the requirement for efficiency in intelligent building networks. More specific, a protocol stack in smart devices that support OpenFlow is designed. In addition, the authors designed the lightweight security mechanism with two foundation protocols and a full protocol that uses the foundation protocols as example. 
Performance and session key establishment for the security mechanism are also discussed.", "title": "" }, { "docid": "neg:1840087_12", "text": "The Self-Organizing Map (SOM) forms a nonlinear projection from a high-dimensional data manifold onto a low-dimensional grid. A representative model of some subset of data is associated with each grid point. The SOM algorithm computes an optimal collection of models that approximates the data in the sense of some error criterion and also takes into account the similarity relations of the models. The models then become ordered on the grid according to their similarity. When the SOM is used for the exploration of statistical data, the data vectors can be approximated by models of the same dimensionality. When mapping documents, one can represent them statistically by their word frequency histograms or some reduced representations of the histograms that can be regarded as data vectors. We have made SOMs of collections of over one million documents. Each document is mapped onto some grid point, with a link from this point to the document database. The documents are ordered on the grid according to their contents and neighboring documents can be browsed readily. Keywords or key texts can be used to search for the most relevant documents rst. New eeective coding and computing schemes of the mapping are described.", "title": "" }, { "docid": "neg:1840087_13", "text": "High network communication cost for synchronizing gradients and parameters is the well-known bottleneck of distributed training. In this work, we propose TernGrad that uses ternary gradients to accelerate distributed deep learning in data parallelism. Our approach requires only three numerical levels {−1, 0, 1}, which can aggressively reduce the communication time. We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients. Guided by the bound, we propose layer-wise ternarizing and gradient clipping to improve its convergence. Our experiments show that applying TernGrad on AlexNet doesn’t incur any accuracy loss and can even improve accuracy. The accuracy loss of GoogLeNet induced by TernGrad is less than 2% on average. Finally, a performance model is proposed to study the scalability of TernGrad. Experiments show significant speed gains for various deep neural networks. Our source code is available 1.", "title": "" }, { "docid": "neg:1840087_14", "text": "Microglia, the resident macrophages of the CNS, are exquisitely sensitive to brain injury and disease, altering their morphology and phenotype to adopt a so-called activated state in response to pathophysiological brain insults. Morphologically activated microglia, like other tissue macrophages, exist as many different phenotypes, depending on the nature of the tissue injury. Microglial responsiveness to injury suggests that these cells have the potential to act as diagnostic markers of disease onset or progression, and could contribute to the outcome of neurodegenerative diseases. The persistence of activated microglia long after acute injury and in chronic disease suggests that these cells have an innate immune memory of tissue injury and degeneration. Microglial phenotype is also modified by systemic infection or inflammation. Evidence from some preclinical models shows that systemic manipulations can ameliorate disease progression, although data from other models indicates that systemic inflammation exacerbates disease progression. 
Systemic inflammation is associated with a decline in function in patients with chronic neurodegenerative disease, both acutely and in the long term. The fact that diseases with a chronic systemic inflammatory component are risk factors for Alzheimer disease implies that crosstalk occurs between systemic inflammation and microglia in the CNS.", "title": "" }, { "docid": "neg:1840087_15", "text": "Relational XQuery systems try to re-use mature relational data management infrastructures to create fast and scalable XML database technology. This paper describes the main features, key contributions, and lessons learned while implementing such a system. Its architecture consists of (i) a range-based encoding of XML documents into relational tables, (ii) a compilation technique that translates XQuery into a basic relational algebra, (iii) a restricted (order) property-aware peephole relational query optimization strategy, and (iv) a mapping from XML update statements into relational updates. Thus, this system implements all essential XML database functionalities (rather than a single feature) such that we can learn from the full consequences of our architectural decisions. While implementing this system, we had to extend the state-of-the-art with a number of new technical contributions, such as loop-lifted staircase join and efficient relational query evaluation strategies for XQuery theta-joins with existential semantics. These contributions as well as the architectural lessons learned are also deemed valuable for other relational back-end engines. The performance and scalability of the resulting system is evaluated on the XMark benchmark up to data sizes of 11GB. The performance section also provides an extensive benchmark comparison of all major XMark results published previously, which confirm that the goal of purely relational XQuery processing, namely speed and scalability, was met.", "title": "" }, { "docid": "neg:1840087_16", "text": "Mobile robots are increasingly being deployed in the real world in response to a heightened demand for applications such as transportation, delivery and inspection. The motion planning systems for these robots are expected to have consistent performance across the wide range of scenarios that they encounter. While state-of-the-art planners, with provable worst-case guarantees, can be employed to solve these planning problems, their finite time performance varies across scenarios. This thesis proposes that the planning module for a robot must adapt its search strategy to the distribution of planning problems encountered to achieve real-time performance. We address three principal challenges of this problem. Firstly, we show that even when the planning problem distribution is fixed, designing a nonadaptive planner can be challenging as the performance of planning strategies fluctuates with small changes in the environment. We characterize the existence of complementary strategies and propose to hedge our bets by executing a diverse ensemble of planners. Secondly, when the distribution is varying, we require a meta-planner that can automatically select such an ensemble from a library of black-box planners. We show that greedily training a list of predictors to focus on failure cases leads to an effective meta-planner. For situations where we have no training data, we show that we can learn an ensemble on-the-fly by adopting algorithms from online paging theory. 
Thirdly, in the interest of efficiency, we require a white-box planner that directly adapts its search strategy during a planning cycle. We propose an efficient procedure for training adaptive search heuristics in a data-driven imitation learning framework. We also draw a novel connection to Bayesian active learning, and propose algorithms to adaptively evaluate edges of a graph. Our approach leads to the synthesis of a robust real-time planning module that allows a UAV to navigate seamlessly across environments and speed-regimes. We evaluate our framework on a spectrum of planning problems and show closed-loop results on 3 UAV platforms a full-scale autonomous helicopter, a large scale hexarotor and a small quadrotor. While the thesis was motivated by mobile robots, we have shown that the individual algorithms are broadly applicable to other problem domains such as informative path planning and manipulation planning. We also establish novel connections between the disparate fields of motion planning and active learning, imitation learning and online paging which opens doors to several new research problems.", "title": "" }, { "docid": "neg:1840087_17", "text": "In this chapter, we give an overview of what ontologies are and how they can be used. We discuss the impact of the expressiveness, the number of domain elements, the community size, the conceptual dynamics, and other variables on the feasibility of an ontology project. Then, we break down the general promise of ontologies of facilitating the exchange and usage of knowledge to six distinct technical advancements that ontologies actually provide, and discuss how this should influence design choices in ontology projects. Finally, we summarize the main challenges of ontology management in real-world applications, and explain which expectations from practitioners can be met as", "title": "" }, { "docid": "neg:1840087_18", "text": "Expectation Maximization (EM) is among the most popular algorithms for estimating parameters of statistical models. However, EM, which is an iterative algorithm based on the maximum likelihood principle, is generally only guaranteed to find stationary points of the likelihood objective, and these points may be far from any maximizer. This article addresses this disconnect between the statistical principles behind EM and its algorithmic properties. Specifically, it provides a global analysis of EM for specific models in which the observations comprise an i.i.d. sample from a mixture of two Gaussians. This is achieved by (i) studying the sequence of parameters from idealized execution of EM in the infinite sample limit, and fully characterizing the limit points of the sequence in terms of the initial parameters; and then (ii) based on this convergence analysis, establishing statistical consistency (or lack thereof) for the actual sequence of parameters produced by EM.", "title": "" }, { "docid": "neg:1840087_19", "text": "The Global Positioning System (GPS) double-difference carrier-phase data are biased by an integer number of cycles. In this contribution a new method is introduced that enables very fast integer least-squares estimation of the ambiguities. The method makes use of an ambiguity transformation that allows one to reformulate the original ambiguity estimation problem as a new problem that is much easier to solve. The transformation aims at decorrelating the least-squares ambiguities and is based on an integer approximation of the conditional least-squares transformation. 
And through a flattening of the typical discontinuity in the GPS-spectrum of conditional variances of the ambiguities, the transformation returns new ambiguities that show a dramatic improvement in precision in comparison with the original double-difference ambiguities.", "title": "" } ]
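The GPS passage directly above (neg:1840087_19) describes integer least-squares ambiguity estimation with a decorrelating ambiguity transformation. As a brief, generic illustration of the objective it refers to (standard textbook notation, not taken from that paper), the problem and its reparameterization can be written as:

```latex
% Integer least-squares (ILS) ambiguity estimation, generic notation:
% \hat{a} is the real-valued ("float") ambiguity estimate, Q_{\hat a} its covariance.
\check{a} \;=\; \arg\min_{a \in \mathbb{Z}^{n}} \;(\hat{a}-a)^{\mathsf T} Q_{\hat a}^{-1} (\hat{a}-a)

% An admissible (integer, unimodular) transformation Z decorrelates the ambiguities,
% so the equivalent search below runs over a much smaller, better-conditioned space:
\hat{z} = Z^{\mathsf T}\hat{a},\qquad Q_{\hat z} = Z^{\mathsf T} Q_{\hat a} Z,\qquad
\check{z} \;=\; \arg\min_{z \in \mathbb{Z}^{n}} \;(\hat{z}-z)^{\mathsf T} Q_{\hat z}^{-1} (\hat{z}-z).
```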
1840088
Radio Frequency Time-of-Flight Distance Measurement for Low-Cost Wireless Sensor Localization
[ { "docid": "pos:1840088_0", "text": "In modern wireless communications products it is required to incorporate more and more different functions to comply with current market trends. A very attractive function with steadily growing market penetration is local positioning. To add this feature to low-cost mass-market devices without additional power consumption, it is desirable to use commercial communication chips and standards for localization of the wireless units. In this paper we present a concept to measure the distance between two IEEE 802.15.4 (ZigBee) compliant devices. The presented prototype hardware consists of a low- cost 2.45 GHz ZigBee chipset. For localization we use standard communication packets as transmit signals. Thus simultaneous data transmission and transponder localization is feasible. To achieve high positioning accuracy even in multipath environments, a coherent synthesis of measurements in multiple channels and a special signal phase evaluation concept is applied. With this technique the full available ISM bandwidth of 80 MHz is utilized. In first measurements with two different frequency references-a low-cost oscillator and a temperatur-compensated crystal oscillator-a positioning bias error of below 16 cm and 9 cm was obtained. The standard deviation was less than 3 cm and 1 cm, respectively. It is demonstrated that compared to signal correlation in time, the phase processing technique yields an accuracy improvement of roughly an order of magnitude.", "title": "" } ]
[ { "docid": "neg:1840088_0", "text": "This paper considers how design fictions in the form of 'imaginary abstracts' can be extended into complete 'fictional papers'. Imaginary abstracts are a type of design fiction that are usually included within the content of 'real' research papers, they comprise brief accounts of fictional problem frames, prototypes, user studies and findings. Design fiction abstracts have been proposed as a means to move beyond solutionism to explore the potential societal value and consequences of new HCI concepts. In this paper we contrast the properties of imaginary abstracts, with the properties of a published paper that presents fictional research, Game of Drones. Extending the notion of imaginary abstracts so that rather than including fictional abstracts within a 'non-fiction' research paper, Game of Drones is fiction from start to finish (except for the concluding paragraph where the fictional nature of the paper is revealed). In this paper we review the scope of design fiction in HCI research before contrasting the properties of imaginary abstracts with the properties of our example fictional research paper. We argue that there are clear merits and weaknesses to both approaches, but when used tactfully and carefully fictional research papers may further empower HCI's burgeoning design discourse with compelling new methods.", "title": "" }, { "docid": "neg:1840088_1", "text": "Conventional security exploits have relied on overwriting the saved return pointer on the stack to hijack the path of execution. Under Sun Microsystem’s Sparc processor architecture, we were able to implement a kernel modification to transparently and automatically guard applications’ return pointers. Our implementation called StackGhost under OpenBSD 2.8 acts as a ghost in the machine. StackGhost advances exploit prevention in that it protects every application run on the system without their knowledge nor does it require their source or binary modification. We will document several of the methods devised to preserve the sanctity of the system and will explore the performance ramifications of StackGhost.", "title": "" }, { "docid": "neg:1840088_2", "text": "This essay considers dynamic security design and corporate financing, with particular emphasis on informational micro-foundations. The central idea is that firm insiders must retain an appropriate share of firm risk, either to align their incentives with those of outside investors (moral hazard) or to signal favorable information about the quality of the firm’s assets. Informational problems lead to inevitable inefficiencies imperfect risk sharing, the possibility of bankruptcy, investment distortions, etc. The design of contracts that minimize these inefficiencies is a central question. This essay explores the implications of dynamic security design on firm operations and asset prices.", "title": "" }, { "docid": "neg:1840088_3", "text": "Machine Learning (ML) models are applied in a variety of tasks such as network intrusion detection or malware classification. Yet, these models are vulnerable to a class of malicious inputs known as adversarial examples. These are slightly perturbed inputs that are classified incorrectly by the ML model. The mitigation of these adversarial inputs remains an open problem. As a step towards a model-agnostic defense against adversarial examples, we show that they are not drawn from the same distribution than the original data, and can thus be detected using statistical tests. 
As the number of malicious points included in samples presented to the test diminishes, its detection confidence decreases. Hence, we introduce a complementary approach to identify specific inputs that are adversarial among sets of inputs flagged by the statistical test. Specifically, we augment our ML model with an additional output, in which the model is trained to classify all adversarial inputs. We evaluate our approach on multiple adversarial example crafting methods (including the fast gradient sign and Jacobian-based saliency map methods) with several datasets. The statistical test flags sample sets containing adversarial inputs with confidence above 80%. Furthermore, our augmented model either detects adversarial examples with high accuracy (> 80%) or increases the adversary’s cost—the perturbation added—by more than 150%. In this way, we show that statistical properties of adversarial examples are essential to their detection.", "title": "" }, { "docid": "neg:1840088_4", "text": "Protein folding is a complex process that can lead to disease when it fails. Especially poorly understood are the very early stages of protein folding, which are likely defined by intrinsic local interactions between amino acids close to each other in the protein sequence. We here present EFoldMine, a method that predicts, from the primary amino acid sequence of a protein, which amino acids are likely involved in early folding events. The method is based on early folding data from hydrogen deuterium exchange (HDX) data from NMR pulsed labelling experiments, and uses backbone and sidechain dynamics as well as secondary structure propensities as features. The EFoldMine predictions give insights into the folding process, as illustrated by a qualitative comparison with independent experimental observations. Furthermore, on a quantitative proteome scale, the predicted early folding residues tend to become the residues that interact the most in the folded structure, and they are often residues that display evolutionary covariation. The connection of the EFoldMine predictions with both folding pathway data and the folded protein structure suggests that the initial statistical behavior of the protein chain with respect to local structure formation has a lasting effect on its subsequent states.", "title": "" }, { "docid": "neg:1840088_5", "text": "Hardware accelerators are being increasingly deployed to boost the performance and energy efficiency of deep neural network (DNN) inference. In this paper we propose Thundervolt, a new framework that enables aggressive voltage underscaling of high-performance DNN accelerators without compromising classification accuracy even in the presence of high timing error rates. Using post-synthesis timing simulations of a DNN accelerator modeled on the Google TPU, we show that Thundervolt enables between 34%-57% energy savings on state-of-the-art speech and image recognition benchmarks with less than 1% loss in classification accuracy and no performance loss. Further, we show that Thundervolt is synergistic with and can further increase the energy efficiency of commonly used run-time DNN pruning techniques like Zero-Skip.", "title": "" }, { "docid": "neg:1840088_6", "text": "This paper presents the implementation of a modified state observer-based adaptive dynamic inverse controller for the Black Kite micro aerial vehicle. The pitch and velocity adaptations are computed by the modified state observer in the presence of turbulence to simulate atmospheric conditions.
This state observer uses the estimation error to generate the adaptations and, hence, is more robust than model reference adaptive controllers which use modeling or tracking error. In prior work, a traditional proportional-integral-derivative control law was tested in simulation for its adaptive capability in the longitudinal dynamics of the Black Kite micro aerial vehicle. This controller tracks the altitude and velocity commands during normal conditions, but fails in the presence of both parameter uncertainties and system failures. The modified state observer-based adaptations, along with the proportional-integral-derivative controller enables tracking despite these conditions. To simulate flight of the micro aerial vehicle with turbulence, a Dryden turbulence model is included. The turbulence levels used are based on the absolute load factor experienced by the aircraft. The length scale was set to 2.0 meters with a turbulence intensity of 5.0 m/s that generates a moderate turbulence. Simulation results for various flight conditions show that the modified state observer-based adaptations were able to adapt to the uncertainties and the controller tracks the commanded altitude and velocity. The summary of results for all of the simulated test cases and the response plots of various states for typical flight cases are presented.", "title": "" }, { "docid": "neg:1840088_7", "text": "Ishaq, O. 2016. Image Analysis and Deep Learning for Applications in Microscopy. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1371. 76 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-554-9567-1. Quantitative microscopy deals with the extraction of quantitative measurements from samples observed under a microscope. Recent developments in microscopy systems, sample preparation and handling techniques have enabled high throughput biological experiments resulting in large amounts of image data, at biological scales ranging from subcellular structures such as fluorescently tagged nucleic acid sequences to whole organisms such as zebrafish embryos. Consequently, methods and algorithms for automated quantitative analysis of these images have become increasingly important. These methods range from traditional image analysis techniques to use of deep learning architectures. Many biomedical microscopy assays result in fluorescent spots. Robust detection and precise localization of these spots are two important, albeit sometimes overlapping, areas for application of quantitative image analysis. We demonstrate the use of popular deep learning architectures for spot detection and compare them against more traditional parametric model-based approaches. Moreover, we quantify the effect of pre-training and change in the size of training sets on detection performance. Thereafter, we determine the potential of training deep networks on synthetic and semi-synthetic datasets and their comparison with networks trained on manually annotated real data. In addition, we present a two-alternative forced-choice based tool for assisting in manual annotation of real image data. On a spot localization track, we parallelize a popular compressed sensing based localization method and evaluate its performance in conjunction with different optimizers, noise conditions and spot densities. We investigate its sensitivity to different point spread function estimates. Zebrafish is an important model organism, attractive for whole-organism image-based assays for drug discovery campaigns. 
The effect of drug-induced neuronal damage may be expressed in the form of zebrafish shape deformation. First, we present an automated method for accurate quantification of tail deformations in multi-fish micro-plate wells using image analysis techniques such as illumination correction, segmentation, generation of branch-free skeletons of partial tail-segments and their fusion to generate complete tails. Later, we demonstrate the use of a deep learning-based pipeline for classifying micro-plate wells as either drug-affected or negative controls, resulting in competitive performance, and compare the performance from deep learning against that from traditional image analysis approaches.", "title": "" }, { "docid": "neg:1840088_8", "text": "The pathogenesis underlying many neurodegenerative diseases remains incompletely understood. The lack of effective biomarkers and disease preventative medicine demands the development of new techniques to efficiently probe the mechanisms of disease and to detect early biomarkers predictive of disease onset. Raman spectroscopy is an established technique that allows the label-free fingerprinting and imaging of molecules based on their chemical constitution and structure. While analysis of isolated biological molecules has been widespread in the chemical community, applications of Raman spectroscopy to study clinically relevant biological species, disease pathogenesis, and diagnosis have been rapidly increasing since the past decade. The growing number of biomedical applications has shown the potential of Raman spectroscopy for detection of novel biomarkers that could enable the rapid and accurate screening of disease susceptibility and onset. Here we provide an overview of Raman spectroscopy and related techniques and their application to neurodegenerative diseases. We further discuss their potential utility in research, biomarker detection, and diagnosis. Challenges to routine use of Raman spectroscopy in the context of neuroscience research are also presented.", "title": "" }, { "docid": "neg:1840088_9", "text": "INTRODUCTION: Back in time dentists used to place implants in locations with sufficient bone-dimensions only, with less regard to placement of final definitive restoration but most of the times, the placement of implant is not as accurate as intended and even a minor variation in comparison to ideal placement causes difficulties in fabrication of final prosthesis. The use of bone substitutes and membranes is now one of the standard therapeutic approaches. In order to accelerate healing of bone graft over the bony defect, numerous techniques utilizing platelet and fibrinogen concentrates have been introduced in the literature. OBJECTIVES: This study was designed to evaluate the efficacy of using Autologous Concentrated Growth Factors (CGF) Enriched Bone Graft Matrix (Sticky Bone) and CGF-Enriched Fibrin Membrane in management of dehiscence defect around dental implant in narrow maxillary anterior ridge. MATERIALS AND METHODS: Eleven DIO implants were inserted in six adult patients presenting an upper alveolar ridge width of less than 4mm determined by cone beam computed tomography (CBCT). After implant placement, the resultant vertical labial dehiscence defect was augmented utilizing Sticky Bone and CGF-Enriched Fibrin Membrane. Three CBCTs were made, pre-operatively, immediately postoperatively and six-months post-operatively. The change in vertical defect size was calculated radiographically then statistically analyzed.
RESULTS: Vertical dehiscence defect was sufficiently recovered in 5 implant-sites while in the other 6 sites it was decreased to a mean value of 1.25 mm ± 0.69 SD, i.e., the defect coverage in 6 implants occurred with a mean value of 4.59 mm ± 0.49 SD. Also the results of the present study showed that the mean of average implant stability was 59.89 mm ± 3.92. CONCLUSIONS: The combination of PRF mixed with CGF with bone graft (allograft) can increase the quality (density) of the newly formed bone and enhance the rate of new bone formation.", "title": "" }, { "docid": "neg:1840088_10", "text": "Action recognition and human pose estimation are closely related but both problems are generally handled as distinct tasks in the literature. In this work, we propose a multitask framework for joint 2D and 3D pose estimation from still images and human action recognition from video sequences. We show that a single architecture can be used to solve the two problems in an efficient way and still achieves state-of-the-art results. Additionally, we demonstrate that optimization from end-to-end leads to significantly higher accuracy than separated learning. The proposed architecture can be trained with data from different categories simultaneously in a seamless way. The reported results on four datasets (MPII, Human3.6M, Penn Action and NTU) demonstrate the effectiveness of our method on the targeted tasks.", "title": "" }, { "docid": "neg:1840088_11", "text": "This paper presents BOOM version 2, an updated version of the Berkeley Out-of-Order Machine first presented in [3]. The design exploration was performed through synthesis, place and route using the foundry-provided standard-cell library and the memory compiler in the TSMC 28 nm HPM process (high performance mobile). BOOM is an open-source processor that implements the RV64G RISC-V Instruction Set Architecture (ISA). Like most contemporary high-performance cores, BOOM is superscalar (able to execute multiple instructions per cycle) and out-of-order (able to execute instructions as their dependencies are resolved and not restricted to their program order). BOOM is implemented as a parameterizable generator written using the Chisel hardware construction language [2] that can be used to generate synthesizable implementations targeting both FPGAs and ASICs. BOOMv2 is an update in which the design effort has been informed by analysis of synthesized, placed and routed data provided by a contemporary industrial tool flow. We also had access to standard single- and dual-ported memory compilers provided by the foundry, allowing us to explore design trade-offs using different SRAM memories and comparing against synthesized flip-flop arrays. The main distinguishing features of BOOMv2 include an updated 3-stage front-end design with a bigger set-associative Branch Target Buffer (BTB); a pipelined register rename stage; split floating point and integer register files; a dedicated floating point pipeline; separate issue windows for floating point, integer, and memory micro-operations; and separate stages for issue-select and register read. Managing the complexity of the register file was the largest obstacle to improving BOOM’s clock frequency. We spent considerable effort on placing-and-routing a semi-custom 9-port register file to explore the potential improvements over a fully synthesized design, in conjunction with microarchitectural techniques to reduce the size and port count of the register file.
BOOMv2 has a 37 fanout-of-four (FO4) inverter delay after synthesis and 50 FO4 after place-and-route, a 24% reduction from BOOMv1’s 65 FO4 after place-and-route. Unfortunately, instruction per cycle (IPC) performance drops up to 20%, mostly due to the extra latency between load instructions and dependent instructions. However, the new BOOMv2 physical design paves the way for IPC recovery later.", "title": "" }, { "docid": "neg:1840088_12", "text": "Mining frequent itemsets and association rules is a popular and well researched approach for discovering interesting relationships between variables in large databases. The R package arules presented in this paper provides a basic infrastructure for creating and manipulating input data sets and for analyzing the resulting itemsets and rules. The package also includes interfaces to two fast mining algorithms, the popular C implementations of Apriori and Eclat by Christian Borgelt. These algorithms can be used to mine frequent itemsets, maximal frequent itemsets, closed frequent itemsets and association rules.", "title": "" }, { "docid": "neg:1840088_13", "text": "In this paper, we present a systematic framework for recognizing realistic actions from videos “in the wild”. Such unconstrained videos are abundant in personal collections as well as on the Web. Recognizing action from such videos has not been addressed extensively, primarily due to the tremendous variations that result from camera motion, background clutter, changes in object appearance, and scale, etc. The main challenge is how to extract reliable and informative features from the unconstrained videos. We extract both motion and static features from the videos. Since the raw features of both types are dense yet noisy, we propose strategies to prune these features. We use motion statistics to acquire stable motion features and clean static features. Furthermore, PageRank is used to mine the most informative static features. In order to further construct compact yet discriminative visual vocabularies, a divisive information-theoretic algorithm is employed to group semantically related features. Finally, AdaBoost is chosen to integrate all the heterogeneous yet complementary features for recognition. We have tested the framework on the KTH dataset and our own dataset consisting of 11 categories of actions collected from YouTube and personal videos, and have obtained impressive results for action recognition and action localization.", "title": "" }, { "docid": "neg:1840088_14", "text": "In this paper we present a wearable Haptic Feedback Device to convey intuitive motion direction to the user through haptic feedback based on vibrotactile illusions. Vibrotactile illusions occur on the skin when two or more vibrotactile actuators in proximity are actuated in coordinated sequence, causing the user to feel combined sensations, instead of separate ones. By combining these illusions we can produce various sensation patterns that are discernible by the user, thus allowing us to convey different information with each pattern. A method to provide information about direction through vibrotactile illusions is introduced in this paper. This method uses a grid of vibrotactile actuators around the arm actuated in coordination. The sensation felt on the skin is consistent with the desired direction of motion, so the desired motion can be intuitively understood.
We show that the users can recognize the conveyed direction, and implemented a proof of concept of the proposed method to guide users' elbow flexion/extension motion.", "title": "" }, { "docid": "neg:1840088_15", "text": "In the article by Powers et al, “2018 Guidelines for the Early Management of Patients With Acute Ischemic Stroke: A Guideline for Healthcare Professionals From the American Heart Association/American Stroke Association,” which published ahead of print January 24, 2018, and appeared in the March 2018 issue of the journal (Stroke. 2018;49:e46–e110. DOI: 10.1161/ STR.0000000000000158), a few corrections were needed. 1. On page e46, the text above the byline read: Reviewed for evidence-based integrity and endorsed by the American Association of Neurological Surgeons and Congress of Neurological Surgeons Endorsed by the Society for Academic Emergency Medicine It has been updated to read: Reviewed for evidence-based integrity and endorsed by the American Association of Neurological Surgeons and Congress of Neurological Surgeons Endorsed by the Society for Academic Emergency Medicine and Neurocritical Care Society The American Academy of Neurology affirms the value of this guideline as an educational tool for neurologists. 2. On page e60, in the section “2.2. Brain Imaging,” in the knowledge byte text below recommendation 12: • The seventh sentence read, “Therefore, only the eligibility criteria from these trials should be used for patient selection.” It has been updated to read, “Therefore, only the eligibility criteria from one or the other of these trials should be used for patient selection.” • The eighth sentence read, “...at this time, the DAWN and DEFUSE 3 eligibility should be strictly adhered to in clinical practice.” It has been updated to read, “...at this time, the DAWN or DEFUSE 3 eligibility should be strictly adhered to in clinical practice.” 3. On page e73, in the section “3.7. Mechanical Thrombectomy,” recommendation 8 read, “In selected patients with AIS within 6 to 24 hours....” It has been updated to read, “In selected patients with AIS within 16 to 24 hours....” 4. On page e73, in the section “3.7. Mechanical Thrombectomy,” in the knowledge byte text below recommendation 8: • The seventh sentence read, “Therefore, only the eligibility criteria from these trials should be used for patient selection.” It has been updated to read, “Therefore, only the eligibility criteria from one or the other of these trials should be used for patient selection.” • The eighth sentence read, “...at this time, the DAWN and DEFUSE-3 eligibility should be strictly adhered to in clinical practice.” It has been updated to read, “...at this time, the DAWN or DEFUSE-3 eligibility should be strictly adhered to in clinical practice.” 5. On page e76, in the section “3.10. Anticoagulants,” in the knowledge byte text below recommendation 1, the third sentence read, “...(LMWH, 64.2% versus aspirin, 6.52%; P=0.33).” It has been updated to read, “...(LMWH, 64.2% versus aspirin, 62.5%; P=0.33).” These corrections have been made to the current online version of the article, which is available at http://stroke.ahajournals.org/lookup/doi/10.1161/STR.0000000000000158. Correction", "title": "" }, { "docid": "neg:1840088_16", "text": "This paper presents an annotation scheme for events that negatively or positively affect entities (benefactive/malefactive events) and for the attitude of the writer toward their agents and objects. 
Work on opinion and sentiment tends to focus on explicit expressions of opinions. However, many attitudes are conveyed implicitly, and benefactive/malefactive events are important for inferring implicit attitudes. We describe an annotation scheme and give the results of an inter-annotator agreement study. The annotated corpus is available online.", "title": "" }, { "docid": "neg:1840088_17", "text": "The use of drones in agriculture is becoming more and more popular. The paper presents a novel approach to distinguish between different field's plowing techniques by means of an RGB-D sensor. The presented system can be easily integrated in commercially available Unmanned Aerial Vehicles (UAVs). In order to successfully classify the plowing techniques, two different measurement algorithms have been developed. Experimental tests show that the proposed methodology is able to provide a good classification of the field's plowing depths.", "title": "" }, { "docid": "neg:1840088_18", "text": "Mobile payments will gain significant traction in the coming years as the mobile and payment technologies mature and become widely available. Various technologies are competing to become the established standards for physical and virtual mobile payments, yet it is ultimately the users who will determine the level of success of the technologies through their adoption. Only if it becomes easier and cheaper to transact business using mobile payment applications than by using conventional methods will they become popular, either with users or providers. This document is a state of the art review of mobile payment technologies. It covers all of the technologies involved in a mobile payment solution, including mobile networks in section 2, mobile services in section 3, mobile platforms in section 4, mobile commerce in section 5 and different mobile payment solutions in sections 6 to 8.", "title": "" }, { "docid": "neg:1840088_19", "text": "Deaths due to prescription and illicit opioid overdose have been rising at an alarming rate, particularly in the USA. Although naloxone injection is a safe and effective treatment for opioid overdose, it is frequently unavailable in a timely manner due to legal and practical restrictions on its use by laypeople. As a result, an effort spanning decades has resulted in the development of strategies to make naloxone available for layperson or \"take-home\" use. This has included the development of naloxone formulations that are easier to administer for nonmedical users, such as intranasal and autoinjector intramuscular delivery systems, efforts to distribute naloxone to potentially high-impact categories of nonmedical users, as well as efforts to reduce regulatory barriers to more widespread distribution and use. Here we review the historical and current literature on the efficacy and safety of naloxone for use by nonmedical persons, provide an evidence-based discussion of the controversies regarding the safety and efficacy of different formulations of take-home naloxone, and assess the status of current efforts to increase its public distribution. Take-home naloxone is safe and effective for the treatment of opioid overdose when administered by laypeople in a community setting, shortening the time to reversal of opioid toxicity and reducing opioid-related deaths. 
Complementary strategies have together shown promise for increased dissemination of take-home naloxone, including 1) provision of education and training; 2) distribution to critical populations such as persons with opioid addiction, family members, and first responders; 3) reduction of prescribing barriers to access; and 4) reduction of legal recrimination fears as barriers to use. Although there has been considerable progress in decreasing the regulatory and legal barriers to effective implementation of community naloxone programs, significant barriers still exist, and much work remains to be done to integrate these programs into efforts to provide effective treatment of opioid use disorders.", "title": "" } ]
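One of the passages in the list above (neg:1840088_3) argues that adversarial inputs can be flagged with statistical hypothesis tests applied to groups of samples. Purely as an illustration of that general idea, and not of that paper's actual procedure, the sketch below computes a kernel maximum mean discrepancy (MMD) between a clean reference set and an incoming batch; the bandwidth and threshold values are invented placeholders that would normally be calibrated (e.g. with a permutation test).

```python
import numpy as np

def rbf_kernel(a, b, gamma):
    """Gaussian (RBF) kernel matrix between the rows of a and b."""
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def mmd2(x, y, gamma=0.1):
    """Biased estimate of the squared maximum mean discrepancy between samples x and y."""
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

# Hypothetical usage: flag a batch whose distribution drifts away from clean reference data.
rng = np.random.default_rng(0)
reference = rng.normal(size=(200, 32))            # stand-in for known-clean inputs
batch = rng.normal(loc=0.4, size=(200, 32))       # stand-in for a possibly perturbed batch
THRESHOLD = 0.05                                  # placeholder; calibrate on held-out clean data
print("suspicious batch" if mmd2(reference, batch) > THRESHOLD else "looks clean")
```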
1840089
Towards a Semantic Driven Framework for Smart Grid Applications: Model-Driven Development Using CIM, IEC 61850 and IEC 61499
[ { "docid": "pos:1840089_0", "text": "openview network node manager designing and implementing an enterprise solution PDF high availability in websphere messaging solutions PDF designing web interfaces principles and patterns for rich interactions PDF pivotal certified spring enterprise integration specialist exam a study guide PDF active directory designing deploying and running active directory PDF application architecture for net designing applications and services patterns & practices PDF big data analytics from strategic planning to enterprise integration with tools techniques nosql and graph PDF designing and building security operations center PDF patterns of enterprise application architecture PDF java ee and net interoperability integration strategies patterns and best practices PDF making healthy places designing and building for health well-being and sustainability PDF architectural ceramics for the studio potter designing building installing PDF xml for data architects designing for reuse and integration the morgan kaufmann series in data management systems PDF", "title": "" }, { "docid": "pos:1840089_1", "text": "This is the second part of a two-part paper that has arisen from the work of the IEEE Power Engineering Society's Multi-Agent Systems (MAS) Working Group. Part I of this paper examined the potential value of MAS technology to the power industry, described fundamental concepts and approaches within the field of multi-agent systems that are appropriate to power engineering applications, and presented a comprehensive review of the power engineering applications for which MAS are being investigated. It also defined the technical issues which must be addressed in order to accelerate and facilitate the uptake of the technology within the power and energy sector. Part II of this paper explores the decisions inherent in engineering multi-agent systems for applications in the power and energy sector and offers guidance and recommendations on how MAS can be designed and implemented. Given the significant and growing interest in this field, it is imperative that the power engineering community considers the standards, tools, supporting technologies, and design methodologies available to those wishing to implement a MAS solution for a power engineering problem. This paper describes the various options available and makes recommendations on best practice. It also describes the problem of interoperability between different multi-agent systems and proposes how this may be tackled.", "title": "" }, { "docid": "pos:1840089_2", "text": "Model-driven engineering technologies offer a promising approach to address the inability of third-generation languages to alleviate the complexity of platforms and express domain concepts effectively.", "title": "" }, { "docid": "pos:1840089_3", "text": "Flexible intelligent electronic devices (IEDs) are highly desirable to support free allocation of function to IED by means of software reconfiguration without any change of hardware. The application of generic hardware platforms and component-based software technology seems to be a good solution. Due to the advent of IEC 61850, generic hardware platforms with a standard communication interface can be used to implement different kinds of functions with high flexibility. The remaining challenge is the unified function model that specifies various software components with appropriate granularity and provides a framework to integrate them efficiently. 
This paper proposes the function-block (FB)-based function model for flexible IEDs. The standard FBs are established by combining the IEC 61850 model and the IEC 61499 model. The design of a simplified distance protection IED using standard FBs is described and investigated. The testing results of the prototype system in MATLAB/Simulink demonstrate the feasibility and flexibility of FB-based IEDs.", "title": "" }, { "docid": "pos:1840089_4", "text": "This paper presents a new approach to power system automation, based on distributed intelligence rather than traditional centralized control. The paper investigates the interplay between two international standards, IEC 61850 and IEC 61499, and proposes a way of combining of the application functions of IEC 61850-compliant devices with IEC 61499-compliant “glue logic,” using the communication services of IEC 61850-7-2. The resulting ability to customize control and automation logic will greatly enhance the flexibility and adaptability of automation systems, speeding progress toward the realization of the smart grid concept.", "title": "" } ]
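To make the function-block composition idea in the passages above (pos:1840089_3 and pos:1840089_4) a little more concrete, here is a deliberately simplified, non-standard sketch of an event-driven block in Python. It only illustrates the separation of event flow, data flow and an internal algorithm; it is not IEC 61499- or IEC 61850-compliant code, and all names and values are invented.

```python
class FunctionBlock:
    """Minimal event-driven block: an incoming event triggers an algorithm that reads
    data inputs and publishes data outputs plus a follow-on event (illustrative only)."""

    def __init__(self, name):
        self.name = name
        self.data_in = {}
        self.data_out = {}
        self._connections = []   # (source_event, target_block, target_event)

    def connect(self, source_event, target_block, target_event):
        self._connections.append((source_event, target_block, target_event))

    def receive_event(self, event):
        self.execute(event)
        for src_ev, block, tgt_ev in self._connections:
            if src_ev == event:
                block.data_in.update(self.data_out)   # crude stand-in for data connections
                block.receive_event(tgt_ev)

    def execute(self, event):
        raise NotImplementedError


class OvercurrentCheck(FunctionBlock):
    # Hypothetical protection-style block: asserts a trip flag above a pickup setting.
    def execute(self, event):
        self.data_out["trip"] = self.data_in.get("current", 0.0) > self.data_in.get("pickup", 1.2)


class TripLogger(FunctionBlock):
    def execute(self, event):
        print(f"{self.name}: trip = {self.data_in.get('trip')}")


# Usage sketch: wire two blocks together and fire one event.
check, logger = OvercurrentCheck("OC"), TripLogger("LOG")
check.connect("REQ", logger, "REQ")
check.data_in.update({"current": 1.5, "pickup": 1.2})
check.receive_event("REQ")   # prints: LOG: trip = True
```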
[ { "docid": "neg:1840089_0", "text": "Phenotypically identical cells can dramatically vary with respect to behavior during their lifespan and this variation is reflected in their molecular composition such as the transcriptomic landscape. Single-cell transcriptomics using next-generation transcript sequencing (RNA-seq) is now emerging as a powerful tool to profile cell-to-cell variability on a genomic scale. Its application has already greatly impacted our conceptual understanding of diverse biological processes with broad implications for both basic and clinical research. Different single-cell RNA-seq protocols have been introduced and are reviewed here-each one with its own strengths and current limitations. We further provide an overview of the biological questions single-cell RNA-seq has been used to address, the major findings obtained from such studies, and current challenges and expected future developments in this booming field.", "title": "" }, { "docid": "neg:1840089_1", "text": "About one fourth of patients with bipolar disorders (BD) have depressive episodes with a seasonal pattern (SP) coupled to a more severe disease. However, the underlying genetic influence on a SP in BD remains to be identified. We studied 269 BD Caucasian patients, with and without SP, recruited from university-affiliated psychiatric departments in France and performed a genetic single-marker analysis followed by a gene-based analysis on 349 single nucleotide polymorphisms (SNPs) spanning 21 circadian genes and 3 melatonin pathway genes. A SP in BD was nominally associated with 14 SNPs identified in 6 circadian genes: NPAS2, CRY2, ARNTL, ARNTL2, RORA and RORB. After correcting for multiple testing, using a false discovery rate approach, the associations remained significant for 5 SNPs in NPAS2 (chromosome 2:100793045-100989719): rs6738097 (pc = 0.006), rs12622050 (pc = 0.006), rs2305159 (pc = 0.01), rs1542179 (pc = 0.01), and rs1562313 (pc = 0.02). The gene-based analysis of the 349 SNPs showed that rs6738097 (NPAS2) and rs1554338 (CRY2) were significantly associated with the SP phenotype (respective Empirical p-values of 0.0003 and 0.005). The associations remained significant for rs6738097 (NPAS2) after Bonferroni correction. The epistasis analysis between rs6738097 (NPAS2) and rs1554338 (CRY2) suggested an additive effect. Genetic variations in NPAS2 might be a biomarker for a seasonal pattern in BD.", "title": "" }, { "docid": "neg:1840089_2", "text": "The growth of vehicles in Yogyakarta Province, Indonesia is not proportional to the growth of roads. This problem causes severe traffic jam in many main roads. Common traffic anomalies detection using surveillance camera requires manpower and costly, while traffic anomalies detection with crowdsourcing mobile applications are mostly owned by private. This research aims to develop a real-time traffic classification by harnessing the power of social network data, Twitter. In this study, Twitter data are processed to the stages of preprocessing, feature extraction, and tweet classification. This study compares classification performance of three machine learning algorithms, namely Naive Bayes (NB), Support Vector Machine (SVM), and Decision Tree (DT). Experimental results show that SVM algorithm produced the best performance among the other algorithms with 99.77% and 99.87% of classification accuracy in balanced and imbalanced data, respectively. 
This research implies that social network service may be used as an alternative source for traffic anomalies detection by providing information of traffic flow condition in real-time.", "title": "" }, { "docid": "neg:1840089_3", "text": "A microwave duplexer with high isolation is presented in this paper. The device is based on triple-mode filters that are built using silver-plated ceramic cuboids. To create a six-pole, six-transmission-zero filter in the DCS-1800 band, which is utilized in mobile communications, two cuboids are cascaded. To shift spurious harmonics, low dielectric caps are placed on the cuboid faces. These caps push the first cuboid spurious up in frequency by around 340 MHz compared to the uncapped cuboid, allowing a 700-MHz spurious free window. To verify the design, a DCS-1800 duplexer with 75-MHz widebands is built. It achieves around 1 dB of insertion loss for both the receive and transmit ports with around 70 dB of mutual isolation within only 20-MHz band separation, using a volume of only 30 cm3 .", "title": "" }, { "docid": "neg:1840089_4", "text": "We present a technique for efficiently synthesizing images of atmospheric clouds using a combination of Monte Carlo integration and neural networks. The intricacies of Lorenz-Mie scattering and the high albedo of cloud-forming aerosols make rendering of clouds---e.g. the characteristic silverlining and the \"whiteness\" of the inner body---challenging for methods based solely on Monte Carlo integration or diffusion theory. We approach the problem differently. Instead of simulating all light transport during rendering, we pre-learn the spatial and directional distribution of radiant flux from tens of cloud exemplars. To render a new scene, we sample visible points of the cloud and, for each, extract a hierarchical 3D descriptor of the cloud geometry with respect to the shading location and the light source. The descriptor is input to a deep neural network that predicts the radiance function for each shading configuration. We make the key observation that progressively feeding the hierarchical descriptor into the network enhances the network's ability to learn faster and predict with higher accuracy while using fewer coefficients. We also employ a block design with residual connections to further improve performance. A GPU implementation of our method synthesizes images of clouds that are nearly indistinguishable from the reference solution within seconds to minutes. Our method thus represents a viable solution for applications such as cloud design and, thanks to its temporal stability, for high-quality production of animated content.", "title": "" }, { "docid": "neg:1840089_5", "text": "Twitter enables large populations of end-users of software to publicly share their experiences and concerns about software systems in the form of micro-blogs. Such data can be collected and classified to help software developers infer users' needs, detect bugs in their code, and plan for future releases of their systems. However, automatically capturing, classifying, and presenting useful tweets is not a trivial task. Challenges stem from the scale of the data available, its unique format, diverse nature, and high percentage of irrelevant information and spam. Motivated by these challenges, this paper reports on a three-fold study that is aimed at leveraging Twitter as a main source of software user requirements. The main objective is to enable a responsive, interactive, and adaptive data-driven requirements engineering process. 
Our analysis is conducted using 4,000 tweets collected from the Twitter feeds of 10 software systems sampled from a broad range of application domains. The results reveal that around 50% of collected tweets contain useful technical information. The results also show that text classifiers such as Support Vector Machines and Naive Bayes can be very effective in capturing and categorizing technically informative tweets. Additionally, the paper describes and evaluates multiple summarization strategies for generating meaningful summaries of informative software-relevant tweets.", "title": "" }, { "docid": "neg:1840089_6", "text": "This paper presents an artificial neural network(ANN) approach to electric load forecasting. The ANN is used to learn the relationship among past, current and future temperatures and loads. In order to provide the forecasted load, the ANN interpolates among the load and temperature data in a training data set. The average absolute errors of the one-hour and 24-hour ahead forecasts in our test on actual utility data are shown to be 1.40% and 2.06%, respectively. This compares with an average error of 4.22% for 24hour ahead forecasts with a currently used forecasting technique applied to the same data.", "title": "" }, { "docid": "neg:1840089_7", "text": "BACKGROUND\nChocolate consumption has long been associated with enjoyment and pleasure. Popular claims confer on chocolate the properties of being a stimulant, relaxant, euphoriant, aphrodisiac, tonic and antidepressant. The last claim stimulated this review.\n\n\nMETHOD\nWe review chocolate's properties and the principal hypotheses addressing its claimed mood altering propensities. We distinguish between food craving and emotional eating, consider their psycho-physiological underpinnings, and examine the likely 'positioning' of any effect of chocolate to each concept.\n\n\nRESULTS\nChocolate can provide its own hedonistic reward by satisfying cravings but, when consumed as a comfort eating or emotional eating strategy, is more likely to be associated with prolongation rather than cessation of a dysphoric mood.\n\n\nLIMITATIONS\nThis review focuses primarily on clarifying the possibility that, for some people, chocolate consumption may act as an antidepressant self-medication strategy and the processes by which this may occur.\n\n\nCONCLUSIONS\nAny mood benefits of chocolate consumption are ephemeral.", "title": "" }, { "docid": "neg:1840089_8", "text": "Sampling is a core process for a variety of graphics applications. Among existing sampling methods, blue noise sampling remains popular thanks to its spatial uniformity and absence of aliasing artifacts. However, research so far has been mainly focused on blue noise sampling with a single class of samples. This could be insufficient for common natural as well as man-made phenomena requiring multiple classes of samples, such as object placement, imaging sensors, and stippling patterns.\n We extend blue noise sampling to multiple classes where each individual class as well as their unions exhibit blue noise characteristics. We propose two flavors of algorithms to generate such multi-class blue noise samples, one extended from traditional Poisson hard disk sampling for explicit control of sample spacing, and another based on our soft disk sampling for explicit control of sample count. Our algorithms support uniform and adaptive sampling, and are applicable to both discrete and continuous sample space in arbitrary dimensions. 
We study characteristics of samples generated by our methods, and demonstrate applications in object placement, sensor layout, and color stippling.", "title": "" }, { "docid": "neg:1840089_9", "text": "The taxonomical relationship of Cylindrospermopsis raciborskii and Raphidiopsis mediterranea was studied by morphological and 16S rRNA gene diversity analyses of natural populations from Lake Kastoria, Greece. Samples were obtained during a bloom (23,830 trichomes mL ) in August 2003. A high diversity of apical cell, trichome, heterocyte and akinete morphology, trichome fragmentation and reproduction was observed. Trichomes were grouped into three dominant morphotypes: the typical and the non-heterocytous morphotype of C. raciborskii and the typical morphotype of R. mediterranea. A morphometric comparison of the dominant morphotypes showed significant differences in mean values of cell and trichome sizes despite the high overlap in the range of the respective size values. Additionally, two new morphotypes representing developmental stages of the species are described while a new mode of reproduction involving a structurally distinct reproductive cell is described for the first time in planktic Nostocales. A putative life-cycle, common for C. raciborskii and R. mediterranea is proposed revealing that trichome reproduction of R. mediterranea gives rise both to R. mediterranea and C. raciborskii non-heterocytous morphotypes. The phylogenetic analysis of partial 16S rRNA gene (ca. 920 bp) of the co-existing Cylindrospermopsis and Raphidiopsis morphotypes revealed only one phylotype which showed 99.54% similarity to R. mediterranea HB2 (China) and 99.19% similarity to C. raciborskii form 1 (Australia). We propose that all morphotypes comprised stages of the life cycle of C. raciborkii whereas R. mediterranea from Lake Kastoria (its type locality) represents non-heterocytous stages of Cylindrospermopsis complex life cycle. 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840089_10", "text": "This introductory chapter reviews the emergence, classification, and contemporary examples of cultural robots: social robots that are shaped by, producers of, or participants in culture. We review the emergence of social robotics as a field, and then track early references to the terminology and key lines of inquiry of Cultural Robotics. Four categories of the integration of culture with robotics are outlined; and the content of the contributing chapters following this introductory chapter are summarised within these categories.", "title": "" }, { "docid": "neg:1840089_11", "text": "Gene set analysis is moving towards considering pathway topology as a crucial feature. Pathway elements are complex entities such as protein complexes, gene family members and chemical compounds. The conversion of pathway topology to a gene/protein networks (where nodes are a simple element like a gene/protein) is a critical and challenging task that enables topology-based gene set analyses. Unfortunately, currently available R/Bioconductor packages provide pathway networks only from single databases. They do not propagate signals through chemical compounds and do not differentiate between complexes and gene families. Here we present graphite, a Bioconductor package addressing these issues. 
Pathway information from four different databases is interpreted following specific biologically-driven rules that allow the reconstruction of gene-gene networks taking into account protein complexes, gene families and sensibly removing chemical compounds from the final graphs. The resulting networks represent a uniform resource for pathway analyses. Indeed, graphite provides easy access to three recently proposed topological methods. The graphite package is available as part of the Bioconductor software suite. graphite is an innovative package able to gather and make easily available the contents of the four major pathway databases. In the field of topological analysis graphite acts as a provider of biological information by reducing the pathway complexity considering the biological meaning of the pathway elements.", "title": "" }, { "docid": "neg:1840089_12", "text": "Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we avoid the need for an intermediate classification step. Our method uses a kernelised structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow our tracker to run at high frame rates, we (a) introduce a budgeting mechanism that prevents the unbounded growth in the number of support vectors that would otherwise occur during tracking, and (b) show how to implement tracking on the GPU. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased tracking performance.", "title": "" }, { "docid": "neg:1840089_13", "text": "PIECE (Plant Intron Exon Comparison and Evolution) is a web-accessible database that houses intron and exon information of plant genes. PIECE serves as a resource for biologists interested in comparing intron-exon organization and provides valuable insights into the evolution of gene structure in plant genomes. Recently, we updated PIECE to a new version, PIECE 2.0 (http://probes.pw.usda.gov/piece or http://aegilops.wheat.ucdavis.edu/piece). PIECE 2.0 contains annotated genes from 49 sequenced plant species as compared to 25 species in the previous version. 
In the current version, we also added several new features: (i) a new viewer was developed to show phylogenetic trees displayed along with the structure of individual genes; (ii) genes in the phylogenetic tree can now be also grouped according to KOG (The annotation of Eukaryotic Orthologous Groups) and KO (KEGG Orthology) in addition to Pfam domains; (iii) information on intronless genes are now included in the database; (iv) a statistical summary of global gene structure information for each species and its comparison with other species was added; and (v) an improved GSDraw tool was implemented in the web server to enhance the analysis and display of gene structure. The updated PIECE 2.0 database will be a valuable resource for the plant research community for the study of gene structure and evolution.", "title": "" }, { "docid": "neg:1840089_14", "text": "A compact Substrate Integrated Waveguide (SIW) Leaky-Wave Antenna (LWA) is proposed. Internal vias are inserted in the SIW in order to have narrow walls, and so reducing the size of the SIW-LWA, the new structure is called Slow Wave - Substrate Integrated Waveguide - Leaky Wave Antenna (SW-SIW-LWA), since inserting the vias induce the SW effect. After designing the antenna and simulating with HFSS a reduction of 30% of the transverse side of the antenna is attained while maintaining an acceptable gain. Other parameters like the radiation efficiency, Gain, directivity, and radiation pattern are analyzed. Finally a Comparison of our miniaturization technique with Half-Mode Substrate Integrated Waveguide (HMSIW) technique realized in recent articles is done, shows that SW-SIW-LWA technique could be a good candidate for SIW miniaturization.", "title": "" }, { "docid": "neg:1840089_15", "text": "In this paper, hand dorsal images acquired under infrared light are used to design an accurate personal authentication system. Each of the image is segmented into palm dorsal and fingers which are subsequently used to extract palm dorsal veins and infrared hand geometry features respectively. A new quality estimation algorithm is proposed to estimate the quality of palm dorsal which assigns low values to the pixels containing hair or skin texture. Palm dorsal is enhanced using filtering. For vein extraction, information provided by the enhanced image and the vein quality is consolidated using a variational approach. The proposed vein extraction can handle the issues of hair, skin texture and variable width veins so as to extract the genuine veins accurately. Several post processing techniques are introduced in this paper for accurate feature extraction of infrared hand geometry features. Matching scores are obtained by matching palm dorsal veins and infrared hand geometry features. These are eventually fused for authentication. For performance evaluation, a database of 1500 hand images acquired from 300 different hands is created. Experimental results demonstrate the superiority of the proposed system over existing", "title": "" }, { "docid": "neg:1840089_16", "text": "Dynamic imaging is a recently proposed action description paradigm for simultaneously capturing motion and temporal evolution information, particularly in the context of deep convolutional neural networks (CNNs). Compared with optical flow for motion characterization, dynamic imaging exhibits superior efficiency and compactness. Inspired by the success of dynamic imaging in RGB video, this study extends it to the depth domain. 
To better exploit three-dimensional (3D) characteristics, multi-view dynamic images are proposed. In particular, the raw depth video is densely projected with respect to different virtual imaging viewpoints by rotating the virtual camera within the 3D space. Subsequently, dynamic images are extracted from the obtained multi-view depth videos and multi-view dynamic images are thus constructed from these images. Accordingly, more view-tolerant visual cues can be involved. A novel CNN model is then proposed to perform feature learning on multi-view dynamic images. Particularly, the dynamic images from different views share the same convolutional layers but correspond to different fully connected layers. This is aimed at enhancing the tuning effectiveness on shallow convolutional layers by alleviating the gradient vanishing problem. Moreover, as the spatial occurrence variation of the actions may impair the CNN, an action proposal approach is also put forth. In experiments, the proposed approach can achieve state-of-the-art performance on three challenging datasets.", "title": "" }, { "docid": "neg:1840089_17", "text": "Bowen's disease is a squamous cell carcinoma in situ and has the potential to progress to a squamous cell carcinoma. The authors treated two female patients (a 39-year-old and a 41-year-old) with Bowen's disease in the vulva area using topical photodynamic therapy (PDT), involving the use of 5-aminolaevulinic acid and a light-emitting diode device. The light was administered at an intensity of 80 mW/cm(2) for a dose of 120 J/cm(2) biweekly for 6 cycles. The 39-year-old patient showed excellent clinical improvement, but the other patient achieved only a partial response. Even though one patient underwent a total excision 1 year later due to recurrence, both patients were satisfied with the cosmetic outcomes of this therapy and the partial improvement over time. The common side effect of PDT was a stinging sensation. PDT provides a relatively effective and useful alternative treatment for Bowen's disease in the vulva area.", "title": "" }, { "docid": "neg:1840089_18", "text": "The phrase Internet of Things (IoT) heralds a vision of the future Internet where connecting physical things, from banknotes to bicycles, through a network will let them take an active part in the Internet, exchanging information about themselves and their surroundings. This will give immediate access to information about the physical world and the objects in it leading to innovative services and increase in efficiency and productivity. This paper studies the state-of-the-art of IoT and presents the key technological drivers, potential applications, challenges and future research areas in the domain of IoT. IoT definitions from different perspective in academic and industry communities are also discussed and compared. Finally some major issues of future research in IoT are identified and discussed briefly.", "title": "" }, { "docid": "neg:1840089_19", "text": "During animal development, accurate control of tissue specification and growth are critical to generate organisms of reproducible shape and size.
The eye-antennal disc epithelium of Drosophila is a powerful model system to identify the signaling pathway and transcription factors that mediate and coordinate these processes. We show here that the Yorkie (Yki) pathway plays a major role in tissue specification within the developing fly eye disc epithelium at a time when organ primordia and regional identity domains are specified. RNAi-mediated inactivation of Yki, or its partner Scalloped (Sd), or increased activity of the upstream negative regulators of Yki cause a dramatic reorganization of the eye disc fate map leading to specification of the entire disc epithelium into retina. On the contrary, constitutive expression of Yki suppresses eye formation in a Sd-dependent fashion. We also show that knockdown of the transcription factor Homothorax (Hth), known to partner Yki in some developmental contexts, also induces an ectopic retina domain, that Yki and Scalloped regulate Hth expression, and that the gain-of-function activity of Yki is partially dependent on Hth. Our results support a critical role for Yki- and its partners Sd and Hth--in shaping the fate map of the eye epithelium independently of its universal role as a regulator of proliferation and survival.", "title": "" } ]
1840090
Coupled-Resonator Filters With Frequency-Dependent Couplings: Coupling Matrix Synthesis
[ { "docid": "pos:1840090_0", "text": "A method for the synthesis of multicoupled resonators filters with frequency-dependent couplings is presented. A circuit model of the filter that accurately represents the frequency responses over a very wide frequency band is postulated. The two-port parameters of the filter based on the circuit model are obtained by circuit analysis. The values of the circuit elements are synthesized by equating the two-port parameters obtained from the circuit analysis and the filtering characteristic function. Solutions similar to the narrowband case (where all the couplings are assumed frequency independent) are obtained analytically when all coupling elements are either inductive or capacitive. The synthesis technique is generalized to include all types of coupling elements. Several examples of wideband filters are given to demonstrate the synthesis techniques.", "title": "" } ]
[ { "docid": "neg:1840090_0", "text": "This work introduces the CASCADE error correction protocol and LDPC (Low-Density Parity Check) error correction codes which are both parity check based. We also give the results of computer simulations that are performed for comparing their performances (redundant information, success).", "title": "" }, { "docid": "neg:1840090_1", "text": "Currently, much of machine learning is opaque, just like a “black box”. However, in order for humans to understand, trust and effectively manage the emerging AI systems, an AI needs to be able to explain its decisions and conclusions. In this paper, I propose an argumentation-based approach to explainable AI, which has the potential to generate more comprehensive explanations than existing approaches.", "title": "" }, { "docid": "neg:1840090_2", "text": "The immunologic processes involved in Graves' disease (GD) have one unique characteristic--the autoantibodies to the TSH receptor (TSHR)--which have both linear and conformational epitopes. Three types of TSHR antibodies (stimulating, blocking, and cleavage) with different functional capabilities have been described in GD patients, which induce different signaling effects varying from thyroid cell proliferation to thyroid cell death. The establishment of animal models of GD by TSHR antibody transfer or by immunization with TSHR antigen has confirmed its pathogenic role and, therefore, GD is the result of a breakdown in TSHR tolerance. Here we review some of the characteristics of TSHR antibodies with a special emphasis on new developments in our understanding of what were previously called \"neutral\" antibodies and which we now characterize as autoantibodies to the \"cleavage\" region of the TSHR ectodomain.", "title": "" }, { "docid": "neg:1840090_3", "text": "A new kind of distributed power divider/combiner circuit for use in octave bandwidth (or more) microstrip power transistor amplifier is presented. The design, characteristics and advantages are discussed. Experimental results on a 4-way divider are presented and compared with theory.", "title": "" }, { "docid": "neg:1840090_4", "text": "We analyze the use, advantages, and drawbacks of graph kernels in chemoin-formatics, including a comparison of kernel-based approaches with other methodology, as well as examples of applications. Kernel-based machine learning [1], now widely applied in chemoinformatics, delivers state-of-the-art performance [2] in tasks like classification and regression. Molecular graph kernels [3] are a recent development where kernels are defined directly on the molecular structure graph. This allows the adaptation of methods from graph theory to structure graphs and their direct use with kernel learning algorithms. The main advantage of kernel learning, the so-called “kernel trick”, allows for a systematic, computationally feasible, and often globally optimal search for non-linear patterns, as well as the direct use of non-numerical inputs such as strings and graphs. A drawback is that solutions are expressed indirectly in terms of similarity to training samples, and runtimes that are typically quadratic or cubic in the number of training samples. Graph kernels [3] are positive semidefinite functions defined directly on graphs. The most important types are based on random walks, subgraph patterns, optimal assignments, and graphlets. 
Molecular structure graphs have strong properties that can be exploited [4], e.g., they are undirected, have no self-loops and no multiple edges, are connected (except for salts), annotated, often planar in the graph-theoretic sense, and their vertex degree is bounded by a small constant. In many applications, they are small. Many graph kernels are generalpurpose, some are suitable for structure graphs, and a few have been explicitly designed for them. We present three exemplary applications of the iterative similarity optimal assignment kernel [5], which was designed for the comparison of small structure graphs: The discovery of novel agonists of the peroxisome proliferator-activated receptor g [6] (ligand-based virtual screening), the estimation of acid dissociation constants [7] (quantitative structure-property relationships), and molecular de novo design [8].", "title": "" }, { "docid": "neg:1840090_5", "text": "This investigation is one in a series of studies that address the possibility of stroke rehabilitation using robotic devices to facilitate “adaptive training.” Healthy subjects, after training in the presence of systematically applied forces, typically exhibit a predictable “after-effect.” A critical question is whether this adaptive characteristic is preserved following stroke so that it might be exploited for restoring function. Another important question is whether subjects benefit more from training forces that enhance their errors than from forces that reduce their errors. We exposed hemiparetic stroke survivors and healthy age-matched controls to a pattern of disturbing forces that have been found by previous studies to induce a dramatic adaptation in healthy individuals. Eighteen stroke survivors made 834 movements in the presence of a robot-generated force field that pushed their hands proportional to its speed and perpendicular to its direction of motion — either clockwise or counterclockwise. We found that subjects could adapt, as evidenced by significant after-effects. After-effects were not correlated with the clinical scores that we used for measuring motor impairment. Further examination revealed that significant improvements occurred only when the training forces magnified the original errors, and not when the training forces reduced the errors or were zero. Within this constrained experimental task we found that error-enhancing therapy (as opposed to guiding the limb closer to the correct path) to be more effective than therapy that assisted the subject.", "title": "" }, { "docid": "neg:1840090_6", "text": "In recent years, study of influence propagation in social networks has gained tremendous attention. In this context, we can identify three orthogonal dimensions—the number of seed nodes activated at the beginning (known as budget), the expected number of activated nodes at the end of the propagation (known as expected spread or coverage), and the time taken for the propagation. We can constrain one or two of these and try to optimize the third. In their seminal paper, Kempe et al. constrained the budget, left time unconstrained, and maximized the coverage: this problem is known as Influence Maximization (or MAXINF for short). In this paper, we study alternative optimization problems which are naturally motivated by resource and time constraints on viral marketing campaigns. 
In the first problem, termed minimum target set selection (or MINTSS for short), a coverage threshold η is given and the task is to find the minimum size seed set such that by activating it, at least η nodes are eventually activated in the expected sense. This naturally captures the problem of deploying a viral campaign on a budget. In the second problem, termed MINTIME, the goal is to minimize the time in which a predefined coverage is achieved. More precisely, in MINTIME, a coverage threshold η and a budget threshold k are given, and the task is to find a seed set of size at most k such that by activating it, at least η nodes are activated in the expected sense, in the minimum possible time. This problem addresses the issue of timing when deploying viral campaigns. Both these problems are NP-hard, which motivates our interest in their approximation. For MINTSS, we develop a simple greedy algorithm and show that it provides a bicriteria approximation. We also establish a generic hardness result suggesting that improving this bicriteria approximation is likely to be hard. For MINTIME, we show that even bicriteria and tricriteria approximations are hard under several conditions. We show, however, that if we allow the budget for number of seeds k to be boosted by a logarithmic factor and allow the coverage to fall short, then the problem can be solved exactly in PTIME, i.e., we can achieve the required coverage within the time achieved by the optimal solution to MINTIME with budget k and coverage threshold η. Finally, we establish the value of the approximation algorithms, by conducting an experimental evaluation, comparing their quality against that achieved by various heuristics.", "title": "" }, { "docid": "neg:1840090_7", "text": "Nine projective linear measurements were taken to determine morphometric differences of the face among healthy young adult Chinese, Vietnamese, and Thais (60 in each group) and to assess the validity of six neoclassical facial canons in these populations. In addition, the findings in the Asian ethnic groups were compared to the data of 60 North American Caucasians. The canons served as criteria for determining the differences between the Asians and Caucasians. In neither Asian nor Caucasian subjects were the three sections of the facial profile equal. The validity of the five other facial canons was more frequent in Caucasians (range: 16.7–36.7%) than in Asians (range: 1.7–26.7%). Horizontal measurement results were significantly greater in the faces of the Asians (en–en, al–al, zy–zy) than in their white counterparts; as a result, the variation between the classical proportions and the actual measurements was significantly higher among Asians (range: 90–100%) than Caucasians (range: 13.3–48%). The dominant characteristics of the Asian face were a wider intercanthal distance in relation to a shorter palpebral fissure, a much wider soft nose within wide facial contours, a smaller mouth width, and a lower face smaller than the forehead height. In the absence of valid anthropometric norms of craniofacial measurements and proportion indices, our results, based on quantitative analysis of the main vertical and horizontal measurements of the face, offers surgeons guidance in judging the faces of Asian patients in preparation for corrective surgery.", "title": "" }, { "docid": "neg:1840090_8", "text": "Word embeddings are well known to capture linguistic regularities of the language on which they are trained. 
Researchers also observe that these regularities can transfer across languages. However, previous endeavors to connect separate monolingual word embeddings typically require cross-lingual signals as supervision, either in the form of parallel corpus or seed lexicon. In this work, we show that such cross-lingual connection can actually be established without any form of supervision. We achieve this end by formulating the problem as a natural adversarial game, and investigating techniques that are crucial to successful training. We carry out evaluation on the unsupervised bilingual lexicon induction task. Even though this task appears intrinsically cross-lingual, we are able to demonstrate encouraging performance without any cross-lingual clues.", "title": "" }, { "docid": "neg:1840090_9", "text": "Recurrent neural networks have recently shown significant potential in different language applications, ranging from natural language processing to language modelling. This paper introduces a research effort to use such networks to develop and evaluate natural language acquisition on a humanoid robot. Here, the problem is twofold. First, the focus will be put on using the gesture-word combination stage observed in infants to transition from single to multi-word utterances. Secondly, research will be carried out in the domain of connecting action learning with language learning. In the former, the long-short term memory architecture will be implemented, whilst in the latter multiple time-scale recurrent neural networks will be used. This will allow for comparison between the two architectures, whilst highlighting the strengths and shortcomings of both with respect to the language learning problem. Here, the main research efforts, challenges and expected outcomes are described.", "title": "" }, { "docid": "neg:1840090_10", "text": "This study focuses on the development of a web-based Attendance Register System or formerly known as ARS. The development of this system is motivated due to the fact that the students’ attendance records are one of the important elements that reflect their academic achievements in the higher academic institutions. However, the current practice implemented in most of the higher academic institutions in Malaysia is becoming more prone to human errors and frauds. Assisted by the System Development Life Cycle (SDLC) methodology, the ARS has been built using the web-based applications such as PHP, MySQL and Apache to cater the recording and reporting of the students’ attendances. The development of this prototype system is inspired by the feasibility study done in Universiti Teknologi MARA, Malaysia where 550 respondents have taken part in answering the questionnaires. From the analysis done, it has revealed that a more systematic and revolutionary system is indeed needed to be reinforced in order to improve the process of recording and reporting the attendances in the higher academic institution. ARS can be easily accessed by the lecturers via the Web and most importantly, the reports can be generated in realtime processing, thus, providing invaluable information about the students’ commitments in attending the classes. 
This paper will discuss in details the development of ARS from the feasibility study until the design phase.", "title": "" }, { "docid": "neg:1840090_11", "text": "Screening for cyclodextrin glycosyltransferase (CGTase)-producing alkaliphilic bacteria from samples collected from hyper saline soda lakes (Wadi Natrun Valley, Egypt), resulted in isolation of potent CGTase producing alkaliphilic bacterium, termed NPST-10. 16S rDNA sequence analysis identified the isolate as Amphibacillus sp. CGTase was purified to homogeneity up to 22.1 fold by starch adsorption and anion exchange chromatography with a yield of 44.7%. The purified enzyme was a monomeric protein with an estimated molecular weight of 92 kDa using SDS-PAGE. Catalytic activities of the enzyme were found to be 88.8 U mg(-1) protein, 20.0 U mg(-1) protein and 11.0 U mg(-1) protein for cyclization, coupling and hydrolytic activities, respectively. The enzyme was stable over a wide pH range from pH 5.0 to 11.0, with a maximal activity at pH 8.0. CGTase exhibited activity over a wide temperature range from 45 °C to 70 °C, with maximal activity at 50 °C and was stable at 30 °C to 55 °C for at least 1 h. Thermal stability of the purified enzyme could be significantly improved in the presence of CaCl(2). K(m) and V(max) values were estimated using soluble starch as a substrate to be 1.7 ± 0.15 mg/mL and 100 ± 2.0 μmol/min, respectively. CGTase was significantly inhibited in the presence of Co(2+), Zn(2+), Cu(2+), Hg(2+), Ba(2+), Cd(2+), and 2-mercaptoethanol. To the best of our knowledge, this is the first report of CGTase production by Amphibacillus sp. The achieved high conversion of insoluble raw corn starch into cyclodextrins (67.2%) with production of mainly β-CD (86.4%), makes Amphibacillus sp. NPST-10 desirable for the cyclodextrin production industry.", "title": "" }, { "docid": "neg:1840090_12", "text": "A growing body of evidence suggests that empathy for pain is underpinned by neural structures that are also involved in the direct experience of pain. In order to assess the consistency of this finding, an image-based meta-analysis of nine independent functional magnetic resonance imaging (fMRI) investigations and a coordinate-based meta-analysis of 32 studies that had investigated empathy for pain using fMRI were conducted. The results indicate that a core network consisting of bilateral anterior insular cortex and medial/anterior cingulate cortex is associated with empathy for pain. Activation in these areas overlaps with activation during directly experienced pain, and we link their involvement to representing global feeling states and the guidance of adaptive behavior for both self- and other-related experiences. Moreover, the image-based analysis demonstrates that depending on the type of experimental paradigm this core network was co-activated with distinct brain regions: While viewing pictures of body parts in painful situations recruited areas underpinning action understanding (inferior parietal/ventral premotor cortices) to a stronger extent, eliciting empathy by means of abstract visual information about the other's affective state more strongly engaged areas associated with inferring and representing mental states of self and other (precuneus, ventral medial prefrontal cortex, superior temporal cortex, and temporo-parietal junction). 
In addition, only the picture-based paradigms activated somatosensory areas, indicating that previous discrepancies concerning somatosensory activity during empathy for pain might have resulted from differences in experimental paradigms. We conclude that social neuroscience paradigms provide reliable and accurate insights into complex social phenomena such as empathy and that meta-analyses of previous studies are a valuable tool in this endeavor.", "title": "" }, { "docid": "neg:1840090_13", "text": "Neural interface technology has made enormous strides in recent years but stimulating electrodes remain incapable of reliably targeting specific cell types (e.g. excitatory or inhibitory neurons) within neural tissue. This obstacle has major scientific and clinical implications. For example, there is intense debate among physicians, neuroengineers and neuroscientists regarding the relevant cell types recruited during deep brain stimulation (DBS); moreover, many debilitating side effects of DBS likely result from lack of cell-type specificity. We describe here a novel optical neural interface technology that will allow neuroengineers to optically address specific cell types in vivo with millisecond temporal precision. Channelrhodopsin-2 (ChR2), an algal light-activated ion channel we developed for use in mammals, can give rise to safe, light-driven stimulation of CNS neurons on a timescale of milliseconds. Because ChR2 is genetically targetable, specific populations of neurons even sparsely embedded within intact circuitry can be stimulated with high temporal precision. Here we report the first in vivo behavioral demonstration of a functional optical neural interface (ONI) in intact animals, involving integrated fiberoptic and optogenetic technology. We developed a solid-state laser diode system that can be pulsed with millisecond precision, outputs 20 mW of power at 473 nm, and is coupled to a lightweight, flexible multimode optical fiber, approximately 200 microm in diameter. To capitalize on the unique advantages of this system, we specifically targeted ChR2 to excitatory cells in vivo with the CaMKIIalpha promoter. Under these conditions, the intensity of light exiting the fiber ( approximately 380 mW mm(-2)) was sufficient to drive excitatory neurons in vivo and control motor cortex function with behavioral output in intact rodents. No exogenous chemical cofactor was needed at any point, a crucial finding for in vivo work in large mammals. Achieving modulation of behavior with optical control of neuronal subtypes may give rise to fundamental network-level insights complementary to what electrode methodologies have taught us, and the emerging optogenetic toolkit may find application across a broad range of neuroscience, neuroengineering and clinical questions.", "title": "" }, { "docid": "neg:1840090_14", "text": "This work presents a combination of a teach-and-replay visual navigation and Monte Carlo localization methods. It improves a reliable teach-and-replay navigation method by replacing its dependency on precise dead-reckoning by introducing Monte Carlo localization to determine robot position along the learned path. In consequence, the navigation method becomes robust to dead-reckoning errors, can be started from at any point in the map and can deal with the ‘kidnapped robot’ problem. Furthermore, the robot is localized with MCL only along the taught path, i.e. in one dimension, which does not require a high number of particles and significantly reduces the computational cost. 
Thus, the combination of MCL and teach-and-replay navigation mitigates the disadvantages of both methods. The method was tested using a P3-AT ground robot and a Parrot AR.Drone aerial robot over a long indoor corridor. Experiments show the validity of the approach and establish a solid base for continuing this work.", "title": "" }, { "docid": "neg:1840090_15", "text": "In this paper, the performance of a rectangular microstrip patch antenna fed by microstrip line is designed to operate for ultra-wide band applications. It consists of a rectangular patch with U-shaped slot on one side of the substrate and a finite ground plane on the other side. The U-shaped slot and the finite ground plane are used to achieve an excellent impedance matching to increase the bandwidth. The proposed antenna is designed and optimized based on extensive 3D EM simulation studies. The proposed antenna is designed to operate over a frequency range from 3.6 to 15 GHz.", "title": "" }, { "docid": "neg:1840090_16", "text": "What service-quality attributes must Internet banks offer to induce consumers to switch to online transactions and keep using them?", "title": "" }, { "docid": "neg:1840090_17", "text": "A lot of research has been done on multiple-valued logic (MVL) such as ternary logic in these years. MVL reduces the number of necessary operations and also decreases the chip area that would be used. Carbon nanotube field effect transistors (CNTFETs) are considered a viable alternative for silicon transistors (MOSFETs). Combining carbon nanotube transistors and MVL can produce a unique design that is faster and more flexible. In this paper, we design a new half adder and a new multiplier by nanotechnology using a ternary logic, which decreases the power consumption and chip surface and raises the speed. The presented design is simulated using CNTFET of Stanford University and HSPICE software, and the results are compared with those of other studies.", "title": "" }, { "docid": "neg:1840090_18", "text": "We introduce a model for incorporating contextual information (such as geography) in learning vector-space representations of situated language. In contrast to approaches to multimodal representation learning that have used properties of the object being described (such as its color), our model includes information about the subject (i.e., the speaker), allowing us to learn the contours of a word’s meaning that are shaped by the context in which it is uttered. In a quantitative evaluation on the task of judging geographically informed semantic similarity between representations learned from 1.1 billion words of geo-located tweets, our joint model outperforms comparable independent models that learn meaning in isolation.", "title": "" }, { "docid": "neg:1840090_19", "text": "Process monitoring using indirect methods leverages on the usage of sensors. Using sensors to acquire vital process related information also presents itself with the problem of big data management and analysis. Due to uncertainty in the frequency of events occurring, a higher sampling rate is often used in real-time monitoring applications to increase the chances of capturing and understanding all possible events related to the process. Advanced signal processing methods helps to further decipher meaningful information from the acquired data. 
In this research work, the power spectral density (PSD) of sensor data acquired at sampling rates between 40 kHz and 51.2 kHz was calculated, and the correlation between PSD and the completed number of cycles/passes is presented. Here, the progress in the number of cycles/passes is the event this research work intends to classify, and the algorithm used to compute PSD is Welch's estimate method. A comparison between Welch's estimate method and statistical methods is also discussed. A clear correlation was observed using Welch's estimate to classify the number of cycles/passes.", "title": "" } ]
1840091
Signed networks in social media
[ { "docid": "pos:1840091_0", "text": "In trying out this hypothesis we shall understand by attitude the positive or negative relationship of a person p to another person o or to an impersonal entity x which may be a situation, an event, an idea, or a thing, etc. Examples are: to like, to love, to esteem, to value, and their opposites. A positive relation of this kind will be written L, a negative one ~L. Thus, pLo means p likes, loves, or values o, or, expressed differently, o is positive for p.", "title": "" } ]
[ { "docid": "neg:1840091_0", "text": "Trajectory search has long been an attractive and challenging topic which blooms various interesting applications in spatial-temporal databases. In this work, we study a new problem of searching trajectories by locations, in which context the query is only a small set of locations with or without an order specified, while the target is to find the k Best-Connected Trajectories (k-BCT) from a database such that the k-BCT best connect the designated locations geographically. Different from the conventional trajectory search that looks for similar trajectories w.r.t. shape or other criteria by using a sample query trajectory, we focus on the goodness of connection provided by a trajectory to the specified query locations. This new query can benefit users in many novel applications such as trip planning.\n In our work, we firstly define a new similarity function for measuring how well a trajectory connects the query locations, with both spatial distance and order constraint being considered. Upon the observation that the number of query locations is normally small (e.g. 10 or less) since it is impractical for a user to input too many locations, we analyze the feasibility of using a general-purpose spatial index to achieve efficient k-BCT search, based on a simple Incremental k-NN based Algorithm (IKNN). The IKNN effectively prunes and refines trajectories by using the devised lower bound and upper bound of similarity. Our contributions mainly lie in adapting the best-first and depth-first k-NN algorithms to the basic IKNN properly, and more importantly ensuring the efficiency in both search effort and memory usage. An in-depth study on the adaption and its efficiency is provided. Further optimization is also presented to accelerate the IKNN algorithm. Finally, we verify the efficiency of the algorithm by extensive experiments.", "title": "" }, { "docid": "neg:1840091_1", "text": "Skilled artists, using traditional media or modern computer painting tools, can create a variety of expressive styles that are very appealing in still images, but have been unsuitable for animation. The key difficulty is that existing techniques lack adequate temporal coherence to animate these styles effectively. Here we augment the range of practical animation styles by extending the guided texture synthesis method of Image Analogies [Hertzmann et al. 2001] to create temporally coherent animation sequences. To make the method art directable, we allow artists to paint portions of keyframes that are used as constraints. The in-betweens calculated by our method maintain stylistic continuity and yet change no more than necessary over time.", "title": "" }, { "docid": "neg:1840091_2", "text": "We present a methodology, called fast repetition rate (FRR) fluorescence, that measures the functional absorption cross-section (sigmaPS II) of Photosystem II (PS II), energy transfer between PS II units (p), photochemical and nonphotochemical quenching of chlorophyll fluorescence, and the kinetics of electron transfer on the acceptor side of PS II. The FRR fluorescence technique applies a sequence of subsaturating excitation pulses ('flashlets') at microsecond intervals to induce fluorescence transients. This approach is extremely flexible and allows the generation of both single-turnover (ST) and multiple-turnover (MT) flashes. Using a combination of ST and MT flashes, we investigated the effect of excitation protocols on the measured fluorescence parameters. 
The maximum fluorescence yield induced by an ST flash applied shortly (10 μs to 5 ms) following an MT flash increased to a level comparable to that of an MT flash, while the functional absorption cross-section decreased by about 40%. We interpret this phenomenon as evidence that an MT flash induces an increase in the fluorescence-rate constant, concomitant with a decrease in the photosynthetic-rate constant in PS II reaction centers. The simultaneous measurements of sigmaPS II, p, and the kinetics of Q-A reoxidation, which can be derived only from a combination of ST and MT flash fluorescence transients, permit robust characterization of the processes of photosynthetic energy-conversion.", "title": "" }, { "docid": "neg:1840091_3", "text": "Printed antennas are becoming one of the most popular designs in personal wireless communications systems. In this paper, the design of a novel tapered meander line antenna is presented. The design analysis and characterization of the antenna is performed using the finite difference time domain technique and experimental verifications are performed to ensure the effectiveness of the numerical model. The new design features an operating frequency of 2.55 GHz with a 230 MHz bandwidth, which supports future generations of mobile communication systems.", "title": "" }, { "docid": "neg:1840091_4", "text": "Attention mechanisms are a design trend of deep neural networks that stands out in various computer vision tasks. Recently, some works have attempted to apply attention mechanisms to single image super-resolution (SR) tasks. However, they apply the mechanisms to SR in the same or similar ways used for high-level computer vision problems without much consideration of the different nature between SR and other problems. In this paper, we propose a new attention method, which is composed of new channelwise and spatial attention mechanisms optimized for SR and a new fused attention to combine them. Based on this, we propose a new residual attention module (RAM) and a SR network using RAM (SRRAM). We provide in-depth experimental analysis of different attention mechanisms in SR. It is shown that the proposed method can construct both deep and lightweight SR networks showing improved performance in comparison to existing state-of-the-art methods.", "title": "" }, { "docid": "neg:1840091_5", "text": "The term cyber security is often used interchangeably with the term information security. This paper argues that, although there is a substantial overlap between cyber security and information security, these two concepts are not totally analogous. Moreover, the paper posits that cyber security goes beyond the boundaries of traditional information security to include not only the protection of information resources, but also that of other assets, including the person him/herself. In information security, reference to the human factor usually relates to the role(s) of humans in the security process. In cyber security this factor has an additional dimension, namely, the humans as potential targets of cyber attacks or even unknowingly participating in a cyber attack. This additional dimension has ethical implications for society as a whole, since the protection of certain vulnerable groups, for example children, could be seen as a societal responsibility.
", "title": "" }, { "docid": "neg:1840091_6", "text": "Automatic differentiation—the mechanical transformation of numeric computer programs to calculate derivatives efficiently and accurately—dates to the origin of the computer age. Reverse mode automatic differentiation both antedates and generalizes the method of backwards propagation of errors used in machine learning. Despite this, practitioners in a variety of fields, including machine learning, have been little influenced by automatic differentiation, and make scant use of available tools. Here we review the technique of automatic differentiation, describe its two main modes, and explain how it can benefit machine learning practitioners. To reach the widest possible audience our treatment assumes only elementary differential calculus, and does not assume any knowledge of linear algebra.", "title": "" }, { "docid": "neg:1840091_7", "text": "With an expansive and ubiquitously available gold mine of educational data, Massive Open Online Courses (MOOCs) have become an important focus of learning analytics research. The hope is that this new surge of development will bring the vision of equitable access to lifelong learning opportunities within practical reach. MOOCs offer many valuable learning experiences to students, from video lectures, readings, assignments and exams, to opportunities to connect and collaborate with others through threaded discussion forums and other Web 2.0 technologies. Nevertheless, despite all this potential, MOOCs have so far failed to produce evidence that this potential is being realized in the current instantiation of MOOCs. In this work, we primarily explore video lecture interaction in Massive Open Online Courses (MOOCs), which is central to student learning experience on these educational platforms. As a research contribution, we operationalize video lecture clickstreams of students into behavioral actions, and construct a quantitative information processing index, that can aid instructors to better understand MOOC hurdles and reason about unsatisfactory learning outcomes. Our results illuminate the effectiveness of developing such a metric inspired by cognitive psychology, towards answering critical questions regarding students’ engagement, their future click interactions and participation trajectories that lead to in-video dropouts. We leverage recurring click behaviors to differentiate distinct video watching profiles for students in MOOCs. Additionally, we discuss the prediction of complete course dropouts, incorporating diverse perspectives from statistics and machine learning, to offer a more nuanced view into how the second generation of MOOCs could benefit if course instructors were to better comprehend factors that lead to student attrition. Implications for research and practice are discussed.", "title": "" }, { "docid": "neg:1840091_8", "text": "In this study, we investigated a pattern-recognition technique based on an artificial neural network (ANN), which is called a massive training artificial neural network (MTANN), for reduction of false positives in computerized detection of lung nodules in low-dose computed tomography (CT) images. The MTANN consists of a modified multilayer ANN, which is capable of operating on image data directly.
The MTANN is trained by use of a large number of subregions extracted from input images together with the teacher images containing the distribution for the \"likelihood of being a nodule.\" The output image is obtained by scanning an input image with the MTANN. The distinction between a nodule and a non-nodule is made by use of a score which is defined from the output image of the trained MTANN. In order to eliminate various types of non-nodules, we extended the capability of a single MTANN, and developed a multiple MTANN (Multi-MTANN). The Multi-MTANN consists of plural MTANNs that are arranged in parallel. Each MTANN is trained by using the same nodules, but with a different type of non-nodule. Each MTANN acts as an expert for a specific type of non-nodule, e.g., five different MTANNs were trained to distinguish nodules from various-sized vessels; four other MTANNs were applied to eliminate some other opacities. The outputs of the MTANNs were combined by using the logical AND operation such that each of the trained MTANNs eliminated none of the nodules, but removed the specific type of non-nodule with which the MTANN was trained, and thus removed various types of non-nodules. The Multi-MTANN consisting of nine MTANNs was trained with 10 typical nodules and 10 non-nodules representing each of nine different non-nodule types (90 training non-nodules overall) in a training set. The trained Multi-MTANN was applied to the reduction of false positives reported by our current computerized scheme for lung nodule detection based on a database of 63 low-dose CT scans (1765 sections), which contained 71 confirmed nodules including 66 biopsy-confirmed primary cancers, from a lung cancer screening program. The Multi-MTANN was applied to 58 true positives (nodules from 54 patients) and 1726 false positives (non-nodules) reported by our current scheme in a validation test; these were different from the training set. The results indicated that 83% (1424/1726) of non-nodules were removed with a reduction of one true positive (nodule), i.e., a classification sensitivity of 98.3% (57 of 58 nodules). By using the Multi-MTANN, the false-positive rate of our current scheme was improved from 0.98 to 0.18 false positives per section (from 27.4 to 4.8 per patient) at an overall sensitivity of 80.3% (57/71).", "title": "" }, { "docid": "neg:1840091_9", "text": "Given e-commerce scenarios that user profiles are invisible, session-based recommendation is proposed to generate recommendation results from short sessions. Previous work only considers the user's sequential behavior in the current session, whereas the user's main purpose in the current session is not emphasized. In this paper, we propose a novel neural networks framework, i.e., Neural Attentive Recommendation Machine (NARM), to tackle this problem. Specifically, we explore a hybrid encoder with an attention mechanism to model the user's sequential behavior and capture the user's main purpose in the current session, which are combined as a unified session representation later. We then compute the recommendation scores for each candidate item with a bi-linear matching scheme based on this unified session representation. We train NARM by jointly learning the item and session representations as well as their matchings. We carried out extensive experiments on two benchmark datasets. Our experimental results show that NARM outperforms state-of-the-art baselines on both datasets. 
Furthermore, we also find that NARM achieves a significant improvement on long sessions, which demonstrates its advantages in modeling the user's sequential behavior and main purpose simultaneously.", "title": "" }, { "docid": "neg:1840091_10", "text": "Data governance has become a significant approach that drives decision making in public organisations. Thus, the loss of data governance is a concern to decision makers, acting as a barrier to achieving their business plans in many countries and also influencing both operational and strategic decisions. The adoption of cloud computing is a recent trend in public sector organisations, that are looking to move their data into the cloud environment. The literature shows that data governance is one of the main concerns of decision makers who are considering adopting cloud computing; it also shows that data governance in general and for cloud computing in particular is still being researched and requires more attention from researchers. However, in the absence of a cloud data governance framework, this paper seeks to develop a conceptual framework for cloud data governance-driven decision making in the public sector.", "title": "" }, { "docid": "neg:1840091_11", "text": "Poker games provide a useful testbed for modern Artificial Intelligence techniques. Unlike many classical game domains such as chess and checkers, poker includes elements of imperfect information, stochastic events, and one or more adversarial agents to interact with. Furthermore, in poker it is possible to win or lose by varying degrees. Therefore, it can be advantageous to adapt ones’ strategy to exploit a weak opponent. A poker agent must address these challenges, acting in uncertain environments and exploiting other agents, in order to be highly successful. Arguably, poker games more closely resemble many real world problems than games with perfect information. In this brief paper, we outline Polaris, a Texas Hold’em poker program. Polaris recently defeated top human professionals at the Man vs. Machine Poker Championship and it is currently the reigning AAAI Computer Poker Competition winner in the limit equilibrium and no-limit events.", "title": "" }, { "docid": "neg:1840091_12", "text": "In this paper we make the case for IoT edge offloading, which strives to exploit the resources on edge computing devices by offloading fine-grained computation tasks from the cloud closer to the users and data generators (i.e., IoT devices). The key motive is to enhance performance, security and privacy for IoT services. Our proposal bridges the gap between cloud computing and IoT by applying a divide and conquer approach over the multi-level (cloud, edge and IoT) information pipeline. To validate the design of IoT edge offloading, we developed a unikernel-based prototype and evaluated the system under various hardware and network conditions. Our experimentation has shown promising results and revealed the limitation of existing IoT hardware and virtualization platforms, shedding light on future research of edge computing and IoT.", "title": "" }, { "docid": "neg:1840091_13", "text": "Stabilization by static output feedback (SOF) is a long-standing open problem in control: given an n by n matrix A and rectangular matrices B and C, find a p by q matrix K such that A + BKC is stable. Low-order controller design is a practically important problem that can be cast in the same framework, with (p+k)(q+k) design parameters instead of pq, where k is the order of the controller, and k << n. 
Robust stabilization further demands stability in the presence of perturbation and satisfactory transient as well as asymptotic system response. We formulate two related nonsmooth, nonconvex optimization problems over K, respectively with the following objectives: minimization of the ε-pseudospectral abscissa of A+BKC, for a fixed ε ≥ 0, and maximization of the complex stability radius of A + BKC. Finding global optimizers of these functions is hard, so we use a recently developed gradient sampling method that approximates local optimizers. For modest-sized systems, local optimization can be carried out from a large number of starting points with no difficulty. The best local optimizers may then be investigated as candidate solutions to the static output feedback or low-order controller design problem. We show results for two problems published in the control literature. The first is a turbo-generator example that allows us to show how different choices of the optimization objective lead to stabilization with qualitatively different properties, conveniently visualized by pseudospectral plots. The second is a well known model of a Boeing 767 aircraft at a flutter condition. For this problem, we are not aware of any SOF stabilizing K published in the literature. Our method was not only able to find an SOF stabilizing K, but also to locally optimize the complex stability radius of A + BKC. We also found locally optimizing order–1 and order–2 controllers for this problem. All optimizers are visualized using pseudospectral plots.", "title": "" }, { "docid": "neg:1840091_14", "text": "Whole-cell biosensors are a good alternative to enzyme-based biosensors since they offer the benefits of low cost and improved stability. In recent years, live cells have been employed as biosensors for a wide range of targets. In this review, we will focus on the use of microorganisms that are genetically modified with the desirable outputs in order to improve the biosensor performance. Different methodologies based on genetic/protein engineering and synthetic biology to construct microorganisms with the required signal outputs, sensitivity, and selectivity will be discussed.", "title": "" }, { "docid": "neg:1840091_15", "text": "Shuffled frog-leaping algorithm (SFLA) is a new memetic meta-heuristic algorithm with efficient mathematical function and global search capability. Traveling salesman problem (TSP) is a complex combinatorial optimization problem, which is typically used as benchmark for testing the effectiveness as well as the efficiency of a newly proposed optimization algorithm. When applying the shuffled frog-leaping algorithm in TSP, memeplex and submemeplex are built and the evolution of the algorithm, especially the local exploration in submemeplex is carefully adapted based on the prototype SFLA. Experimental results show that the shuffled frog leaping algorithm is efficient for small-scale TSP. Particularly for TSP with 51 cities, the algorithm manages to find six tours which are shorter than the optimal tour provided by TSPLIB. The shortest tour length is 428.87 instead of 429.98 which can be found cited elsewhere.", "title": "" }, { "docid": "neg:1840091_16", "text": "Localization in Wireless Sensor Networks (WSNs) is regarded as an emerging technology for numerous cyberphysical system applications, which equips wireless sensors with the capability to report data that is geographically meaningful for location based services and applications.
However, due to the increasingly pervasive existence of smart sensors in WSN, a single localization technique that affects the overall performance is not sufficient for all applications. Thus, there have been many significant advances on localization techniques in WSNs in the past few years. The main goal in this paper is to present the state-of-the-art research results and approaches proposed for localization in WSNs. Specifically, we present the recent advances on localization techniques in WSNs by considering a wide variety of factors and categorizing them in terms of data processing (centralized vs. distributed), transmission range (range free vs. range based), mobility (static vs. mobile), operating environments (indoor vs. outdoor), node density (sparse vs dense), routing, algorithms, etc. The recent localization techniques in WSNs are also summarized in the form of tables. With this paper, readers can have a more thorough understanding of localization in sensor networks, as well as research trends and future research directions in this area.", "title": "" }, { "docid": "neg:1840091_17", "text": "In this paper, we use a advanced method called Faster R-CNN to detect traffic signs. This new method represents the highest level in object recognition, which don't need to extract image feature manually anymore and can segment image to get candidate region proposals automatically. Our experiment is based on a traffic sign detection competition in 2016 by CCF and UISEE company. The mAP(mean average precision) value of the result is 0.3449 that means Faster R-CNN can indeed be applied in this field. Even though the experiment did not achieve the best results, we explore a new method in the area of the traffic signs detection. We believe that we can get a better achievement in the future.", "title": "" }, { "docid": "neg:1840091_18", "text": "Existing Markov Chain Monte Carlo (MCMC) methods are either based on generalpurpose and domain-agnostic schemes, which can lead to slow convergence, or problem-specific proposals hand-crafted by an expert. In this paper, we propose ANICE-MC, a novel method to automatically design efficient Markov chain kernels tailored for a specific domain. First, we propose an efficient likelihood-free adversarial training method to train a Markov chain and mimic a given data distribution. Then, we leverage flexible volume preserving flows to obtain parametric kernels for MCMC. Using a bootstrap approach, we show how to train efficient Markov chains to sample from a prescribed posterior distribution by iteratively improving the quality of both the model and the samples. Empirical results demonstrate that A-NICE-MC combines the strong guarantees of MCMC with the expressiveness of deep neural networks, and is able to significantly outperform competing methods such as Hamiltonian Monte Carlo.", "title": "" }, { "docid": "neg:1840091_19", "text": "The Linked Movie Database (LinkedMDB) project provides a demonstration of the first open linked dataset connecting several major existing (and highly popular) movie web resources. The database exposed by LinkedMDB contains millions of RDF triples with hundreds of thousands of RDF links to existing web data sources that are part of the growing Linking Open Data cloud, as well as to popular movierelated web pages such as IMDb. 
LinkedMDB uses a novel way of creating and maintaining large quantities of high quality links by employing state-of-the-art approximate join techniques for finding links, and providing additional RDF metadata about the quality of the links and the techniques used for deriving them.", "title": "" } ]
1840092
Anchor-free distributed localization in sensor networks
[ { "docid": "pos:1840092_0", "text": "Instrumenting the physical world through large networks of wireless sensor nodes, particularly for applications like marine biology, requires that these nodes be very small, light, un-tethered and unobtrusive, imposing substantial restrictions on the amount of additional hardware that can be placed at each node. Practical considerations such as the small size, form factor, cost and power constraints of nodes preclude the use of GPS(Global Positioning System) for all nodes in these networks. The problem of localization, i.e., determining where a given node is physically located in a network is a challenging one, and yet extremely crucial for many applications of very large device networks. It needs to be solved in the absence of GPS on all the nodes in outdoor environments. In this paper, we propose a simple connectivity-metric based method for localization in outdoor environments that makes use of the inherent radiofrequency(RF) communications capabilities of these devices. A fixed number of reference points in the network transmit periodic beacon signals. Nodes use a simple connectivity metric to infer proximity to a given subset of these reference points and then localize themselves to the centroid of the latter. The accuracy of localization is then dependent on the separation distance between two adjacent reference points and the transmission range of these reference points. Initial experimental results show that the accuracy for 90% of our data points is within one-third of the separation distance.", "title": "" }, { "docid": "pos:1840092_1", "text": "GLS is a new distributed location service which tracks mobile node locations. GLS combined with geographic forwarding allows the construction of ad hoc mobile networks that scale to a larger number of nodes than possible with previous work. GLS is decentralized and runs on the mobile nodes themselves, requiring no fixed infrastructure. Each mobile node periodically updates a small set of other nodes (its location servers) with its current location. A node sends its position updates to its location servers without knowing their actual identities, assisted by a predefined ordering of node identifiers and a predefined geographic hierarchy. Queries for a mobile node's location also use the predefined identifier ordering and spatial hierarchy to find a location server for that node.\nExperiments using the ns simulator for up to 600 mobile nodes show that the storage and bandwidth requirements of GLS grow slowly with the size of the network. Furthermore, GLS tolerates node failures well: each failure has only a limited effect and query performance degrades gracefully as nodes fail and restart. The query performance of GLS is also relatively insensitive to node speeds. Simple geographic forwarding combined with GLS compares favorably with Dynamic Source Routing (DSR): in larger networks (over 200 nodes) our approach delivers more packets, but consumes fewer network resources.", "title": "" } ]
[ { "docid": "neg:1840092_0", "text": "The Time-Triggered Protocol (TTP), which is intended for use in distributed real-time control applications that require a high dependability and guaranteed timeliness, is discussed. It integrates all services that are required in the design of a fault-tolerant real-time system, such as predictable message transmission, message acknowledgment in group communication, clock synchronization, membership, rapid mode changes, redundancy management, and temporary blackout handling. It supports fault-tolerant configurations with replicated nodes and replicated communication channels. TTP provides these services with a small overhead so it can be used efficiently on twisted pair channels as well as on fiber optic networks.", "title": "" }, { "docid": "neg:1840092_1", "text": "While a large number of consumers in the US and Europe frequently shop on the Internet, research on what drives consumers to shop online has typically been fragmented. This paper therefore proposes a framework to increase researchers’ understanding of consumers’ attitudes toward online shopping and their intention to shop on the Internet. The framework uses the constructs of the Technology Acceptance Model (TAM) as a basis, extended by exogenous factors and applies it to the online shopping context. The review shows that attitudes toward online shopping and intention to shop online are not only affected by ease of use, usefulness, and enjoyment, but also by exogenous factors like consumer traits, situational factors, product characteristics, previous online shopping experiences, and trust in online shopping.", "title": "" }, { "docid": "neg:1840092_2", "text": "Optical see-through head-mounted displays (OSTHMDs) have many advantages in augmented reality application, but their utility in practical applications has been limited by the complexity of calibration. Because the human subject is an inseparable part of the eye-display system, previous methods for OSTHMD calibration have required extensive manual data collection using either instrumentation or manual point correspondences and are highly dependent on operator skill. This paper describes display-relative calibration (DRC) for OSTHMDs, a new two phase calibration method that minimizes the human element in the calibration process and ensures reliable calibration. Phase I of the calibration captures the parameters of the display system relative to a normalized reference frame and is performed in a jig with no human factors issues. The second phase optimizes the display for a specific user and the placement of the display on the head. Several phase II alternatives provide flexibility in a variety of applications including applications involving untrained users.", "title": "" }, { "docid": "neg:1840092_3", "text": "Acoustic-based music recommender systems have received increasing interest in recent years. Due to the semantic gap between low level acoustic features and high level music concepts, many researchers have explored collaborative filtering techniques in music recommender systems. Traditional collaborative filtering music recommendation methods only focus on user rating information. However, there are various kinds of social media information, including different types of objects and relations among these objects, in music social communities such as Last.fm and Pandora. This information is valuable for music recommendation. 
However, there are two challenges to exploit this rich social media information: (a) There are many different types of objects and relations in music social communities, which makes it difficult to develop a unified framework taking into account all objects and relations. (b) In these communities, some relations are much more sophisticated than pairwise relation, and thus cannot be simply modeled by a graph. In this paper, we propose a novel music recommendation algorithm by using both multiple kinds of social media information and music acoustic-based content. Instead of graph, we use hypergraph to model the various objects and relations, and consider music recommendation as a ranking problem on this hypergraph. While an edge of an ordinary graph connects only two objects, a hyperedge represents a set of objects. In this way, hypergraph can be naturally used to model high-order relations. Experiments on a data set collected from the music social community Last.fm have demonstrated the effectiveness of our proposed algorithm.", "title": "" }, { "docid": "neg:1840092_4", "text": "We explore the lattice sphere packing representation of a multi-antenna system and the algebraic space-time (ST) codes. We apply the sphere decoding (SD) algorithm to the resulted lattice code. For the uncoded system, SD yields, with small increase in complexity, a huge improvement over the well-known V-BLAST detection algorithm. SD of algebraic ST codes exploits the full diversity of the coded multi-antenna system, and makes the proposed scheme very appealing to take advantage of the richness of the multi-antenna environment. The fact that the SD does not depend on the constellation size, gives rise to systems with very high spectral efficiency, maximum-likelihood performance, and low decoding complexity.", "title": "" }, { "docid": "neg:1840092_5", "text": "In today's world, the amount of stored information has been enormously increasing day by day which is generally in the unstructured form and cannot be used for any processing to extract useful information, so several techniques such as summarization, classification, clustering, information extraction and visualization are available for the same which comes under the category of text mining. Text Mining can be defined as a technique which is used to extract interesting information or knowledge from the text documents. Text mining, also known as text data mining or knowledge discovery from textual databases, refers to the process of extracting interesting and non-trivial patterns or knowledge from text documents. Regarded by many as the next wave of knowledge discovery, text mining has very high commercial values.", "title": "" }, { "docid": "neg:1840092_6", "text": "The neural basis of variation in human intelligence is not well delineated. Numerous studies relating measures of brain size such as brain weight, head circumference, CT or MRI brain volume to different intelligence test measures, with variously defined samples of subjects have yielded inconsistent findings with correlations from approximately 0 to 0.6, with most correlations approximately 0.3 or 0.4. The study of intelligence in relation to postmortem cerebral volume is not available to date. We report the results of such a study on 100 cases (58 women and 42 men) having prospectively obtained Full Scale Wechsler Adult Intelligence Scale scores. 
Ability correlated with cerebral volume, but the relationship depended on the realm of intelligence studied, as well as the sex and hemispheric functional lateralization of the subject. General verbal ability was positively correlated with cerebral volume and each hemisphere's volume in women and in right-handed men accounting for 36% of the variation in verbal intelligence. There was no evidence of such a relationship in non-right-handed men, indicating that at least for verbal intelligence, functional asymmetry may be a relevant factor in structure-function relationships in men, but not in women. In women, general visuospatial ability was also positively correlated with cerebral volume, but less strongly, accounting for approximately 10% of the variance. In men, there was a non-significant trend of a negative correlation between visuospatial ability and cerebral volume, suggesting that the neural substrate of visuospatial ability may differ between the sexes. Analyses of additional research subjects used as test cases provided support for our regression models. In men, visuospatial ability and cerebral volume were strongly linked via the factor of chronological age, suggesting that the well-documented decline in visuospatial intelligence with age is related, at least in right-handed men, to the decrease in cerebral volume with age. We found that cerebral volume decreased only minimally with age in women. This leaves unknown the neural substrate underlying the visuospatial decline with age in women. Body height was found to account for 1-4% of the variation in cerebral volume within each sex, leaving the basis of the well-documented sex difference in cerebral volume unaccounted for. With finer testing instruments of specific cognitive abilities and measures of their associated brain regions, it is likely that stronger structure-function relationships will be observed. Our results point to the need for responsibility in the consideration of the possible use of brain images as intelligence tests.", "title": "" }, { "docid": "neg:1840092_7", "text": "This paper presents the design and measured performance of a novel intermediate-frequency variable-gain amplifier for Wideband Code-Division Multiple Access (WCDMA) transmitters. A compensation technique for parasitic coupling is proposed which allows a high dynamic range of 77 dB to be attained at 400 MHz while using a single variable-gain stage. Temperature compensation and decibel-linear characteristic are achieved by means of a control circuit which provides a lower than /spl plusmn/1.5 dB gain error over full temperature and gain ranges. The device is fabricated in a 0.8-/spl mu/m 46 GHz f/sub T/ silicon bipolar technology and drains up to 6 mA from a 2.7-V power supply.", "title": "" }, { "docid": "neg:1840092_8", "text": "We study the efficiency of deblocking algorithms for improving visual signals degraded by blocking artifacts from compression. Rather than using only the perceptually questionable PSNR, we instead propose a block-sensitive index, named PSNR-B, that produces objective judgments that accord with observations. The PSNR-B modifies PSNR by including a blocking effect factor. We also use the perceptually significant SSIM index, which produces results largely in agreement with PSNR-B. 
Simulation results show that the PSNR-B results in better performance for quality assessment of deblocked images than PSNR and a well-known blockiness-specific index.", "title": "" }, { "docid": "neg:1840092_9", "text": "BACKGROUND\nThe preparation consisting of a head-fixed mouse on a spherical or cylindrical treadmill offers unique advantages in a variety of experimental contexts. Head fixation provides the mechanical stability necessary for optical and electrophysiological recordings and stimulation. Additionally, it can be combined with virtual environments such as T-mazes, enabling these types of recording during diverse behaviors.\n\n\nNEW METHOD\nIn this paper we present a low-cost, easy-to-build acquisition system, along with scalable computational methods to quantitatively measure behavior (locomotion and paws, whiskers, and tail motion patterns) in head-fixed mice locomoting on cylindrical or spherical treadmills.\n\n\nEXISTING METHODS\nSeveral custom supervised and unsupervised methods have been developed for measuring behavior in mice. However, to date there is no low-cost, turn-key, general-purpose, and scalable system for acquiring and quantifying behavior in mice.\n\n\nRESULTS\nWe benchmark our algorithms against ground truth data generated either by manual labeling or by simpler methods of feature extraction. We demonstrate that our algorithms achieve good performance, both in supervised and unsupervised settings.\n\n\nCONCLUSIONS\nWe present a low-cost suite of tools for behavioral quantification, which serve as valuable complements to recording and stimulation technologies being developed for the head-fixed mouse preparation.", "title": "" }, { "docid": "neg:1840092_10", "text": "The goal of this research paper is to summarise the literature on implementation of the Blockchain and similar digital ledger techniques in various other domains beyond its application to crypto-currency and to draw appropriate conclusions. Blockchain being relatively a new technology, a representative sample of research is presented, spanning over the last ten years, starting from the early work in this field. Different types of usage of Blockchain and other digital ledger techniques, their challenges, applications, security and privacy issues were investigated. Identifying the most propitious direction for future use of Blockchain beyond crypto-currency is the main focus of the review study. Blockchain (BC), the technology behind Bitcoin crypto-currency system, is considered to be essential for forming the backbone for ensuring enhanced security and privacy for various applications in many other domains including the Internet of Things (IoT) eco-system. International research is currently being conducted in both academia and industry applying Blockchain in varied domains. The Proof-of-Work (PoW) mathematical challenge ensures BC security by maintaining a digital ledger of transactions that is considered to be unalterable. Furthermore, BC uses a changeable", "title": "" }, { "docid": "neg:1840092_11", "text": "Using Artificial Neural Networks (ANNs) in critical applications can be challenging due to the often experimental nature of ANN construction and the \"black box\" label that is frequently attached to ANNs. Well-accepted process models exist for algorithmic software development which facilitate software validation and acceptance. The software development process model presented herein is targeted specifically toward artificial neural networks in critical applications. 
The model is not unwieldy, and could easily be used on projects without critical aspects. This should be of particular interest to organizations that use ANNs and need to maintain or achieve a Capability Maturity Model (CMM) or ISO software development rating. Further, while this model is aimed directly at neural network development, with minor modifications, the model could be applied to any technique wherein knowledge is extracted from existing data, such as other numeric approaches or knowledge-based systems.", "title": "" }, { "docid": "neg:1840092_12", "text": "This paper studies monocular visual odometry (VO) problem. Most of existing VO algorithms are developed under a standard pipeline including feature extraction, feature matching, motion estimation, local optimisation, etc. Although some of them have demonstrated superior performance, they usually need to be carefully designed and specifically fine-tuned to work well in different environments. Some prior knowledge is also required to recover an absolute scale for monocular VO. This paper presents a novel end-to-end framework for monocular VO by using deep Recurrent Convolutional Neural Networks (RCNNs). Since it is trained and deployed in an end-to-end manner, it infers poses directly from a sequence of raw RGB images (videos) without adopting any module in the conventional VO pipeline. Based on the RCNNs, it not only automatically learns effective feature representation for the VO problem through Convolutional Neural Networks, but also implicitly models sequential dynamics and relations using deep Recurrent Neural Networks. Extensive experiments on the KITTI VO dataset show competitive performance to state-of-the-art methods, verifying that the end-to-end Deep Learning technique can be a viable complement to the traditional VO systems.", "title": "" }, { "docid": "neg:1840092_13", "text": "Radio frequency identification (RFID) of objects or people has become very popular in many services in industry, distribution logistics, manufacturing companies and goods flow systems. When RFID frequency rises into the microwave region, the tag antenna must be carefully designed to match the free space and to the following ASIC. In this paper, we present a novel folded dipole antenna with a very simple configuration. The required input impedance can be achieved easily by choosing suitable geometry parameters.", "title": "" }, { "docid": "neg:1840092_14", "text": "BACKGROUND\nIn recent years there has been a progressive rise in the number of asylum seekers and refugees displaced from their country of origin, with significant social, economic, humanitarian and public health implications. In this population, up-to-date information on the rate and characteristics of mental health conditions, and on interventions that can be implemented once mental disorders have been identified, are needed. This umbrella review aims at systematically reviewing existing evidence on the prevalence of common mental disorders and on the efficacy of psychosocial and pharmacological interventions in adult and children asylum seekers and refugees resettled in low, middle and high income countries.\n\n\nMETHODS\nWe conducted an umbrella review of systematic reviews summarizing data on the prevalence of common mental disorders and on the efficacy of psychological or pharmacological interventions in asylum seekers and/or refugees. 
Methodological quality of the included studies was assessed with the AMSTAR checklist.\n\n\nRESULTS\nThirteen reviews reported data on the prevalence of common mental disorders while fourteen reviews reported data on the efficacy of psychological or pharmacological interventions. Although there was substantial variability in prevalence rates, we found that depression and anxiety were at least as frequent as post-traumatic stress disorder, accounting for up to 40% of asylum seekers and refugees. In terms of psychosocial interventions, cognitive behavioral interventions, in particular narrative exposure therapy, were the most studied interventions with positive outcomes against inactive but not active comparators.\n\n\nCONCLUSIONS\nCurrent epidemiological data needs to be expanded with more rigorous studies focusing not only on post-traumatic stress disorder but also on depression, anxiety and other mental health conditions. In addition, new studies are urgently needed to assess the efficacy of psychosocial interventions when compared not only with no treatment but also each other. Despite current limitations, existing epidemiological and experimental data should be used to develop specific evidence-based guidelines, possibly by international independent organizations, such as the World Health Organization or the United Nations High Commission for Refugees. Guidelines should be applicable to different organizations of mental health care, including low and middle income countries as well as high income countries.", "title": "" }, { "docid": "neg:1840092_15", "text": "Path prediction is useful in a wide range of applications. Most of the existing solutions, however, are based on eager learning methods where models and patterns are extracted from historical trajectories and then used for future prediction. Since such approaches are committed to a set of statistically significant models or patterns, problems can arise in dynamic environments where the underlying models change quickly or where the regions are not covered with statistically significant models or patterns.\n We propose a \"semi-lazy\" approach to path prediction that builds prediction models on the fly using dynamically selected reference trajectories. Such an approach has several advantages. First, the target trajectories to be predicted are known before the models are built, which allows us to construct models that are deemed relevant to the target trajectories. Second, unlike the lazy learning approaches, we use sophisticated learning algorithms to derive accurate prediction models with acceptable delay based on a small number of selected reference trajectories. Finally, our approach can be continuously self-correcting since we can dynamically re-construct new models if the predicted movements do not match the actual ones.\n Our prediction model can construct a probabilistic path whose probability of occurrence is larger than a threshold and which is furthest ahead in term of time. Users can control the confidence of the path prediction by setting a probability threshold. We conducted a comprehensive experimental study on real-world and synthetic datasets to show the effectiveness and efficiency of our approach.", "title": "" }, { "docid": "neg:1840092_16", "text": "Most real-world dynamic systems are composed of different components that often evolve at very different rates. 
In traditional temporal graphical models, such as dynamic Bayesian networks, time is modeled at a fixed granularity, generally selected based on the rate at which the fastest component evolves. Inference must then be performed at this fastest granularity, potentially at significant computational cost. Continuous Time Bayesian Networks (CTBNs) avoid time-slicing in the representation by modeling the system as evolving continuously over time. The expectation-propagation (EP) inference algorithm of Nodelman et al. (2005) can then vary the inference granularity over time, but the granularity is uniform across all parts of the system, and must be selected in advance. In this paper, we provide a new EP algorithm that utilizes a general cluster graph architecture where clusters contain distributions that can overlap in both space (set of variables) and time. This architecture allows different parts of the system to be modeled at very different time granularities, according to their current rate of evolution. We also provide an information-theoretic criterion for dynamically re-partitioning the clusters during inference to tune the level of approximation to the current rate of evolution. This avoids the need to hand-select the appropriate granularity, and allows the granularity to adapt as information is transmitted across the network. We present experiments demonstrating that this approach can result in significant computational savings.", "title": "" }, { "docid": "neg:1840092_17", "text": "The visual system is the most studied sensory pathway, which is partly because visual stimuli have rather intuitive properties. There are reasons to think that the underlying principle ruling coding, however, is the same for vision and any other type of sensory signal, namely the code has to satisfy some notion of optimality--understood as minimum redundancy or as maximum transmitted information. Given the huge variability of natural stimuli, it would seem that attaining an optimal code is almost impossible; however, regularities and symmetries in the stimuli can be used to simplify the task: symmetries allow predicting one part of a stimulus from another, that is, they imply a structured type of redundancy. Optimal coding can only be achieved once the intrinsic symmetries of natural scenes are understood and used to the best performance of the neural encoder. In this paper, we review the concepts of optimal coding and discuss the known redundancies and symmetries that visual scenes have. We discuss in depth the only approach which implements the three of them known so far: translational invariance, scale invariance and multiscaling. Not surprisingly, the resulting code possesses features observed in real visual systems in mammals.", "title": "" }, { "docid": "neg:1840092_18", "text": "BACKGROUND\nbeta-Blockade-induced benefit in heart failure (HF) could be related to baseline heart rate and treatment-induced heart rate reduction, but no such relationships have been demonstrated.\n\n\nMETHODS AND RESULTS\nIn CIBIS II, we studied the relationships between baseline heart rate (BHR), heart rate changes at 2 months (HRC), nature of cardiac rhythm (sinus rhythm or atrial fibrillation), and outcomes (mortality and hospitalization for HF). 
Multivariate analysis of CIBIS II showed that in addition to beta-blocker treatment, BHR and HRC were both significantly related to survival and hospitalization for worsening HF, the lowest BHR and the greatest HRC being associated with best survival and reduction of hospital admissions. No interaction between the 3 variables was observed, meaning that on one hand, HRC-related improvement in survival was similar at all levels of BHR, and on the other hand, bisoprolol-induced benefit over placebo for survival was observed to a similar extent at any level of both BHR and HRC. Bisoprolol reduced mortality in patients with sinus rhythm (relative risk 0.58, P:<0.001) but not in patients with atrial fibrillation (relative risk 1.16, P:=NS). A similar result was observed for cardiovascular mortality and hospitalization for HF worsening.\n\n\nCONCLUSIONS\nBHR and HRC are significantly related to prognosis in heart failure. beta-Blockade with bisoprolol further improves survival at any level of BHR and HRC and to a similar extent. The benefit of bisoprolol is questionable, however, in patients with atrial fibrillation.", "title": "" }, { "docid": "neg:1840092_19", "text": "Purpose – The purpose of this research is to examine the critical success factors of mobile web site adoption. Design/methodology/approach – Based on the valid responses collected from a questionnaire survey, the structural equation modelling technique was employed to examine the research model. Findings – The results indicate that system quality is the main factor affecting perceived ease of use, whereas information quality is the main factor affecting perceived usefulness. Service quality has significant effects on trust and perceived ease of use. Perceived usefulness, perceived ease of use and trust determine user satisfaction. Practical implications – Mobile service providers need to improve the system quality, information quality and service quality of mobile web sites to enhance user satisfaction. Originality/value – Previous research has mainly focused on e-commerce web site success and seldom examined the factors affecting mobile web site success. This research fills the gap. The research draws on information systems success theory, the technology acceptance model and trust theory as the theoretical bases.", "title": "" } ]
1840093
Vertical Versus Shared Leadership as Predictors of the Effectiveness of Change Management Teams: An Examination of Aversive, Directive, Transactional, Transformational, and Empowering Leader Behaviors
[ { "docid": "pos:1840093_0", "text": "© 1966 by the Massachusetts Institute of Technology. From Leadership and Motivation, Essays of Douglas McGregor, edited by W. G. Bennis and E. H. Schein (Cambridge, MA: MIT Press, 1966): 3–20. Reprinted with permission. I t has become trite to say that the most significant developments of the next quarter century will take place not in the physical but in the social sciences, that industry—the economic organ of society—has the fundamental know-how to utilize physical science and technology for the material benefit of mankind, and that we must now learn how to utilize the social sciences to make our human organizations truly effective. Many people agree in principle with such statements; but so far they represent a pious hope—and little else. Consider with me, if you will, something of what may be involved when we attempt to transform the hope into reality.", "title": "" }, { "docid": "pos:1840093_1", "text": "Current theories and models of leadership seek to explain the influence of the hierarchical superior upon the satisfaction and performance of subordinates. While disagreeing with one another in important respects, these theories and models share an implicit assumption that while the style of leadership likely to be effective may vary according to the situation, some leadership style will be effective regardless of the situation. It has been found, however, that certain individual, task, and organizational variables act as \"substitutes for leadership,\" negating the hierarchical superior's ability to exert either positive or negative influence over subordinate attitudes and effectiveness. This paper identifies a number of such substitutes for leadership, presents scales of questionnaire items for their measurement, and reports some preliminary tests.", "title": "" } ]
[ { "docid": "neg:1840093_0", "text": "Avoiding vehicle-to-pedestrian crashes is a critical requirement for nowadays advanced driver assistant systems (ADAS) and future self-driving vehicles. Accordingly, detecting pedestrians from raw sensor data has a history of more than 15 years of research, with vision playing a central role. During the last years, deep learning has boosted the accuracy of image-based pedestrian detectors. However, detection is just the first step towards answering the core question, namely is the vehicle going to crash with a pedestrian provided preventive actions are not taken? Therefore, knowing as soon as possible if a detected pedestrian has the intention of crossing the road ahead of the vehicle is essential for performing safe and comfortable maneuvers that prevent a crash. However, compared to pedestrian detection, there is relatively little literature on detecting pedestrian intentions. This paper aims to contribute along this line by presenting a new vision-based approach which analyzes the pose of a pedestrian along several frames to determine if he or she is going to enter the road or not. We present experiments showing 750 ms of anticipation for pedestrians crossing the road, which at a typical urban driving speed of 50 km/h can provide 15 additional meters (compared to a pure pedestrian detector) for vehicle automatic reactions or to warn the driver. Moreover, in contrast with state-of-the-art methods, our approach is monocular, neither requiring stereo nor optical flow information.", "title": "" }, { "docid": "neg:1840093_1", "text": "Modern society depends on information technology in nearly every facet of human activity including, finance, transportation, education, government, and defense. Organizations are exposed to various and increasing kinds of risks, including information technology risks. Several standards, best practices, and frameworks have been created to help organizations manage these risks. The purpose of this research work is to highlight the challenges facing enterprises in their efforts to properly manage information security risks when adopting international standards and frameworks. To assist in selecting the best framework to use in risk management, the article presents an overview of the most popular and widely used standards and identifies selection criteria. It suggests an approach to proper implementation as well. A set of recommendations is put forward with further research opportunities on the subject. KeywordsInformation security; risk management; security frameworks; security standards; security management.", "title": "" }, { "docid": "neg:1840093_2", "text": "Word embeddings are crucial to many natural language processing tasks. The quality of embeddings relies on large nonnoisy corpora. Arabic dialects lack large corpora and are noisy, being linguistically disparate with no standardized spelling. We make three contributions to address this noise. First, we describe simple but effective adaptations to word embedding tools to maximize the informative content leveraged in each training sentence. Second, we analyze methods for representing disparate dialects in one embedding space, either by mapping individual dialects into a shared space or learning a joint model of all dialects. Finally, we evaluate via dictionary induction, showing that two metrics not typically reported in the task enable us to analyze our contributions’ effects on low and high frequency words. 
In addition to boosting performance between 2-53%, we specifically improve on noisy, low frequency forms without compromising accuracy on high frequency forms.", "title": "" }, { "docid": "neg:1840093_3", "text": "Machine learning models are vulnerable to simple model stealing attacks if the adversary can obtain output labels for chosen inputs. To protect against these attacks, it has been proposed to limit the information provided to the adversary by omitting probability scores, significantly impacting the utility of the provided service. In this work, we illustrate how a service provider can still provide useful, albeit misleading, class probability information, while significantly limiting the success of the attack. Our defense forces the adversary to discard the class probabilities, requiring significantly more queries before they can train a model with comparable performance. We evaluate several attack strategies, model architectures, and hyperparameters under varying adversarial models, and evaluate the efficacy of our defense against the strongest adversary. Finally, we quantify the amount of noise injected into the class probabilities to mesure the loss in utility, e.g., adding 1.26 nats per query on CIFAR-10 and 3.27 on MNIST. Our evaluation shows our defense can degrade the accuracy of the stolen model at least 20%, or require up to 64 times more queries while keeping the accuracy of the protected model almost intact.", "title": "" }, { "docid": "neg:1840093_4", "text": "Steganography plays an important role in secret communication in digital worlds and open environments like Internet. Undetectability and imperceptibility of confidential data are major challenges of steganography methods. This article presents a secure steganography method in frequency domain based on partitioning approach. The cover image is partitioned into 8×8 blocks and then integer wavelet transform through lifting scheme is performed for each block. The symmetric RC4 encryption method is applied to secret message to obtain high security and authentication. Tree Scan Order is performed in frequency domain to find proper location for embedding secret message. Secret message is embedded in cover image with minimal degrading of the quality. Experimental results demonstrate that the proposed method has achieved superior performance in terms of high imperceptibility of stego-image and it is secure against statistical attack in comparison with existing methods.", "title": "" }, { "docid": "neg:1840093_5", "text": "Activity recognition in video is dominated by low- and mid-level features, and while demonstrably capable, by nature, these features carry little semantic meaning. Inspired by the recent object bank approach to image representation, we present Action Bank, a new high-level representation of video. Action bank is comprised of many individual action detectors sampled broadly in semantic space as well as viewpoint space. Our representation is constructed to be semantically rich and even when paired with simple linear SVM classifiers is capable of highly discriminative performance. We have tested action bank on four major activity recognition benchmarks. In all cases, our performance is better than the state of the art, namely 98.2% on KTH (better by 3.3%), 95.0% on UCF Sports (better by 3.7%), 57.9% on UCF50 (baseline is 47.9%), and 26.9% on HMDB51 (baseline is 23.2%). 
Furthermore, when we analyze the classifiers, we find strong transfer of semantics from the constituent action detectors to the bank classifier.", "title": "" }, { "docid": "neg:1840093_6", "text": "Activity recognition strategies assume large amounts of labeled training data which require tedious human labor to label. They also use hand engineered features, which are not best for all applications, hence required to be done separately for each application. Several recognition strategies have benefited from deep learning for unsupervised feature selection, which has two important property – fine tuning and incremental update. Question! Can deep learning be leveraged upon for continuous learning of activity models from streaming videos? Contributions", "title": "" }, { "docid": "neg:1840093_7", "text": "Phishing is a web-based attack that uses social engineering techniques to exploit internet users and acquire sensitive data. Most phishing attacks work by creating a fake version of the real site's web interface to gain the user's trust.. We applied different methods for detecting phishing using known as well as new features. In this we used the heuristic-based approach to handle phishing attacks, in this approached several website features are collected and used to identify the type of the website. The heuristic-based approach can recognize newly created fake websites in real-time. One intelligent approach based on genetic algorithm seems a potential solution that may effectively detect phishing websites with high accuracy and prevent it by blocking them.", "title": "" }, { "docid": "neg:1840093_8", "text": "Successful teams are characterized by high levels of trust between team members, allowing the team to learn from mistakes, take risks, and entertain diverse ideas. We investigated a robot's potential to shape trust within a team through the robot's expressions of vulnerability. We conducted a between-subjects experiment (N = 35 teams, 105 participants) comparing the behavior of three human teammates collaborating with either a social robot making vulnerable statements or with a social robot making neutral statements. We found that, in a group with a robot making vulnerable statements, participants responded more to the robot's comments and directed more of their gaze to the robot, displaying a higher level of engagement with the robot. Additionally, we discovered that during times of tension, human teammates in a group with a robot making vulnerable statements were more likely to explain their failure to the group, console team members who had made mistakes, and laugh together, all actions that reduce the amount of tension experienced by the team. These results suggest that a robot's vulnerable behavior can have \"ripple effects\" on their human team members' expressions of trust-related behavior.", "title": "" }, { "docid": "neg:1840093_9", "text": "Protein–protein interactions constitute the regulatory network that coordinates diverse cellular functions. Co-immunoprecipitation (co-IP) is a widely used and effective technique to study protein–protein interactions in living cells. However, the time and cost for the preparation of a highly specific antibody is the major disadvantage associated with this technique. In the present study, a co-IP system was developed to detect protein–protein interactions based on an improved protoplast transient expression system by using commercially available antibodies. 
This co-IP system eliminates the need for specific antibody preparation and transgenic plant production. Leaf sheaths of rice green seedlings were used for the protoplast transient expression system which demonstrated high transformation and co-transformation efficiencies of plasmids. The transient expression system developed by this study is suitable for subcellular localization and protein detection. This work provides a rapid, reliable, and cost-effective system to study transient gene expression, protein subcellular localization, and characterization of protein–protein interactions in vivo.", "title": "" }, { "docid": "neg:1840093_10", "text": "We propose an unsupervised method for learning multi-stage hierarchies of sparse convolutional features. While sparse coding has become an increasingly popular method for learning visual features, it is most often trained at the patch level. Applying the resulting filters convolutionally results in highly redundant codes because overlapping patches are encoded in isolation. By training convolutionally over large image windows, our method reduces the redundancy between feature vectors at neighboring locations and improves the efficiency of the overall representation. In addition to a linear decoder that reconstructs the image from sparse features, our method trains an efficient feed-forward encoder that predicts quasi-sparse features from the input. While patch-based training rarely produces anything but oriented edge detectors, we show that convolutional training produces highly diverse filters, including center-surround filters, corner detectors, cross detectors, and oriented grating detectors. We show that using these filters in multi-stage convolutional network architecture improves performance on a number of visual recognition and detection tasks.", "title": "" }, { "docid": "neg:1840093_11", "text": "A static program checker that performs modular checking can check one program module for errors without needing to analyze the entire program. Modular checking requires that each module be accompanied by annotations that specify the module. To help reduce the cost of writing specifications, this paper presents Houdini, an annotation assistant for the modular checker ESC/Java. To infer suitable ESC/Java annotations for a given program, Houdini generates a large number of candidate annotations and uses ESC/Java to verify or refute each of these annotations. The paper describes the design, implementation, and preliminary evaluation of Houdini.", "title": "" }, { "docid": "neg:1840093_12", "text": "BACKGROUND\nClosed-loop artificial pancreas device (APD) systems are externally worn medical devices that are being developed to enable people with type 1 diabetes to regulate their blood glucose levels in a more automated way. The innovative concept of this emerging technology is that hands-free, continuous, glycemic control can be achieved by using digital communication technology and advanced computer algorithms.\n\n\nMETHODS\nA horizon scanning review of this field was conducted using online sources of intelligence to identify systems in development. The systems were classified into subtypes according to their level of automation, the hormonal and glycemic control approaches used, and their research setting.\n\n\nRESULTS\nEighteen closed-loop APD systems were identified. All were being tested in clinical trials prior to potential commercialization. 
Six were being studied in the home setting, 5 in outpatient settings, and 7 in inpatient settings. It is estimated that 2 systems may become commercially available in the EU by the end of 2016, 1 during 2017, and 2 more in 2018.\n\n\nCONCLUSIONS\nThere are around 18 closed-loop APD systems progressing through early stages of clinical development. Only a few of these are currently in phase 3 trials and in settings that replicate real life.", "title": "" }, { "docid": "neg:1840093_13", "text": "The composition of the gut microbiota is in constant flow under the influence of factors such as the diet, ingested drugs, the intestinal mucosa, the immune system, and the microbiota itself. Natural variations in the gut microbiota can deteriorate to a state of dysbiosis when stress conditions rapidly decrease microbial diversity and promote the expansion of specific bacterial taxa. The mechanisms underlying intestinal dysbiosis often remain unclear given that combinations of natural variations and stress factors mediate cascades of destabilizing events. Oxidative stress, bacteriophages induction and the secretion of bacterial toxins can trigger rapid shifts among intestinal microbial groups thereby yielding dysbiosis. A multitude of diseases including inflammatory bowel diseases but also metabolic disorders such as obesity and diabetes type II are associated with intestinal dysbiosis. The characterization of the changes leading to intestinal dysbiosis and the identification of the microbial taxa contributing to pathological effects are essential prerequisites to better understand the impact of the microbiota on health and disease.", "title": "" }, { "docid": "neg:1840093_14", "text": "In this paper we address the demand for flexibility and economic efficiency in industrial autonomous guided vehicle (AGV) systems by the use of cloud computing. We propose a cloud-based architecture that moves parts of mapping, localization and path planning tasks to a cloud server. We use a cooperative longterm Simultaneous Localization and Mapping (SLAM) approach which merges environment perception of stationary sensors and mobile robots into a central Holistic Environment Model (HEM). Further, we deploy a hierarchical cooperative path planning approach using Conflict-Based Search (CBS) to find optimal sets of paths which are then provided to the mobile robots. For communication we utilize the Manufacturing Service Bus (MSB) which is a component of the manufacturing cloud platform Virtual Fort Knox (VFK). We demonstrate the feasibility of this approach in a real-life industrial scenario. Additionally, we evaluate the system's communication and the planner for various numbers of agents.", "title": "" }, { "docid": "neg:1840093_15", "text": "The overall context proposed in this paper is part of our long-standing goal to contribute to a group of community that suffers from Autism Spectrum Disorder (ASD); a lifelong developmental disability. The objective of this paper is to present the development of our pilot experiment protocol where children with ASD will be exposed to the humanoid robot NAO. This fully programmable humanoid offers an ideal research platform for human-robot interaction (HRI). This study serves as the platform for fundamental investigation to observe the initial response and behavior of the children in the said environment. The system utilizes external cameras, besides the robot's own visual system. 
Anticipated results are the real initial response and reaction of ASD children during the HRI with the humanoid robot. This shall leads to adaptation of new procedures in ASD therapy based on HRI, especially for a non-technical-expert person to be involved in the robotics intervention during the therapy session.", "title": "" }, { "docid": "neg:1840093_16", "text": "Given that the synthesis of cumulated knowledge is an essential condition for any field to grow and develop, we believe that the enhanced role of IS reviews requires that this expository form be given careful scrutiny. Over the past decade, several senior scholars have made calls for more review papers in our field. While the number of IS review papers has substantially increased in recent years, no prior research has attempted to develop a general framework to conduct and evaluate the rigor of standalone reviews. In this paper, we fill this gap. More precisely, we present a set of guidelines for guiding and evaluating IS literature reviews and specify to which review types they apply. To do so, we first distinguish between four broad categories of review papers and then propose a set of guidelines that are grouped according to the generic phases and steps of the review process. We hope our work will serve as a valuable source for those conducting, evaluating, and/or interpreting reviews in our field.", "title": "" }, { "docid": "neg:1840093_17", "text": "In this paper, the design methods for four-way power combiners based on eight-port and nine-port mode networks are proposed. The eight-port mode network is fundamentally a two-stage binary four-way power combiner composed of three magic-Ts: two compact H-plane magic-Ts and one magic-T with coplanar arms. The two compact H-plane magic-Ts and the magic-T with coplanar arms function as the first and second stages, respectively. Thus, four-way coaxial-to-coaxial power combiners can be designed. A one-stage four-way power combiner based on a nine-port mode network is also proposed. Two matched coaxial ports and two matched rectangular ports are used to provide high isolation along the E-plane and the H-plane, respectively. The simulations agree well with the measured results. The designed four-way power combiners are superior in terms of their compact cross-sectional areas, a high degree of isolation, low insertion loss, low output-amplitude imbalance, and low phase imbalance, which make them well suited for solid-state power combination.", "title": "" }, { "docid": "neg:1840093_18", "text": "CMOS SRAM cell is very less power consuming and have less read and write time. Higher cell ratios can decrease the read and write time and improve stability. PMOS transistor with less width reduces the power consumption. This paper implements 6T SRAM cell with reduced read and write time, area and power consumption. It has been noticed often that increased memory capacity increases the bit-line parasitic capacitance which in turn slows down voltage sensing and make bit-line voltage swings energy expensive. This result in slower and more energy hungry memories.. In this paper Two SRAM cell is being designed for 4 Kb of memory core with supply voltage 1.8 V. A technique of global bit line is used for reducing the power consumption and increasing the memory capacity.", "title": "" }, { "docid": "neg:1840093_19", "text": "What chiefly distinguishes cerebral cortex from other parts of the central nervous system is the great diversity of its cell types and inter-connexions. 
It would be astonishing if such a structure did not profoundly modify the response patterns of fibres coming into it. In the cat's visual cortex, the receptive field arrangements of single cells suggest that there is indeed a degree of complexity far exceeding anything yet seen at lower levels in the visual system. In a previous paper we described receptive fields of single cortical cells, observing responses to spots of light shone on one or both retinas (Hubel & Wiesel, 1959). In the present work this method is used to examine receptive fields of a more complex type (Part I) and to make additional observations on binocular interaction (Part II). This approach is necessary in order to understand the behaviour of individual cells, but it fails to deal with the problem of the relationship of one cell to its neighbours. In the past, the technique of recording evoked slow waves has been used with great success in studies of functional anatomy. It was employed by Talbot & Marshall (1941) and by Thompson, Woolsey & Talbot (1950) for mapping out the visual cortex in the rabbit, cat, and monkey. Daniel & Whitteridge (1959) have recently extended this work in the primate. Most of our present knowledge of retinotopic projections, binocular overlap, and the second visual area is based on these investigations. Yet the method of evoked potentials is valuable mainly for detecting behaviour common to large populations of neighbouring cells; it cannot differentiate functionally between areas of cortex smaller than about 1 mm2. To overcome this difficulty a method has in recent years been developed for studying cells separately or in small groups during long micro-electrode penetrations through nervous tissue. Responses are correlated with cell location by reconstructing the electrode tracks from histological material. These techniques have been applied to the somatic sensory cortex of the cat and monkey in a remarkable series of studies by Mountcastle (1957) and Powell & Mountcastle (1959). Their results show that the approach is a powerful one, capable of revealing systems of organization not hinted at by the known morphology. In Part III of the present paper we use this method in studying the functional architecture of the visual cortex. It helped us attempt to explain on anatomical …", "title": "" } ]
1840094
A General Framework for Temporal Calibration of Multiple Proprioceptive and Exteroceptive Sensors
[ { "docid": "pos:1840094_0", "text": "Muscle fiber conduction velocity is based on the ti me delay estimation between electromyography recording channels. The aims of this study is to id entify the best estimator of generalized correlati on methods in the case where time delay is constant in order to extent these estimator to the time-varyin g delay case . The fractional part of time delay was c lculated by using parabolic interpolation. The re sults indicate that Eckart filter and Hannan Thomson (HT ) give the best results in the case where the signa l to noise ratio (SNR) is 0 dB.", "title": "" } ]
[ { "docid": "neg:1840094_0", "text": "The world’s population is aging at a phenomenal rate. Certain types of cognitive decline, in particular some forms of memory impairment, occur much more frequently in the elderly. This paper describes Autominder, a cognitive orthotic system intended to help older adults adapt to cognitive decline and continue the satisfactory performance of routine activities, thereby potentially enabling them to remain in their own homes longer. Autominder achieves this goal by providing adaptive, personalized reminders of (basic, instrumental, and extended) activities of daily living. Cognitive orthotic systems on the market today mainly provide alarms for prescribed activities at fixed times that are specified in advance. In contrast, Autominder uses a range of AI techniques to model an individual’s daily plans, observe and reason about the execution of those plans, and make decisions about whether and when it is most appropriate to issue reminders. Autominder is currently deployed on a mobile robot, and is being developed as part of the Initiative on Personal Robotic Assistants for the Elderly (the Nursebot project). © 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840094_1", "text": "Advanced driver assistance systems are the newest addition to vehicular technology. Such systems use a wide array of sensors to provide a superior driving experience. Vehicle safety and driver alert are important parts of these system. This paper proposes a driver alert system to prevent and mitigate adjacent vehicle collisions by proving warning information of on-road vehicles and possible collisions. A dynamic Bayesian network (DBN) is utilized to fuse multiple sensors to provide driver awareness. It detects oncoming adjacent vehicles and gathers ego vehicle motion characteristics using an on-board camera and inertial measurement unit (IMU). A histogram of oriented gradient feature based classifier is used to detect any adjacent vehicles. Vehicles front-rear end and side faces were considered in training the classifier. Ego vehicles heading, speed and acceleration are captured from the IMU and feed into the DBN. The network parameters were learned from data via expectation maximization(EM) algorithm. The DBN is designed to provide two type of warning to the driver, a cautionary warning and a brake alert for possible collision with other vehicles. Experiments were completed on multiple public databases, demonstrating successful warnings and brake alerts in most situations.", "title": "" }, { "docid": "neg:1840094_2", "text": "Software APIs often contain too many methods and parameters for developers to memorize or navigate effectively. Instead, developers resort to finding answers through online search engines and systems such as Stack Overflow. However, the process of finding and integrating a working solution is often very time-consuming. Though code search engines have increased in quality, there remain significant language- and workflow-gaps in meeting end-user needs. Novice and intermediate programmers often lack the language to query, and the expertise in transferring found code to their task. To address this problem, we present CodeMend, a system to support finding and integration of code. CodeMend leverages a neural embedding model to jointly model natural language and code as mined from large Web and code datasets. We also demonstrate a novel, mixed-initiative, interface to support query and integration steps. 
Through CodeMend, end-users describe their goal in natural language. The system makes salient the relevant API functions, the lines in the end-user's program that should be changed, as well as proposing the actual change. We demonstrate the utility and accuracy of CodeMend through lab and simulation studies.", "title": "" }, { "docid": "neg:1840094_3", "text": "This paper is concerned with the problem of finding a sparse graph capturing the conditional dependence between the entries of a Gaussian random vector, where the only available information is a sample correlation matrix. A popular approach is to solve a graphical lasso problem with a sparsity-promoting regularization term. This paper derives a simple condition under which the computationally-expensive graphical lasso behaves the same as the simple heuristic method of thresholding. This condition depends only on the solution of graphical lasso and makes no direct use of the sample correlation matrix or the regularization coefficient. It is also proved that this condition is always satisfied if the solution of graphical lasso is replaced by its first-order Taylor approximation. The condition is tested on several random problems and it is shown that graphical lasso and the thresholding method (based on the correlation matrix) lead to a similar result (if not equivalent), provided the regularization term is high enough to seek a sparse graph.", "title": "" }, { "docid": "neg:1840094_4", "text": "Tasks such as question answering and semantic search are dependent on the ability of querying & reasoning over large-scale commonsense knowledge bases (KBs). However, dealing with commonsense data demands coping with problems such as the increase in schema complexity, semantic inconsistency, incompleteness and scalability. This paper proposes a selective graph navigation mechanism based on a distributional relational semantic model which can be applied to querying & reasoning over heterogeneous knowledge bases (KBs). The approach can be used for approximative reasoning, querying and associational knowledge discovery. In this paper we focus on commonsense reasoning as the main motivational scenario for the approach. The approach focuses on addressing the following problems: (i) providing a semantic selection mechanism for facts which are relevant and meaningful in a specific reasoning & querying context and (ii) allowing coping with information incompleteness in large KBs. The approach is evaluated using ConceptNet as a commonsense KB, and achieved high selectivity, high scalability and high accuracy in the selection of meaningful navigational paths. Distributional semantics is also used as a principled mechanism to cope with information incompleteness.", "title": "" }, { "docid": "neg:1840094_5", "text": "This paper presents a simple method based on sinusoidal-amplitude detector for realizing the resolver-signal demodulator. The proposed demodulator consists of two full-wave rectifiers, two ±unity-gain amplifiers, and two sinusoidal-amplitude detectors with control switches. Two output voltages are proportional to sine and cosine envelopes of resolver-shaft angle without low-pass filter. Experimental results demonstrating characteristic of the proposed circuit are included.", "title": "" }, { "docid": "neg:1840094_6", "text": "Many researches in face recognition have been dealing with the challenge of the great variability in head pose, lighting intensity and direction,facial expression, and aging. 
The main purpose of this overview is to describe the recent 3D face recognition algorithms. The last few years more and more 2D face recognition algorithms are improved and tested on less than perfect images. However, 3D models hold more information of the face, like surface information, that can be used for face recognition or subject discrimination. Another major advantage is that 3D face recognition is pose invariant. A disadvantage of most presented 3D face recognition methods is that they still treat the human face as a rigid object. This means that the methods aren't capable of handling facial expressions. Although 2D face recognition still seems to outperform the 3D face recognition methods, it is expected that this will change in the near future.", "title": "" }, { "docid": "neg:1840094_7", "text": "We introduce program shepherding, a method for monitoring control flow transfers during program execution to enforce a security policy. Shepherding ensures that malicious code masquerading as data is never executed, thwarting a large class of security attacks. Shepherding can also enforce entry points as the only way to execute shared library code. Furthermore, shepherding guarantees that sandboxing checks around any type of program operation will never be bypassed. We have implemented these capabilities efficiently in a runtime system with minimal or no performance penalties. This system operates on unmodified native binaries, requires no special hardware or operating system support, and runs on existing IA-32 machines.", "title": "" }, { "docid": "neg:1840094_8", "text": "An efficient genetic transformation method for kabocha squash (Cucurbita moschata Duch cv. Heiankogiku) was established by wounding cotyledonary node explants with aluminum borate whiskers prior to inoculation with Agrobacterium. Adventitious shoots were induced from only the proximal regions of the cotyledonary nodes and were most efficiently induced on Murashige–Skoog agar medium with 1 mg/L benzyladenine. Vortexing with 1% (w/v) aluminum borate whiskers significantly increased Agrobacterium infection efficiency in the proximal region of the explants. Transgenic plants were screened at the T0 generation by sGFP fluorescence, genomic PCR, and Southern blot analyses. These transgenic plants grew normally and T1 seeds were obtained. We confirmed stable integration of the transgene and its inheritance in T1 generation plants by sGFP fluorescence and genomic PCR analyses. The average transgenic efficiency for producing kabocha squashes with our method was about 2.7%, a value sufficient for practical use.", "title": "" }, { "docid": "neg:1840094_9", "text": "To cope with the explosion of information in mathematics and physics, we need a unified mathematical language to integrate ideas and results from diverse fields. Clifford Algebra provides the key to a unified Geometric Calculus for expressing, developing, integrating and applying the large body of geometrical ideas running through mathematics and physics.", "title": "" }, { "docid": "neg:1840094_10", "text": "The standard way to procedurally generate random terrain for video games and other applications is to post-process the output of a fast noise generator such as Perlin noise. Tuning the post-processing to achieve particular types of terrain requires game designers to be reasonably well-trained in mathematics. 
A well-known variant of Perlin noise called value noise is used in a process accessible to designers trained in geography to generate geotypical terrain based on elevation statistics drawn from widely available sources such as the United States Geographical Service. A step-by-step process for downloading and creating terrain from realworld USGS elevation data is described, and an implementation in C++ is given.", "title": "" }, { "docid": "neg:1840094_11", "text": "Progressive multifocal leukoencephalopathy (PML) is a rare, subacute, demyelinating disease of the central nervous system caused by JC virus. Studies of PML from HIV Clade C prevalent countries are scarce. We sought to study the clinical, neuroimaging, and pathological features of PML in HIV Clade C patients from India. This is a prospective cum retrospective study, conducted in a tertiary care Neurological referral center in India from Jan 2001 to May 2012. Diagnosis was considered “definite” (confirmed by histopathology or JCV PCR in CSF) or “probable” (confirmed by MRI brain). Fifty-five patients of PML were diagnosed between January 2001 and May 2012. Complete data was available in 38 patients [mean age 39 ± 8.9 years; duration of illness—82.1 ± 74.7 days). PML was prevalent in 2.8 % of the HIV cohort seen in our Institute. Hemiparesis was the commonest symptom (44.7 %), followed by ataxia (36.8 %). Definitive diagnosis was possible in 20 cases. Eighteen remained “probable” wherein MRI revealed multifocal, symmetric lesions, hypointense on T1, and hyperintense on T2/FLAIR. Stereotactic biopsy (n = 11) revealed demyelination, enlarged oligodendrocytes with intranuclear inclusions and astrocytosis. Immunohistochemistry revelaed the presence of JC viral antigen within oligodendroglial nuclei and astrocytic cytoplasm. No differences in clinical, radiological, or pathological features were evident from PML associated with HIV Clade B. Clinical suspicion of PML was entertained in only half of the patients. Hence, a high index of suspicion is essential for diagnosis. There are no significant differences between clinical, radiological, and pathological picture of PML between Indian and Western countries.", "title": "" }, { "docid": "neg:1840094_12", "text": "This paper presents the SocioMetric Badges Corpus, a new corpus for social interaction studies collected during a 6 weeks contiguous period in a research institution, monitoring the activity of 53 people. The design of the corpus was inspired by the need to provide researchers and practitioners with: a) raw digital trace data that could be used to directly address the task of investigating, reconstructing and predicting people's actual social behavior in complex organizations, b) information about participants' individual characteristics (e.g., personality traits), along with c) data concerning the general social context (e.g., participants' social networks) and the specific situations they find themselves in.", "title": "" }, { "docid": "neg:1840094_13", "text": "In the 21st century, social media has burgeoned into one of the most used channels of communication in the society. As social media becomes well recognised for its potential as a social communication channel, recent years have witnessed an increased interest of using social media in higher education (Alhazmi, & Abdul Rahman, 2013; Al-rahmi, Othman, & Musa, 2014; Al-rahmi, & Othman, 2013a; Chen, & Bryer, 2012; Selwyn, 2009, 2012 to name a few). 
A survey by Pearson (Seaman, & Tinti-kane, 2013), The Social Media Survey 2013 shows that 41% of higher education faculty in the U.S.A. population has use social media in teaching in 2013 compared to 34% of them using it in 2012. The survey results also show the increase use of social media for teaching by educators and faculty professionals has increase because they see the potential in applying and integrating social media technology to their teaching. Many higher education institutions and educators are now finding themselves expected to catch up with the world of social media applications and social media users. This creates a growing phenomenon for the educational use of social media to create, engage, and share existing or newly produced information between lecturers and students as well as among the students. Facebook has quickly become the social networking site of choice by university students due to its remarkable adoption rates of Facebook in universities (Muñoz, & Towner, 2009; Roblyer et al., 2010; Sánchez, Cortijo, & Javed, 2014). With this in mind, this paper aims to investigate the use of Facebook closed group by undergraduate students in a private university in the Klang Valley, Malaysia. It is also to analyse the interaction pattern among the students using the Facebook closed group pages.", "title": "" }, { "docid": "neg:1840094_14", "text": "The investigation of human activity patterns from location-based social networks like Twitter is an established approach of how to infer relationships and latent information that characterize urban structures. Researchers from various disciplines have performed geospatial analysis on social media data despite the data’s high dimensionality, complexity and heterogeneity. However, user-generated datasets are of multi-scale nature, which results in limited applicability of commonly known geospatial analysis methods. Therefore in this paper, we propose a geographic, hierarchical self-organizing map (Geo-H-SOM) to analyze geospatial, temporal and semantic characteristics of georeferenced tweets. The results of our method, which we validate in a case study, demonstrate the ability to explore, abstract and cluster high-dimensional geospatial and semantic information from crowdsourced data. ARTICLE HISTORY Received 8 April 2015 Accepted 19 September 2015", "title": "" }, { "docid": "neg:1840094_15", "text": "As video games become increasingly popular pastimes, it becomes more important to understand how different individuals behave when they play these games. Previous research has focused mainly on behavior in massively multiplayer online role-playing games; therefore, in the current study we sought to extend on this research by examining the connections between personality traits and behaviors in video games more generally. Two hundred and nineteen university students completed measures of personality traits, psychopathic traits, and a questionnaire regarding frequency of different behaviors during video game play. A principal components analysis of the video game behavior questionnaire revealed four factors: Aggressing, Winning, Creating, and Helping. Each behavior subscale was significantly correlated with at least one personality trait. Men reported significantly more Aggressing, Winning, and Helping behavior than women. Controlling for participant sex, Aggressing was negatively correlated with Honesty–Humility, Helping was positively correlated with Agreeableness, and Creating was negatively correlated with Conscientiousness. 
Aggressing was also positively correlated with all psychopathic traits, while Winning and Creating were correlated with one psychopathic trait each. Frequency of playing video games online was positively correlated with the Aggressing, Winning, and Helping scales, but not with the Creating scale. The results of the current study provide support for previous research on personality and behavior in massively multiplayer online role-playing games. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840094_16", "text": "Changes in synaptic connections are considered essential for learning and memory formation. However, it is unknown how neural circuits undergo continuous synaptic changes during learning while maintaining lifelong memories. Here we show, by following postsynaptic dendritic spines over time in the mouse cortex, that learning and novel sensory experience lead to spine formation and elimination by a protracted process. The extent of spine remodelling correlates with behavioural improvement after learning, suggesting a crucial role of synaptic structural plasticity in memory formation. Importantly, a small fraction of new spines induced by novel experience, together with most spines formed early during development and surviving experience-dependent elimination, are preserved and provide a structural basis for memory retention throughout the entire life of an animal. These studies indicate that learning and daily sensory experience leave minute but permanent marks on cortical connections and suggest that lifelong memories are stored in largely stably connected synaptic networks.", "title": "" }, { "docid": "neg:1840094_17", "text": "An approach for capturing and modeling individual entertainment (“fun”) preferences is applied to users of the innovative Playware playground, an interactive physical playground inspired by computer games, in this study. The goal is to construct, using representative statistics computed from children’s physiological signals, an estimator of the degree to which games provided by the playground engage the players. For this purpose children’s heart rate (HR) signals, and their expressed preferences of how much “fun” particular game variants are, are obtained from experiments using games implemented on the Playware playground. A comprehensive statistical analysis shows that children’s reported entertainment preferences correlate well with specific features of the HR signal. Neuro-evolution techniques combined with feature set selection methods permit the construction of user models that predict reported entertainment preferences given HR features. These models are expressed as artificial neural networks and are demonstrated and evaluated on two Playware games and two control tasks requiring physical activity. The best network is able to correctly match expressed preferences in 64% of cases on previously unseen data (p−value 6 · 10−5). The generality of the methodology, its limitations, its usability as a real-time feedback mechanism for entertainment augmentation and as a validation tool are discussed.", "title": "" }, { "docid": "neg:1840094_18", "text": "In this work we describe the scenario of fully-immersive desktop VR, which serves the overall goal to seamlessly integrate with existing workflows and workplaces of data analysts and researchers, such that they can benefit from the gain in productivity when immersed in their data-spaces. 
Furthermore, we provide a literature review showing the status quo of techniques and methods available for realizing this scenario under the raised restrictions. Finally, we propose a concept of an analysis framework and the decisions made and the decisions still to be taken, to outline how the described scenario and the collected methods are feasible in a real use case.", "title": "" }, { "docid": "neg:1840094_19", "text": "A new method for determining nucleotide sequences in DNA is described. It is similar to the \"plus and minus\" method [Sanger, F. & Coulson, A. R. (1975) J. Mol. Biol. 94, 441-448] but makes use of the 2',3'-dideoxy and arabinonucleoside analogues of the normal deoxynucleoside triphosphates, which act as specific chain-terminating inhibitors of DNA polymerase. The technique has been applied to the DNA of bacteriophage varphiX174 and is more rapid and more accurate than either the plus or the minus method.", "title": "" } ]
1840095
Supervised Attentions for Neural Machine Translation
[ { "docid": "pos:1840095_0", "text": "Attention mechanism advanced state-of-the-art neural machine translation (NMT) by jointly learning to align and translate. However, attentional NMT ignores past alignment information, which leads to over-translation and undertranslation problems. In response to this problem, we maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust the future attention, which guides NMT to pay more attention to the untranslated source words. Experiments show that coverage-based NMT significantly improves both alignment and translation quality over NMT without coverage.", "title": "" }, { "docid": "pos:1840095_1", "text": "In order to capture rich language phenomena, neural machine translation models have to use a large vocabulary size, which requires high computing time and large memory usage. In this paper, we alleviate this issue by introducing a sentence-level or batch-level vocabulary, which is only a very small sub-set of the full output vocabulary. For each sentence or batch, we only predict the target words in its sentencelevel or batch-level vocabulary. Thus, we reduce both the computing time and the memory usage. Our method simply takes into account the translation options of each word or phrase in the source sentence, and picks a very small target vocabulary for each sentence based on a wordto-word translation model or a bilingual phrase library learned from a traditional machine translation model. Experimental results on the large-scale English-toFrench task show that our method achieves better translation performance by 1 BLEU point over the large vocabulary neural machine translation system of Jean et al. (2015).", "title": "" }, { "docid": "pos:1840095_2", "text": "We present a simple log-linear reparameterization of IBM Model 2 that overcomes problems arising from Model 1’s strong assumptions and Model 2’s overparameterization. Efficient inference, likelihood evaluation, and parameter estimation algorithms are provided. Training the model is consistently ten times faster than Model 4. On three large-scale translation tasks, systems built using our alignment model outperform IBM Model 4. An open-source implementation of the alignment model described in this paper is available from http://github.com/clab/fast align .", "title": "" } ]
[ { "docid": "neg:1840095_0", "text": "The emergence of various and disparate social media platforms has opened opportunities for the research on cross-platform media analysis. This provides huge potentials to solve many challenging problems which cannot be well explored in one single platform. In this paper, we investigate into cross-platform social relation and behavior information to address the cold-start friend recommendation problem. In particular, we conduct an in-depth data analysis to examine what information can better transfer from one platform to another and the result demonstrates a strong correlation for the bidirectional relation and common contact behavior between our test platforms. Inspired by the observations, we design a random walk-based method to employ and integrate these convinced social information to boost friend recommendation performance. To validate the effectiveness of our cross-platform social transfer learning, we have collected a cross-platform dataset including 3,000 users with recognized accounts in both Flickr and Twitter. We demonstrate the effectiveness of the proposed friend transfer methods by promising results.", "title": "" }, { "docid": "neg:1840095_1", "text": "We consider a multiple-block separable convex programming problem, where the objective function is the sum of m individual convex functions without overlapping variables, and the constraints are linear, aside from side constraints. Based on the combination of the classical Gauss–Seidel and the Jacobian decompositions of the augmented Lagrangian function, we propose a partially parallel splitting method, which differs from existing augmented Lagrangian based splitting methods in the sense that such an approach simplifies the iterative scheme significantly by removing the potentially expensive correction step. Furthermore, a relaxation step, whose computational cost is negligible, can be incorporated into the proposed method to improve its practical performance. Theoretically, we establish global convergence of the new method in the framework of proximal point algorithm and worst-case nonasymptotic O(1/t) convergence rate results in both ergodic and nonergodic senses, where t counts the iteration. The efficiency of the proposed method is further demonstrated through numerical results on robust PCA, i.e., factorizing from incomplete information of an B Junfeng Yang jfyang@nju.edu.cn Liusheng Hou houlsheng@163.com Hongjin He hehjmath@hdu.edu.cn 1 School of Mathematics and Information Technology, Key Laboratory of Trust Cloud Computing and Big Data Analysis, Nanjing Xiaozhuang University, Nanjing 211171, China 2 Department of Mathematics, School of Science, Hangzhou Dianzi University, Hangzhou 310018, China 3 Department of Mathematics, Nanjing University, Nanjing 210093, China", "title": "" }, { "docid": "neg:1840095_2", "text": "Controlled hovering of motor driven flapping wing micro aerial vehicles (FWMAVs) is challenging due to its limited control authority, large inertia, vibration produced by wing strokes, and limited components accuracy due to fabrication methods. In this work, we present a hummingbird inspired FWMAV with 12 grams of weight and 20 grams of maximum lift. We present its full non-linear dynamic model including the full inertia tensor, non-linear input mapping, and damping effect from flapping counter torques (FCTs) and flapping counter forces (FCFs). We also present a geometric flight controller to ensure exponentially stable and globally exponential attractive properties. 
We experimentally demonstrated the vehicle lifting off and hover with attitude stabilization.", "title": "" }, { "docid": "neg:1840095_3", "text": "The scaling of microchip technologies has enabled large scale systems-on-chip (SoC). Network-on-chip (NoC) research addresses global communication in SoC, involving (i) a move from computation-centric to communication-centric design and (ii) the implementation of scalable communication structures. This survey presents a perspective on existing NoC research. We define the following abstractions: system, network adapter, network, and link to explain and structure the fundamental concepts. First, research relating to the actual network design is reviewed. Then system level design and modeling are discussed. We also evaluate performance analysis techniques. The research shows that NoC constitutes a unification of current trends of intrachip communication rather than an explicit new alternative.", "title": "" }, { "docid": "neg:1840095_4", "text": "Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting.", "title": "" }, { "docid": "neg:1840095_5", "text": "Sub-Resolution Assist Feature (SRAF) generation is a very important resolution enhancement technique to improve yield in modern semiconductor manufacturing process. Model- based SRAF generation has been widely used to achieve high accuracy but it is known to be time consuming and it is hard to obtain consistent SRAFs on the same layout pattern configurations. This paper proposes the first ma- chine learning based framework for fast yet consistent SRAF generation with high quality of results. Our technical con- tributions include robust feature extraction, novel feature compaction, model training for SRAF classification and pre- diction, and the final SRAF generation with consideration of practical mask manufacturing constraints. Experimental re- sults demonstrate that, compared with commercial Calibre tool, our machine learning based SRAF generation obtains 10X speed up and comparable performance in terms of edge placement error (EPE) and process variation (PV) band.", "title": "" }, { "docid": "neg:1840095_6", "text": "BACKGROUND\nBiofilm formation is a major virulence factor in different bacteria. 
Biofilms allow bacteria to resist treatment with antibacterial agents. The biofilm formation on glass and steel surfaces, which are extremely useful surfaces in food industries and medical devices, has always had an important role in the distribution and transmission of infectious diseases.\n\n\nOBJECTIVES\nIn this study, the effect of coating glass and steel surfaces by copper nanoparticles (CuNPs) in inhibiting the biofilm formation by Listeria monocytogenes and Pseudomonas aeruginosa was examined.\n\n\nMATERIALS AND METHODS\nThe minimal inhibitory concentrations (MICs) of synthesized CuNPs were measured against L. monocytogenes and P. aeruginosa by using the broth-dilution method. The cell-surface hydrophobicity of the selected bacteria was assessed using the bacterial adhesion to hydrocarbon (BATH) method. Also, the effect of the CuNP-coated surfaces on the biofilm formation of the selected bacteria was calculated via the surface assay.\n\n\nRESULTS\nThe MICs for the CuNPs according to the broth-dilution method were ≤ 16 mg/L for L. monocytogenes and ≤ 32 mg/L for P. aeruginosa. The hydrophobicity of P. aeruginosa and L. monocytogenes was calculated as 74% and 67%, respectively. The results for the surface assay showed a significant decrease in bacterial attachment and colonization on the CuNP-covered surfaces.\n\n\nCONCLUSIONS\nOur data demonstrated that the CuNPs inhibited bacterial growth and that the CuNP-coated surfaces decreased the microbial count and the microbial biofilm formation. Such CuNP-coated surfaces can be used in medical devices and food industries, although further studies in order to measure their level of toxicity would be necessary.", "title": "" }, { "docid": "neg:1840095_7", "text": "Recent years have seen a growing interest in creating virtual agents to populate the cast of characters for interactive narrative. A key challenge posed by interactive characters for narrative environments is devising expressive dialogue generators. To be effective, character dialogue generators must be able to simultaneously take into account multiple sources of information that bear on dialogue, including character attributes, plot development, and communicative goals. Building on the narrative theory of character archetypes, we propose an archetype-driven character dialogue generator that uses a probabilistic unification framework to generate dialogue motivated by character personality and narrative history to achieve communicative goals. The generator’s behavior is illustrated with character dialogue generation in a narrative-centered learning environment, CRYSTAL ISLAND.", "title": "" }, { "docid": "neg:1840095_8", "text": "PURPOSE\nTo investigate the impact of human papillomavirus (HPV) on the epidemiology of oral squamous cell carcinomas (OSCCs) in the United States, we assessed differences in patient characteristics, incidence, and survival between potentially HPV-related and HPV-unrelated OSCC sites.\n\n\nPATIENTS AND METHODS\nData from nine Surveillance, Epidemiology, and End Results program registries (1973 to 2004) were used to classify OSCCs by anatomic site as potentially HPV-related (n = 17,625) or HPV-unrelated (n = 28,144). Joinpoint regression and age-period-cohort models were used to assess incidence trends. 
Life-table analyses were used to compare 2-year overall survival for HPV-related and HPV-unrelated OSCCs.\n\n\nRESULTS\nHPV-related OSCCs were diagnosed at younger ages than HPV-unrelated OSCCs (mean ages at diagnosis, 61.0 and 63.8 years, respectively; P < .001). Incidence increased significantly for HPV-related OSCC from 1973 to 2004 (annual percentage change [APC] = 0.80; P < .001), particularly among white men and at younger ages. By contrast, incidence for HPV-unrelated OSCC was stable through 1982 (APC = 0.82; P = .186) and declined significantly during 1983 to 2004 (APC = -1.85; P < .001). When treated with radiation, improvements in 2-year survival across calendar periods were more pronounced for HPV-related OSCCs (absolute increase in survival from 1973 through 1982 to 1993 through 2004 for localized, regional, and distant stages = 9.9%, 23.1%, and 18.6%, respectively) than HPV-unrelated OSCCs (5.6%, 3.1%, and 9.9%, respectively). During 1993 to 2004, for all stages treated with radiation, patients with HPV-related OSCCs had significantly higher survival rates than those with HPV-unrelated OSCCs.\n\n\nCONCLUSION\nThe proportion of OSCCs that are potentially HPV-related increased in the United States from 1973 to 2004, perhaps as a result of changing sexual behaviors. Recent improvements in survival with radiotherapy may be due in part to a shift in the etiology of OSCCs.", "title": "" }, { "docid": "neg:1840095_9", "text": "We review the recent progress of the latest 100G to 1T class coherent PON technology using a simplified DSP suitable for forthcoming 5G era optical access systems. The highlight is the presentation of the first demonstration of 100 Gb/s/λ × 8 (800 Gb/s) based PON.", "title": "" }, { "docid": "neg:1840095_10", "text": "BACKGROUND\nNutrition interventions targeted to individuals are unlikely to significantly shift US dietary patterns as a whole. Environmental and policy interventions are more promising for shifting these patterns. We review interventions that influenced the environment through food availability, access, pricing, or information at the point-of-purchase in worksites, universities, grocery stores, and restaurants.\n\n\nMETHODS\nThirty-eight nutrition environmental intervention studies in adult populations, published between 1970 and June 2003, were reviewed and evaluated on quality of intervention design, methods, and description (e.g., sample size, randomization). No policy interventions that met inclusion criteria were found.\n\n\nRESULTS\nMany interventions were not thoroughly evaluated or lacked important evaluation information. Direct comparison of studies across settings was not possible, but available data suggest that worksite and university interventions have the most potential for success. Interventions in grocery stores appear to be the least effective. The dual concerns of health and taste of foods promoted were rarely considered. Sustainability of environmental change was never addressed.\n\n\nCONCLUSIONS\nInterventions in \"limited access\" sites (i.e., where few other choices were available) had the greatest effect on food choices. Research is needed using consistent methods, better assessment tools, and longer durations; targeting diverse populations; and examining sustainability. 
Future interventions should influence access and availability, policies, and macroenvironments.", "title": "" }, { "docid": "neg:1840095_11", "text": "motion selection DTU Orbit (12/12/2018) ISSARS: An integrated software environment for structure-specific earthquake ground motion selection Current practice enables the design and assessment of structures in earthquake prone areas by performing time history analysis with the use of appropriately selected strong ground motions. This study presents a Matlab-based software environment, which is integrated with a finite element analysis package, and aims to improve the efficiency of earthquake ground motion selection by accounting for the variability of critical structural response quantities. This additional selection criterion, which is tailored to the specific structure studied, leads to more reliable estimates of the mean structural response quantities used in design, while fulfils the criteria already prescribed by the European and US seismic codes and guidelines. To demonstrate the applicability of the software environment developed, an existing irregular, multi-storey, reinforced concrete building is studied for a wide range of seismic scenarios. The results highlight the applicability of the software developed and the benefits of applying a structure-specific criterion in the process of selecting suites of earthquake motions for the seismic design and assessment. (C) 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840095_12", "text": "We apply game theory to a vehicular traffic model to study the effect of driver strategies on traffic flow. The resulting model inherits the realistic dynamics achieved by a two-lane traffic model and aims to incorporate phenomena caused by driver-driver interactions. To achieve this goal, a game-theoretic description of driver interaction was developed. This game-theoretic formalization allows one to model different lane-changing behaviors and to keep track of mobility performance. We simulate the evolution of cooperation, traffic flow, and mobility performance for different modeled behaviors. The analysis of these results indicates a mobility optimization process achieved by drivers' interactions.", "title": "" }, { "docid": "neg:1840095_13", "text": "In this paper, we study the implications of the commonplace assumption that most social media studies make with respect to the nature of message shares (such as retweets) as a predominantly positive interaction. By analyzing two large longitudinal Brazilian Twitter datasets containing 5 years of conversations on two polarizing topics – Politics and Sports, we empirically demonstrate that groups holding antagonistic views can actually retweet each other more often than they retweet other groups. We show that assuming retweets as endorsement interactions can lead to misleading conclusions with respect to the level of antagonism among social communities, and that this apparent paradox is explained in part by the use of retweets to quote the original content creator out of the message’s original temporal context, for humor and criticism purposes. As a consequence, messages diffused on online media can have their polarity reversed over time, what poses challenges for social and computer scientists aiming to classify and track opinion groups on online media. 
On the other hand, we found that the time users take to retweet a message after it has been originally posted can be a useful signal to infer antagonism in social platforms, and that surges of out-of-context retweets correlate with sentiment drifts triggered by real-world events. We also discuss how such evidences can be embedded in sentiment analysis models.", "title": "" }, { "docid": "neg:1840095_14", "text": "Bitcoin has shown great utility around the world with the drastic increase in its value and global consensus method of proof-of-work (POW). Over the years after the revolution in the digital transaction space, we are looking at major scalability issue with old POW consensus method and bitcoin peak limit of processing only 7 transactions per second. With more companies trying to adopt blockchain to modify their existing systems, blockchain working on old consensus methods and with scalability issues can't deliver the optimal solution. Specifically, with new trends like smart contracts and DAPPs, much better performance is needed to support any actual business applications. Such requirements are pushing the new platforms away from old methods of consensus and adoption of off-chain solutions. In this paper, we discuss various scalability issues with the Bitcoin and Ethereum blockchain and recent proposals like the lighting protocol, sharding, super quadratic sharding, DPoS to solve these issues. We also draw the comparison between these proposals on their ability to overcome scalability limits and highlighting major problems in these approaches. In the end, we propose our solution to suffice the scalability issue and conclude with the fact that with better scalability, blockchain has the potential to outrageously support varied domains of the industry.", "title": "" }, { "docid": "neg:1840095_15", "text": "Introduction. The use of social media is prevalent among college students, and it is important to understand how social media use may impact students' attitudes and behaviour. Prior studies have shown negative outcomes of social media use, but researchers have not fully discovered or fully understand the processes and implications of these negative effects. This research provides additional scientific knowledge by focussing on mediators of social media use and controlling for key confounding variables. Method. Surveys that captured social media use, various attitudes about academics and life, and personal characteristics were completed by 234 undergraduate students at a large U.S. university. Analysis. We used covariance-based structural equation modelling to analyse the response data. Results. Results indicated that after controlling for self-regulation, social media use was negatively associated with academic self-efficacy and academic performance. Additionally, academic self-efficacy mediated the negative relationship between social media use and satisfaction with life. Conclusion. There are negative relationships between social media use and academic performance, as well as with academic self-efficacy beliefs. Academic self-efficacy beliefs mediate the negative relationship between social media use and satisfaction with life. These relationships are present even when controlling for individuals' levels of self-regulation.", "title": "" }, { "docid": "neg:1840095_16", "text": "The two authorsLakoff, a linguist and Nunez, a psychologistpurport to introduce a new field of study, i.e. \"mathematical idea analysis\", with this book. 
By \"mathematical idea analysis\", they mean to give a scientifically plausible account of mathematical concepts using the apparatus of cognitive science. This approach is meant to be a contribution to academics and possibly education as it helps to illuminate how we cognitise mathematical concepts, which are supposedly undecipherable and abstruse to laymen. The analysis of mathematical ideas, the authors claim, cannot be done within mathematics, for even metamathematicsrecursive theory, model theory, set theory, higherorder logic still requires mathematical idea analysis in itself! Formalism, by its very nature, voids symbols of their meanings and thus cognition is required to imbue meaning. Thus, there is a need for this new field, in which the authors, if successful, would become pioneers.", "title": "" }, { "docid": "neg:1840095_17", "text": "This paper considers innovative marketing within the context of a micro firm, exploring how such firm’s marketing practices can take advantage of digital media. Factors that influence a micro firm’s innovative activities are examined and the development and implementation of digital media in the firm’s marketing practice is explored. Despite the significance of marketing and innovation to SMEs, a lack of literature and theory on innovation in marketing theory exists. Research suggests that small firms’ marketing practitioners and entrepreneurs have identified their marketing focus on the 4Is. This paper builds on knowledge in innovation and marketing and examines the process in a micro firm. A qualitative approach is applied using action research and case study approach. The relevant literature is reviewed as the starting point to diagnose problems and issues anticipated by business practitioners. A longitudinal study is used to illustrate the process of actions taken with evaluations and reflections presented. The exploration illustrates that in practice much of the marketing activities within micro firms are driven by incremental innovation. This research emphasises that integrating Information Communication Technologies (ICTs) successfully in marketing requires marketers to take an active managerial role far beyond their traditional areas of competence and authority.", "title": "" }, { "docid": "neg:1840095_18", "text": "Personalized predictive medicine necessitates the modeling of patient illness and care processes, which inherently have long-term temporal dependencies. Healthcare observations, recorded in electronic medical records, are episodic and irregular in time. We introduce DeepCare, an end-toend deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states and predicts future medical outcomes. At the data level, DeepCare represents care episodes as vectors in space, models patient health state trajectories through explicit memory of historical records. Built on Long Short-Term Memory (LSTM), DeepCare introduces time parameterizations to handle irregular timed events by moderating the forgetting and consolidation of memory cells. DeepCare also incorporates medical interventions that change the course of illness and shape future medical risk. Moving up to the health state level, historical and present health states are then aggregated through multiscale temporal pooling, before passing through a neural network that estimates future outcomes. We demonstrate the efficacy of DeepCare for disease progression modeling, intervention recommendation, and future risk prediction. 
On two important cohorts with heavy social and economic burden – diabetes and mental health – the results show improved modeling and risk prediction accuracy.", "title": "" }, { "docid": "neg:1840095_19", "text": "Context: Recent research discusses the use of ontologies, dictionaries and thesaurus as a means to improve activity labels of process models. However, the trade-off between quality improvement and extra effort is still an open question. It is suspected that ontology-based support could require additional effort for the modeler. Objective: In this paper, we investigate to which degree ontology-based support potentially increases the effort of modeling. We develop a theoretical perspective grounded in cognitive psychology, which leads us to the definition of three design principles for appropriate ontology-based support. The objective is to evaluate the design principles through empirical experimentation. Method: We tested the effect of presenting relevant content from the ontology to the modeler by means of a quantitative analysis. We performed controlled experiments using a prototype, which generates a simplified and context-aware visual representation of the ontology. It logs every action of the process modeler for analysis. The experiment refers to novice modelers and was performed as between-subject design with vs. without ontology-based support. It was carried out with two different samples. Results: Part of the effort-related variables we measured showed significant statistical difference between the group with and without ontology-based support. Overall, for the collected data, the ontology support achieved good results. Conclusion: We conclude that it is feasible to provide ontology-based support to the modeler in order to improve process modeling without strongly compromising time consumption and cognitive effort.", "title": "" } ]
1840096
Local-Global Vectors to Improve Unigram Terminology Extraction
[ { "docid": "pos:1840096_0", "text": "Keyphrase extraction from a given document is a difficult task that requires not only local statistical information but also extensive background knowledge. In this paper, we propose a graph-based ranking approach that uses information supplied by word embedding vectors as the background knowledge. We first introduce a weighting scheme that computes informativeness and phraseness scores of words using the information supplied by both word embedding vectors and local statistics. Keyphrase extraction is performed by constructing a weighted undirected graph for a document, where nodes represent words and edges are co-occurrence relations of two words within a defined window size. The weights of edges are computed by the afore-mentioned weighting scheme, and a weighted PageRank algorithm is used to compute final scores of words. Keyphrases are formed in post-processing stage using heuristics. Our work is evaluated on various publicly available datasets with documents of varying length. We show that evaluation results are comparable to the state-of-the-art algorithms, which are often typically tuned to a specific corpus to achieve the claimed results.", "title": "" } ]
[ { "docid": "neg:1840096_0", "text": "This paper presents a novel approach for creation of topographical function and object markers used within watershed segmentation. Typically, marker-driven watershed segmentation extracts seeds indicating the presence of objects or background at specific image locations. The marker locations are then set to be regional minima within the topological surface (typically, the gradient of the original input image), and the watershed algorithm is applied. In contrast, our approach uses two classifiers, one trained to produce markers, the other trained to produce object boundaries. As a result of using machine-learned pixel classification, the proposed algorithm is directly applicable to both single channel and multichannel image data. Additionally, rather than flooding the gradient image, we use the inverted probability map produced by the second aforementioned classifier as input to the watershed algorithm. Experimental results demonstrate the superior performance of the classification-driven watershed segmentation algorithm for the tasks of 1) image-based granulometry and 2) remote sensing", "title": "" }, { "docid": "neg:1840096_1", "text": "Industry 4.0 has become more popular due to recent developments in cyber-physical systems, big data, cloud computing, and industrial wireless networks. Intelligent manufacturing has produced a revolutionary change, and evolving applications, such as product lifecycle management, are becoming a reality. In this paper, we propose and implement a manufacturing big data solution for active preventive maintenance in manufacturing environments. First, we provide the system architecture that is used for active preventive maintenance. Then, we analyze the method used for collection of manufacturing big data according to the data characteristics. Subsequently, we perform data processing in the cloud, including the cloud layer architecture, the real-time active maintenance mechanism, and the offline prediction and analysis method. Finally, we analyze a prototype platform and implement experiments to compare the traditionally used method with the proposed active preventive maintenance method. The manufacturing big data method used for active preventive maintenance has the potential to accelerate implementation of Industry 4.0.", "title": "" }, { "docid": "neg:1840096_2", "text": "Feature selection and ensemble classification increase system efficiency and accuracy in machine learning, data mining and biomedical informatics. This research presents an analysis of the effect of removing irrelevant and redundant features with ensemble classifiers using two datasets from UCI machine learning repository. Accuracy and computational time were evaluated by four base classifiers; NaiveBayes, Multilayer Perceptron, Support Vector Machines and Decision Tree. Eliminating irrelevant features improves accuracy and reduces computational time while removing redundant features reduces computational time and reduces accuracy of the ensemble.", "title": "" }, { "docid": "neg:1840096_3", "text": "Negative correlation learning (NCL) aims to produce ensembles with sound generalization capability through controlling the disagreement among base learners’ outputs. Such a learning scheme is usually implemented by using feed-forward neural networks with error back-propagation algorithms (BPNNs). However, it suffers from slow convergence, local minima problem and model uncertainties caused by the initial weights and the setting of learning parameters. 
To achieve a better solution, this paper employs the random vector functional link (RVFL) networks as base components, and incorporates with the NCL strategy for building neural network ensembles. The basis functions of the base models are generated randomly and the parameters of the RVFL networks can be determined by solving a linear equation system. An analytical solution is derived for these parameters, where a cost function defined for NCL and the well-known least squares method are used. To examine the merits of our proposed algorithm, a comparative study is carried out with nine benchmark datasets. Results indicate that our approach outperforms other ensembling techniques on the testing datasets in terms of both effectiveness and efficiency. Crown Copyright 2013 Published by Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "neg:1840096_4", "text": "Robert M. Golub, MD, Editor The JAMA Patient Page is a public service of JAMA. The information and recommendations appearing on this page are appropriate in most instances, but they are not a substitute for medical diagnosis. For specific information concerning your personal medical condition, JAMA suggests that you consult your physician. This page may be photocopied noncommercially by physicians and other health care professionals to share with patients. To purchase bulk reprints, call 312/464-0776. C H IL D H E A TH The Journal of the American Medical Association", "title": "" }, { "docid": "neg:1840096_5", "text": "The purpose of this study was to investigate the effect of the ultrasonic cavitation versus low level laser therapy in the treatment of abdominal adiposity in female post gastric bypass. Subjects: Sixty female suffering from localized fat deposits at the abdomen area after gastric bypass were divided randomly and equally into three equal groups Group (1): were received low level laser therapy plus bicycle exercises and abdominal exercises for 3 months, Group (2): were received ultrasonic cavitation therapy plus bicycle exercises and abdominal exercises for 3 months, and Group (3): were received bicycle exercises and abdominal exercises for 3 months. Methods: data were obtained for each patient from waist circumferences, skin fold and ultrasonography measurements were done after six weeks postoperative (preexercise) and at three months postoperative. The physical therapy program began, six weeks postoperative for experimental group. Including aerobic exercises performed on the stationary bicycle, for 30 min, 3 sessions per week for three months Results: showed a statistically significant decrease in waist circumferences, skin fold and ultrasonography measurements in the three groups, with a higher rate of reduction in Group (1) and Group (2) .Also there was a non-significant difference between Group (1) and Group (2). Conclusion: these results suggested that bothlow level laser therapy and ultrasonic cavitation had a significant effect on abdominal adiposity after gastric bypass in female.", "title": "" }, { "docid": "neg:1840096_6", "text": "In this paper, we compare two methods for article summarization. The first method is mainly based on term-frequency, while the second method is based on ontology. We build an ontology database for analyzing the main topics of the article. After identifying the main topics and determining their relative significance, we rank the paragraphs based on the relevance between main topics and each individual paragraph. 
Depending on the ranks, we choose desired proportion of paragraphs as summary. Experimental results indicate that both methods offer similar accuracy in their selections of the paragraphs.", "title": "" }, { "docid": "neg:1840096_7", "text": "The wide availability of sensing devices in the medical domain causes the creation of large and very large data sets. Hence, tasks as the classification in such data sets becomes more and more difficult. Deep Neural Networks (DNNs) are very effective in classification, yet finding the best values for their hyper-parameters is a difficult and time-consuming task. This paper introduces an approach to decrease execution times to automatically find good hyper-parameter values for DNN through Evolutionary Algorithms when classification task is faced. This decrease is obtained through the combination of two mechanisms. The former is constituted by a distributed version for a Differential Evolution algorithm. The latter is based on a procedure aimed at reducing the size of the training set and relying on a decomposition into cubes of the space of the data set attributes. Experiments are carried out on a medical data set about Obstructive Sleep Anpnea. They show that sub-optimal DNN hyper-parameter values are obtained in a much lower time with respect to the case where this reduction is not effected, and that this does not come to the detriment of the accuracy in the classification over the test set items.", "title": "" }, { "docid": "neg:1840096_8", "text": "Brain-computer interaction has already moved from assistive care to applications such as gaming. Improvements in usability, hardware, signal processing, and system integration should yield applications in other nonmedical areas.", "title": "" }, { "docid": "neg:1840096_9", "text": "The present study aimed to examine the effectiveness of advertisements in enhancing consumers’ purchasing intention on Facebook in 2013. It is an applied study in terms of its goals, and a descriptive survey one in terms of methodology. The statistical population included all undergraduate students in Cypriot universities. An 11-item researcher-made questionnaire was used to compare and analyze the effectiveness of advertisements. Data analysis was carried out using SPSS17, the parametric statistical method of t-test, and the non-parametric Friedman test. The results of the study showed that Facebook advertising significantly affected brand image and brand equity, both of which factors contributed to a significant change in purchasing intention. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840096_10", "text": "We present an advanced and robust technology to realize 3D hollow plasmonic nanostructures which are tunable in size, shape, and layout. The presented architectures offer new and unconventional properties such as the realization of 3D plasmonic hollow nanocavities with high electric field confinement and enhancement, finely structured extinction profiles, and broad band optical absorption. The 3D nature of the devices can overcome intrinsic difficulties related to conventional architectures in a wide range of multidisciplinary applications.", "title": "" }, { "docid": "neg:1840096_11", "text": "Genetic deficiency of ectodysplasin A (EDA) causes X-linked hypohidrotic ectodermal dysplasia (XLHED), in which the development of sweat glands is irreversibly impaired, an condition that can lead to life-threatening hyperthermia. 
We observed normal development of mouse fetuses with Eda mutations after they had been exposed in utero to a recombinant protein that includes the receptor-binding domain of EDA. We administered this protein intraamniotically to two affected human twins at gestational weeks 26 and 31 and to a single affected human fetus at gestational week 26; the infants, born in week 33 (twins) and week 39 (singleton), were able to sweat normally, and XLHED-related illness had not developed by 14 to 22 months of age. (Funded by Edimer Pharmaceuticals and others.).", "title": "" }, { "docid": "neg:1840096_12", "text": "BACKGROUND\nNutrient status of B vitamins, particularly folate and vitamin B-12, may be related to cognitive ageing but epidemiological evidence remains inconclusive.\n\n\nOBJECTIVE\nThe aim of this study was to estimate the association of serum folate and vitamin B-12 concentrations with cognitive function in middle-aged and older adults from three Central and Eastern European populations.\n\n\nMETHODS\nMen and women aged 45-69 at baseline participating in the Health, Alcohol and Psychosocial factors in Eastern Europe (HAPIEE) study were recruited in Krakow (Poland), Kaunas (Lithuania) and six urban centres in the Czech Republic. Tests of immediate and delayed recall, verbal fluency and letter search were administered at baseline and repeated in 2006-2008. Serum concentrations of biomarkers at baseline were measured in a sub-sample of participants. Associations of vitamin quartiles with baseline (n=4166) and follow-up (n=2739) cognitive domain-specific z-scores were estimated using multiple linear regression.\n\n\nRESULTS\nAfter adjusting for confounders, folate was positively associated with letter search and vitamin B-12 with word recall in cross-sectional analyses. In prospective analyses, participants in the highest quartile of folate had higher verbal fluency (p<0.01) and immediate recall (p<0.05) scores compared to those in the bottom quartile. In addition, participants in the highest quartile of vitamin B-12 had significantly higher verbal fluency scores (β=0.12; 95% CI=0.02, 0.21).\n\n\nCONCLUSIONS\nFolate and vitamin B-12 were positively associated with performance in some but not all cognitive domains in older Central and Eastern Europeans. These findings do not lend unequivocal support to potential importance of folate and vitamin B-12 status for cognitive function in older age. Long-term longitudinal studies and randomised trials are required before drawing conclusions on the role of these vitamins in cognitive decline.", "title": "" }, { "docid": "neg:1840096_13", "text": "This paper describes an experimental setup and results of user tests focusing on the perception of temporal characteristics of vibration of a mobile device. The experiment consisted of six vibration stimuli of different length. We asked the subjects to score the subjective perception level in a five point Lickert scale. The results suggest that the optimal duration of the control signal should be between 50 and 200 ms in this specific case. Longer durations were perceived as being irritating.", "title": "" }, { "docid": "neg:1840096_14", "text": "In this paper we argue why it is necessary to associate linguistic information with ontologies and why more expressive models, beyond RDFS, OWL and SKOS, are needed to capture the relation between natural language constructs on the one hand and ontological entities on the other. 
We argue that in the light of tasks such as ontology-based information extraction, ontology learning and population from text and natural language generation from ontologies, currently available datamodels are not sufficient as they only allow to associate atomic terms without linguistic grounding or structure to ontology elements. Towards realizing a more expressive model for associating linguistic information to ontology elements, we base our work presented here on previously developed models (LingInfo, LexOnto, LMF) and present a new joint model for linguistic grounding of ontologies called LexInfo. LexInfo combines essential design aspects of LingInfo and LexOnto and builds on a sound model for representing computational lexica called LMF which has been recently approved as a standard under ISO.", "title": "" }, { "docid": "neg:1840096_15", "text": "Since the first application of indirect composite resins, numerous advances in adhesive dentistry have been made. Furthermore, improvements in structure, composition and polymerization techniques led to the development of a second-generation of indirect resin composites (IRCs). IRCs have optimal esthetic performance, enhanced mechanical properties and reparability. Due to these characteristics they can be used for a wide range of clinical applications. IRCs can be used for inlays, onlays, crowns’ veneering material, fixed dentures prostheses and removable prostheses (teeth and soft tissue substitution), both on teeth and implants. The purpose of this article is to review the properties of these materials and describe a case series of patients treated with different type of restorations in various indications. *Corresponding author: Aikaterini Petropoulou, Clinical Instructor, Department of Prosthodontics, School of Dentistry, National and Kapodistrian University of Athens, Greece, Tel: +306932989104; E-mail: aikatpetropoulou@gmail.com Received November 10, 2013; Accepted November 28, 2013; Published November 30, 2013 Citation: Petropoulou A, Pantzari F, Nomikos N, Chronopoulos V, Kourtis S (2013) The Use of Indirect Resin Composites in Clinical Practice: A Case Series. Dentistry 3: 173. doi:10.4172/2161-1122.1000173 Copyright: © 2013 Petropoulou A, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.", "title": "" }, { "docid": "neg:1840096_16", "text": "In this work we present, apply, and evaluate a novel, interactive visualization model for comparative analysis of structural variants and rearrangements in human and cancer genomes, with emphasis on data integration and uncertainty visualization. To support both global trend analysis and local feature detection, this model enables explorations continuously scaled from the high-level, complete genome perspective, down to the low-level, structural rearrangement view, while preserving global context at all times. We have implemented these techniques in Gremlin, a genomic rearrangement explorer with multi-scale, linked interactions, which we apply to four human cancer genome data sets for evaluation. Using an insight-based evaluation methodology, we compare Gremlin to Circos, the state-of-the-art in genomic rearrangement visualization, through a small user study with computational biologists working in rearrangement analysis. 
Results from user study evaluations demonstrate that this visualization model enables more total insights, more insights per minute, and more complex insights than the current state-of-the-art for visual analysis and exploration of genome rearrangements.", "title": "" }, { "docid": "neg:1840096_17", "text": "This paper describes a novel two-degree-of-freedom robotic interface to train opening/closing of the hand and knob manipulation. The mechanical design, based on two parallelogram structures holding an exchangeable button, offers the possibility to adapt the interface to various hand sizes and finger orientations, as well as to right-handed or left-handed subjects. The interaction with the subject is measured by means of position encoders and four force sensors located close to the output measuring grasping and insertion forces. Various knobs can be mounted on the interface, including a cone mechanism to train a complete opening movement from a strongly contracted and closed hand to a large opened position. We describe the design based on measured biomechanics, the redundant safety mechanisms as well as the actuation and control architecture. Preliminary experiments show the performance of this interface and some of the possibilities it offers for the rehabilitation of hand function.", "title": "" }, { "docid": "neg:1840096_18", "text": "This paper aims to provide a brief review of cloud computing, followed by an analysis of cloud computing environment using the PESTEL framework. The future implications and limitations of adopting cloud computing as an effective eco-friendly strategy to reduce carbon footprint are also discussed in the paper. This paper concludes with a recommendation to guide researchers to further examine this phenomenon. Organizations today face tough economic times, especially following the recent global financial crisis and the evidence of catastrophic climate change. International and local businesses find themselves compelled to review their strategies. They need to consider their organizational expenses and priorities and to strategically consider how best to save. Traditionally, Information Technology (IT) department is one area that would be affected negatively in the review. Continuing to fund these strategic technologies during an economic downturn is vital to organizations. It is predicted that in coming years IT resources will only be available online. More and more organizations are looking at operating smarter businesses by investigating technologies such as cloud computing, virtualization and green IT to find ways to cut costs and increase efficiencies.", "title": "" }, { "docid": "neg:1840096_19", "text": "In this work we propose Ask Me Any Rating (AMAR), a novel content-based recommender system based on deep neural networks which is able to produce top-N recommendations leveraging user and item embeddings which are learnt from textual information describing the items. A comprehensive experimental evaluation conducted on stateof-the-art datasets showed a significant improvement over all the baselines taken into account.", "title": "" } ]
1840097
YAMAMA: Yet Another Multi-Dialect Arabic Morphological Analyzer
[ { "docid": "pos:1840097_0", "text": "If two translation systems differ differ in performance on a test set, can we trust that this indicates a difference in true system quality? To answer this question, we describe bootstrap resampling methods to compute statistical significance of test results, and validate them on the concrete example of the BLEU score. Even for small test sizes of only 300 sentences, our methods may give us assurances that test result differences are real.", "title": "" }, { "docid": "pos:1840097_1", "text": "In this paper, we present MADAMIRA, a system for morphological analysis and disambiguation of Arabic that combines some of the best aspects of two previously commonly used systems for Arabic processing, MADA (Habash and Rambow, 2005; Habash et al., 2009; Habash et al., 2013) and AMIRA (Diab et al., 2007). MADAMIRA improves upon the two systems with a more streamlined Java implementation that is more robust, portable, extensible, and is faster than its ancestors by more than an order of magnitude. We also discuss an online demo (see http://nlp.ldeo.columbia.edu/madamira/) that highlights these aspects.", "title": "" } ]
[ { "docid": "neg:1840097_0", "text": "Online personal health record (PHR) enables patients to manage their own medical records in a centralized way, which greatly facilitates the storage, access and sharing of personal health data. With the emergence of cloud computing, it is attractive for the PHR service providers to shift their PHR applications and storage into the cloud, in order to enjoy the elastic resources and reduce the operational cost. However, by storing PHRs in the cloud, the patients lose physical control to their personal health data, which makes it necessary for each patient to encrypt her PHR data before uploading to the cloud servers. Under encryption, it is challenging to achieve fine-grained access control to PHR data in a scalable and efficient way. For each patient, the PHR data should be encrypted so that it is scalable with the number of users having access. Also, since there are multiple owners (patients) in a PHR system and every owner would encrypt her PHR files using a different set of cryptographic keys, it is important to reduce the key distribution complexity in such multi-owner settings. Existing cryptographic enforced access control schemes are mostly designed for the single-owner scenarios. In this paper, we propose a novel framework for access control to PHRs within cloud computing environment. To enable fine-grained and scalable access control for PHRs, we leverage attribute based encryption (ABE) techniques to encrypt each patients’ PHR data. To reduce the key distribution complexity, we divide the system into multiple security domains, where each domain manages only a subset of the users. In this way, each patient has full control over her own privacy, and the key management complexity is reduced dramatically. Our proposed scheme is also flexible, in that it supports efficient and on-demand revocation of user access rights, and break-glass access under emergency scenarios.", "title": "" }, { "docid": "neg:1840097_1", "text": "In this paper we propose a solution to the problem of body part segmentation in noisy silhouette images. In developing this solution we revisit the issue of insufficient labeled training data, by investigating how synthetically generated data can be used to train general statistical models for shape classification. In our proposed solution we produce sequences of synthetically generated images, using three dimensional rendering and motion capture information. Each image in these sequences is labeled automatically as it is generated and this labeling is based on the hand labeling of a single initial image.We use shape context features and Hidden Markov Models trained based on this labeled synthetic data. This model is then used to segment silhouettes into four body parts; arms, legs, body and head. Importantly, in all the experiments we conducted the same model is employed with no modification of any parameters after initial training.", "title": "" }, { "docid": "neg:1840097_2", "text": "Unmanned Aerial Vehicle (UAV) surveillance systems allow for highly advanced and safe surveillance of hazardous locations. Further, multi-purpose drones can be widely deployed for not only gathering information but also analyzing the situation from sensed data. However, mobile drone systems have limited computing resources and battery power which makes it a challenge to use these systems for long periods of time or in fully autonomous modes. 
In this paper, we propose an Adaptive Computation Offloading Drone System (ACODS) architecture with reliable communication for increasing drone operating time. We design not only the response time prediction module for mission critical task offloading decision but also task offloading management module via the Multipath TCP (MPTCP). Through performance evaluation via our prototype implementation, we show that the proposed algorithm achieves significant increase in drone operation time and significantly reduces the response time.", "title": "" }, { "docid": "neg:1840097_3", "text": "Low-field extremity magnetic resonance imaging (lfMRI) is currently commercially available and has been used clinically to evaluate rheumatoid arthritis (RA). However, one disadvantage of this new modality is that the field of view (FOV) is too small to assess hand and wrist joints simultaneously. Thus, we have developed a new lfMRI system, compacTscan, with a FOV that is large enough to simultaneously assess the entire wrist to proximal interphalangeal joint area. In this work, we examined its clinical value compared to conventional 1.5 tesla (T) MRI. The comparison involved evaluating three RA patients by both 0.3 T compacTscan and 1.5 T MRI on the same day. Bone erosion, bone edema, and synovitis were estimated by our new compact MRI scoring system (cMRIS) and the kappa coefficient was calculated on a joint-by-joint basis. We evaluated a total of 69 regions. Bone erosion was detected in 49 regions by compacTscan and in 48 regions by 1.5 T MRI, while the total erosion score was 77 for compacTscan and 76.5 for 1.5 T MRI. These findings point to excellent agreement between the two techniques (kappa = 0.833). Bone edema was detected in 14 regions by compacTscan and in 19 by 1.5 T MRI, and the total edema score was 36.25 by compacTscan and 47.5 by 1.5 T MRI. Pseudo-negative findings were noted in 5 regions. However, there was still good agreement between the techniques (kappa = 0.640). Total number of evaluated joints was 33. Synovitis was detected in 13 joints by compacTscan and 14 joints by 1.5 T MRI, while the total synovitis score was 30 by compacTscan and 32 by 1.5 T MRI. Thus, although 1 pseudo-positive and 2 pseudo-negative findings resulted from the joint evaluations, there was again excellent agreement between the techniques (kappa = 0.827). Overall, the data obtained by our compacTscan system showed high agreement with those obtained by conventional 1.5 T MRI with regard to diagnosis and the scoring of bone erosion, edema, and synovitis. We conclude that compacTscan is useful for diagnosis and estimation of disease activity in patients with RA.", "title": "" }, { "docid": "neg:1840097_4", "text": "Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features. The objective is to make these higherlevel representations more abstract, with their individual features more invariant to most of the variations that are typically present in the training distribution, while collectively preserving as much as possible of the information in the input. Ideally, we would like these representations to disentangle the unknown factors of variation that underlie the training distribution. 
Such unsupervised learning of representations can be exploited usefully under the hypothesis that the input distribution P (x) is structurally related to some task of interest, say predicting P (y|x). This paper focusses on why unsupervised pre-training of representations can be useful, and how it can be exploited in the transfer learning scenario, where we care about predictions on examples that are not from the same distribution as the training distribution.", "title": "" }, { "docid": "neg:1840097_5", "text": "BACKGROUND\nFingertip injuries involve varying degree of fractures of the distal phalanx and nail bed or nail plate disruptions. The treatment modalities recommended for these injuries include fracture fixation with K-wire and meticulous repair of nail bed after nail removal and later repositioning of nail or stent substitute into the nail fold by various methods. This study was undertaken to evaluate the functional outcome of vertical figure-of-eight tension band suture for finger nail disruptions with fractures of distal phalanx.\n\n\nMATERIALS AND METHODS\nA series of 40 patients aged between 4 and 58 years, with 43 fingernail disruptions and fracture of distal phalanges, were treated with vertical figure-of-eight tension band sutures without formal fixation of fracture fragments and the results were reviewed. In this method, the injuries were treated by thoroughly cleaning the wound, reducing the fracture fragments, anatomical replacement of nail plate, and securing it by vertical figure-of-eight tension band suture.\n\n\nRESULTS\nAll patients were followed up for a minimum of 3 months. The clinical evaluation of the patients was based on radiological fracture union and painless pinch to determine fingertip stability. Every single fracture united and every fingertip was clinically stable at the time of final followup. We also evaluated our results based on visual analogue scale for pain and range of motion of distal interphalangeal joint. Two sutures had to be revised due to over tensioning and subsequent vascular compromise within minutes of repair; however, this did not affect the final outcome.\n\n\nCONCLUSION\nThis technique is simple, secure, and easily reproducible. It neither requires formal repair of injured nail bed structures nor fixation of distal phalangeal fracture and results in uncomplicated reformation of nail plate and uneventful healing of distal phalangeal fractures.", "title": "" }, { "docid": "neg:1840097_6", "text": "We address personalization issues of image captioning, which have not been discussed yet in previous research. For a query image, we aim to generate a descriptive sentence, accounting for prior knowledge such as the users active vocabularies in previous documents. As applications of personalized image captioning, we tackle two post automation tasks: hashtag prediction and post generation, on our newly collected Instagram dataset, consisting of 1.1M posts from 6.3K users. We propose a novel captioning model named Context Sequence Memory Network (CSMN). Its unique updates over previous memory network models include (i) exploiting memory as a repository for multiple types of context information, (ii) appending previously generated words into memory to capture long-term information without suffering from the vanishing gradient problem, and (iii) adopting CNN memory structure to jointly represent nearby ordered memory slots for better context understanding. 
With quantitative evaluation and user studies via Amazon Mechanical Turk, we show the effectiveness of the three novel features of CSMN and its performance enhancement for personalized image captioning over state-of-the-art captioning models.", "title": "" }, { "docid": "neg:1840097_7", "text": "Prior research by Kornell and Bjork (2007) and Hartwig and Dunlosky (2012) has demonstrated that college students tend to employ study strategies that are far from optimal. We examined whether individuals in the broader—and typically older—population might hold different beliefs about how best to study and learn, given their more extensive experience outside of formal coursework and deadlines. Via a web-based survey, however, we found striking similarities: Learners’ study decisions tend to be driven by deadlines, and the benefits of activities such as self-testing and reviewing studied materials are mostly unappreciated. We also found evidence, however, that one’s mindset with respect to intelligence is related to one’s habits and beliefs: Individuals who believe that intelligence can be increased through effort were more likely to value the pedagogical benefits of self-testing, to restudy, and to be intrinsically motivated to learn, compared to individuals who believe that intelligence is fixed. With the world’s knowledge at our fingertips, there are increasing opportunities to learn on our own, not only during the years of formal education, but also across our lifespan as our careers, hobbies, and interests change. The rapid pace of technological change has also made such self-directed learning necessary: the ability to effectively self-regulate one’s learning—monitoring one’s own learning and implementing beneficial study strategies—is, arguably, more important than ever before. Decades of research have revealed the efficacy of various study strategies (see Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013, for a review of effective—and less effective—study techniques). Bjork (1994) coined the term, “desirable difficulties,” to refer to the set of study conditions or study strategies that appear to slow down the acquisition of to-be-learned materials and make the learning process seem more effortful, but then enhance long-term retention and transfer, presumably because contending with those difficulties engages processes that support learning and retention. Examples of desirable difficulties include generating information or testing oneself (instead of reading or re-reading information—a relatively passive activity), spacing out repeated study opportunities (instead of cramming), and varying conditions of practice (rather than keeping those conditions constant and predictable). Many recent findings, however—both survey-based and experimental—have revealed that learners continue to study in non-optimal ways.
Learners do not appear, for example, to understand two of the most robust effects from the cognitive psychology literature—namely, the testing effect (that practicing retrieval leads to better long-term retention, compared even to re-reading; e.g., Roediger & Karpicke, 2006a) and the spacing effect (that spacing repeated study sessions leads to better long-term retention than does massing repetitions; e.g., Cepeda, Pashler, Vul, Wixted, & Rohrer, 2006; Dempster, 1988). A survey of 472 undergraduate students by Kornell and Bjork (2007)—which was replicated by Hartwig and Dunlosky (2012)—showed that students underappreciate the learning benefits of testing. Similarly, Karpicke, Butler, and Roediger (2009) surveyed students’ study strategies and found that re-reading was by far the most popular study strategy and that self-testing tended to be used only to assess whether some level of learning had been achieved, not to enhance subsequent recall. Even when students have some appreciation of effective strategies they often do not implement those strategies. Susser and McCabe (2013), for example, showed that even though students reported understanding the benefits of spaced learning over massed learning, they often do not space their study sessions on a given topic, particularly if their upcoming test is going to have a multiple-choice format, or if they think the material is relatively easy, or if they are simply too busy. In fact, Kornell and Bjork’s (2007) survey showed that students’ study decisions tended to be driven by impending deadlines, rather than by learning goals,", "title": "" }, { "docid": "neg:1840097_8", "text": "The primary goal of seismic provisions in building codes is to protect life safety through the prevention of structural collapse. To evaluate the extent to which current and past building code provisions meet this objective, the authors have conducted detailed assessments of collapse risk of reinforced-concrete moment frame buildings, including both ‘ductile’ frames that conform to current building code requirements, and ‘non-ductile’ frames that are designed according to out-dated (pre-1975) building codes. Many aspects of the assessment process can have a significant impact on the evaluated collapse performance; this study focuses on methods of representing modeling parameter uncertainties in the collapse assessment process. Uncertainties in structural component strength, stiffness, deformation capacity, and cyclic deterioration are considered for non-ductile and ductile frame structures of varying heights. To practically incorporate these uncertainties in the face of the computationally intensive nonlinear response analyses needed to simulate collapse, the modeling uncertainties are assessed through a response surface, which describes the median collapse capacity as a function of the model random variables. The response surface is then used in conjunction with Monte Carlo methods to quantify the effect of these modeling uncertainties on the calculated collapse fragilities.
Comparisons of the response surface based approach and a simpler approach, namely the first-order second-moment (FOSM) method, indicate that FOSM can lead to inaccurate results in some cases, particularly when the modeling uncertainties cause a shift in the prediction of the median collapse point. An alternate simplified procedure is proposed that combines aspects of the response surface and FOSM methods, providing an efficient yet accurate technique to characterize model uncertainties, accounting for the shift in median response. The methodology for incorporating uncertainties is presented here with emphasis on the collapse limit state, but is also appropriate for examining the effects of modeling uncertainties on other structural limit states. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840097_9", "text": "Eight years ago the journal Transcultural Psychiatry published the results of an epidemiological study (Chandler and Lalonde 1998) in which the highly variable rates of youth suicide among British Columbia’s First Nations were related to six markers of “cultural continuity” – community-level variables meant to document the extent to which each of the province’s almost 200 Aboriginal “bands” had taken steps to preserve their cultural past and to secure future control of their civic lives. Two key findings emerged from these earlier efforts. The first was that, although the province-wide rate of Aboriginal youth suicide was sharply elevated (more than 5 times the national average), this commonly reported summary statistic was labelled an “actuarial fiction” that failed to capture the local reality of even one of the province’s First Nations communities. Counting up all of the deaths by suicide and then simply dividing through by the total number of available Aboriginal youth obscures what is really interesting – the dramatic differences in the incidence of youth suicide that actually distinguish one band or tribal council from the next. In fact, more than half of the province’s bands reported no youth suicides during the 6-year period (1987-1992) covered by this study, while more than 90% of the suicides occurred in less than 10% of the bands. Clearly, our data demonstrated, youth suicide is not an “Aboriginal” problem per se but a problem confined to only some Aboriginal communities. Second, all six of the “cultural continuity” factors originally identified – measures intended to mark the degree to which individual Aboriginal communities had successfully taken steps to secure their cultural past in light of an imagined future – proved to be strongly related to the presence or absence of youth suicide. Every community characterized by all six of these protective factors experienced no youth suicides during the 6-year reporting period, whereas those bands in which none of these factors were present suffered suicide rates more than 10 times the national average. Because these findings were seen by us, and have come to be seen by others,1 not only as clarifying the link between cultural continuity and reduced suicide risk but also as having important policy implications, we have undertaken to replicate and broaden our earlier research efforts. We have done this in three ways. 
First, we have extended our earlier examination of the community-by-community incidence of Aboriginal youth suicides to include also the additional", "title": "" }, { "docid": "neg:1840097_10", "text": "Pesticides (herbicides, fungicides or insecticides) play an important role in agriculture to control the pests and increase the productivity to meet the demand of foods by a remarkably growing population. Pesticides application thus became one of the important inputs for the high production of corn and wheat in USA and UK, respectively. It also increased the crop production in China and India [1-4]. Although extensive use of pesticides improved in securing enough crop production worldwide however; these pesticides are equally toxic or harmful to nontarget organisms like mammals, birds etc and thus their presence in excess can cause serious health and environmental problems. Pesticides have thus become environmental pollutants as they are often found in soil, water, atmosphere and agricultural products, in harmful levels, posing an environmental threat. Its residual presence in agricultural products and foods can also exhibit acute or chronic toxicity on human health. Even at low levels, it can cause adverse effects on humans, plants, animals and ecosystems. Thus, monitoring of these pesticide and its residues become extremely important to ensure that agricultural products have permitted levels of pesticides [5-6]. Majority of pesticides belong to four classes, namely organochlorines, organophosphates, carbamates and pyrethroids. Organophosphates pesticides are a class of insecticides, of which many are highly toxic [7]. Until the 21st century, they were among the most widely used insecticides which included parathion, malathion, methyl parathion, chlorpyrifos, diazinon, dichlorvos, dimethoate, monocrotophos and profenofos. Organophosphate pesticides cause toxicity by inhibiting acetylcholinesterase enzyme [8]. It acts as a poison to insects and other animals, such as birds, amphibians and mammals, primarily by phosphorylating the acetylcholinesterase enzyme (AChE) present at nerve endings. This leads to the loss of available AChE and because of the excess acetylcholine (ACh, the impulse-transmitting substance), the effected organ becomes over stimulated. The enzyme is critical to control the transmission of nerve impulse from nerve fibers to the smooth and skeletal muscle cells, secretary cells and autonomic ganglia, and within the central nervous system (CNS). Once the enzyme reaches a critical level due to inactivation by phosphorylation, symptoms and signs of cholinergic poisoning get manifested [9].", "title": "" }, { "docid": "neg:1840097_11", "text": "$$\\mathcal{Q}$$ -learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for $$\\mathcal{Q}$$ -learning based on that outlined in Watkins (1989). We show that $$\\mathcal{Q}$$ -learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. 
We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many $$\\mathcal{Q}$$ values can be changed each iteration, rather than just one.", "title": "" }, { "docid": "neg:1840097_12", "text": "Evidence has accrued to suggest that there are 2 distinct dimensions of narcissism, which are often labeled grandiose and vulnerable narcissism. Although individuals high on either of these dimensions interact with others in an antagonistic manner, they differ on other central constructs (e.g., Neuroticism, Extraversion). In the current study, we conducted an exploratory factor analysis of 3 prominent self-report measures of narcissism (N=858) to examine the convergent and discriminant validity of the resultant factors. A 2-factor structure was found, which supported the notion that these scales include content consistent with 2 relatively distinct constructs: grandiose and vulnerable narcissism. We then compared the similarity of the nomological networks of these dimensions in relation to indices of personality, interpersonal behavior, and psychopathology in a sample of undergraduates (n=238). Overall, the nomological networks of vulnerable and grandiose narcissism were unrelated. The current results support the need for a more explicit parsing of the narcissism construct at the level of conceptualization and assessment.", "title": "" }, { "docid": "neg:1840097_13", "text": "Deep embeddings answer one simple question: How similar are two images? Learning these embeddings is the bedrock of verification, zero-shot learning, and visual search. The most prominent approaches optimize a deep convolutional network with a suitable loss function, such as contrastive loss or triplet loss. While a rich line of work focuses solely on the loss functions, we show in this paper that selecting training examples plays an equally important role. We propose distance weighted sampling, which selects more informative and stable examples than traditional approaches. In addition, we show that a simple margin based loss is sufficient to outperform all other loss functions. We evaluate our approach on the Stanford Online Products, CAR196, and the CUB200-2011 datasets for image retrieval and clustering, and on the LFW dataset for face verification. Our method achieves state-of-the-art performance on all of them.", "title": "" }, { "docid": "neg:1840097_14", "text": "During the last years terrestrial laser scanning became a standard method of data acquisition for various applications in close range domain, like industrial production, forest inventories, plant engineering and construction, car navigation and – one of the most important fields – the recording and modelling of buildings. To use laser scanning data in an adequate way, a quality assessment of the laser scanner is inevitable. In the literature some publications can be found concerning the data quality of terrestrial laser scanners. Most of these papers concentrate on the geometrical accuracy of the scanner (errors of instrument axis, range accuracy using target etc.). In this paper a special aspect of quality assessment will be discussed: the influence of different materials and object colours on the recorded measurements of a TLS. The effects on the geometric accuracy as well as on the simultaneously acquired intensity values are the topics of our investigations. A TRIMBLE GX scanner was used for several test series. The study of different effects refer to materials commonly used at building façades, i.e. 
grey scaled and coloured sheets, various species of wood, a metal plate, plasters of different particle size, light-transmissive slides and surfaces of different conditions of wetness. The tests concerning a grey wedge show a dependence on the brightness where the mean square error (MSE) decrease from black to white, and therefore, confirm previous results of other research groups. Similar results had been obtained with coloured sheets. In this context an important result is that the accuracy of measurements at night-time has proved to be much better than at day time. While different species of wood and different conditions of wetness have no significant effect on the range accuracy the study of a metal plate delivers MSE values considerably higher than the accuracy of the scanner, if the angle of incidence is approximately orthogonal. Also light-transmissive slides cause enormous MSE values. It can be concluded that high precision measurements should be carried out at night-time and preferable on bright surfaces without specular characteristics.", "title": "" }, { "docid": "neg:1840097_15", "text": "Recently the RoboCup@Work league emerged in the world's largest robotics competition, intended for competitors wishing to compete in the field of mobile robotics for manipulation tasks in industrial environments. This competition consists of several tasks with one reflected in this work (Basic Navigation Test). This project involves the simulation in Virtual Robot Experimentation Platform (V-REP) of the behavior of a KUKA youBot. The goal is to verify that the robots can navigate in their environment, in a standalone mode, in a robust and secure way. To achieve the proposed objectives, it was necessary to create a program in Lua and test it in simulation. This involved the study of robot kinematics and mechanics, Simultaneous Localization And Mapping (SLAM) and perception from sensors. In this work is introduced an algorithm developed for a KUKA youBot platform to perform the SLAM while reaching for the goal position, which works according to the requirements of this competition BNT. This algorithm also minimizes the errors in the built map and in the path travelled by the robot.", "title": "" }, { "docid": "neg:1840097_16", "text": "Technological advances in genomics and imaging have led to an explosion of molecular and cellular profiling data from large numbers of samples. This rapid increase in biological data dimension and acquisition rate is challenging conventional analysis strategies. Modern machine learning methods, such as deep learning, promise to leverage very large data sets for finding hidden structure within them, and for making accurate predictions. In this review, we discuss applications of this new breed of analysis approaches in regulatory genomics and cellular imaging. We provide background of what deep learning is, and the settings in which it can be successfully applied to derive biological insights. In addition to presenting specific applications and providing tips for practical use, we also highlight possible pitfalls and limitations to guide computational biologists when and how to make the most use of this new technology.", "title": "" }, { "docid": "neg:1840097_17", "text": "The increasing dependence on information networks for business operations has focused managerial attention on managing risks posed by failure of these networks. 
In this paper, we develop models to assess the risk of failure on the availability of an information network due to attacks that exploit software vulnerabilities. Software vulnerabilities arise from software installed on the nodes of the network. When the same software stack is installed on multiple nodes on the network, software vulnerabilities are shared among them. These shared vulnerabilities can result in correlated failure of multiple nodes resulting in longer repair times and greater loss of availability of the network. Considering positive network effects (e.g., compatibility) alone without taking the risks of correlated failure and the resulting downtime into account would lead to overinvestment in homogeneous software deployment. Exploiting characteristics unique to information networks, we present a queuing model that allows us to quantify downtime loss faced by a firm as a function of (1) investment in security technologies to avert attacks, (2) software diversification to limit the risk of correlated failure under attacks, and (3) investment in IT resources to repair failures due to attacks. The novelty of this method is that we endogenize the failure distribution and the node correlation distribution, and show how the diversification strategy and other security measures/investments may impact these two distributions, which in turn determine the security loss faced by the firm. We analyze and discuss the effectiveness of diversification strategy under different operating conditions and in the presence of changing vulnerabilities. We also take into account the benefits and costs of a diversification strategy. Our analysis provides conditions under which diversification strategy is advantageous.", "title": "" }, { "docid": "neg:1840097_18", "text": "Streptococcus milleri was isolated from the active lesions of three patients with perineal hidradenitis suppurativa. In each patient, elimination of this organism by appropriate antibiotic therapy was accompanied by marked clinical improvement.", "title": "" }, { "docid": "neg:1840097_19", "text": "In many real applications, graph data is subject to uncertainties due to incompleteness and imprecision of data. Mining such uncertain graph data is semantically different from and computationally more challenging than mining conventional exact graph data. This paper investigates the problem of mining uncertain graph data and especially focuses on mining frequent subgraph patterns on an uncertain graph database. A novel model of uncertain graphs is presented, and the frequent subgraph pattern mining problem is formalized by introducing a new measure, called expected support. This problem is proved to be NP-hard. An approximate mining algorithm is proposed to find a set of approximately frequent subgraph patterns by allowing an error tolerance on expected supports of discovered subgraph patterns. The algorithm uses efficient methods to determine whether a subgraph pattern can be output or not and a new pruning method to reduce the complexity of examining subgraph patterns. Analytical and experimental results show that the algorithm is very efficient, accurate, and scalable for large uncertain graph databases. To the best of our knowledge, this paper is the first one to investigate the problem of mining frequent subgraph patterns from uncertain graph data.", "title": "" } ]
1840098
A Vision-Based Counting and Recognition System for Flying Insects in Intelligent Agriculture
[ { "docid": "pos:1840098_0", "text": "We describe two new color indexing techniques. The rst one is a more robust version of the commonly used color histogram indexing. In the index we store the cumulative color histograms. The L 1-, L 2-, or L 1-distance between two cumulative color histograms can be used to deene a similarity measure of these two color distributions. We show that while this method produces only slightly better results than color histogram methods, it is more robust with respect to the quantization parameter of the histograms. The second technique is an example of a new approach to color indexing. Instead of storing the complete color distributions, the index contains only their dominant features. We implement this approach by storing the rst three moments of each color channel of an image in the index, i.e., for a HSV image we store only 9 oating point numbers per image. The similarity function which is used for the retrieval is a weighted sum of the absolute diierences between corresponding moments. Our tests clearly demonstrate that a retrieval based on this technique produces better results and runs faster than the histogram-based methods.", "title": "" } ]
[ { "docid": "neg:1840098_0", "text": "20140530 is provided in screen-viewable form for personal use only by members of MIT CogNet. Unauthorized use or dissemination of this information is expressly forbidden. If you have any questions about this material, please contact cognetadmin@cognet.mit.edu.", "title": "" }, { "docid": "neg:1840098_1", "text": "An electronic scanning antenna (ESA) that uses a beam former, such as a Rotman lens, has the ability to form multiple beams for shared-aperture applications. This characteristic makes the antenna suitable for integration into systems exploiting the multi-function radio frequency (MFRF) concept, meeting the needs for a future combat system (FCS) RF sensor. An antenna which electronically scans 45/spl deg/ in azimuth has been built and successfully tested at ARL to demonstrate this multiple-beam, shared-aperture approach at K/sub a/ band. Subsequent efforts are focused on reducing the component size and weight while extending the scanning ability of the antenna to a full hemisphere with both azimuth and elevation scanning. Primary emphasis has been on the beamformer, a Rotman lens or similar device, and the switches used to select the beams. Approaches described include replacing the cavity Rotman lens used in the prototype MFRF system with a dielectrically loaded Rotman lens having a waveguide-fed cavity, a microstrip-fed parallel plate, or a surface-wave configuration in order to reduce the overall size. The paper discusses the challenges and progress in the development of Rotman lens beam formers to support such an antenna.", "title": "" }, { "docid": "neg:1840098_2", "text": "Recent advances in semantic epistemolo-gies and flexible symmetries offer a viable alternative to the lookaside buffer. Here, we verify the analysis of systems. Though such a hypothesis is never an appropriate purpose, it mostly conflicts with the need to provide model checking to scholars. We show that though link-level acknowledge-99] can be made electronic, game-theoretic, and virtual, model checking and architecture can agree to solve this question.", "title": "" }, { "docid": "neg:1840098_3", "text": "This paper studies deep network architectures to address the problem of video classification. A multi-stream framework is proposed to fully utilize the rich multimodal information in videos. Specifically, we first train three Convolutional Neural Networks to model spatial, short-term motion and audio clues respectively. Long Short Term Memory networks are then adopted to explore long-term temporal dynamics. With the outputs of the individual streams, we propose a simple and effective fusion method to generate the final predictions, where the optimal fusion weights are learned adaptively for each class, and the learning process is regularized by automatically estimated class relationships. Our contributions are two-fold. First, the proposed multi-stream framework is able to exploit multimodal features that are more comprehensive than those previously attempted. Second, we demonstrate that the adaptive fusion method using the class relationship as a regularizer outperforms traditional alternatives that estimate the weights in a “free” fashion. 
Our framework produces significantly better results than the state of the arts on two popular benchmarks, 92.2% on UCF-101 (without using audio) and 84.9% on Columbia Consumer Videos.", "title": "" }, { "docid": "neg:1840098_4", "text": "We present a new off-line electronic cash system based on a problem, called the representation problem, of which little use has been made in literature thus far. Our system is the first to be based entirely on discrete logarithms. Using the representation problem as a basic concept, some techniques are introduced that enable us to construct protocols for withdrawal and payment that do not use the cut and choose methodology of earlier systems. As a consequence, our cash system is much more efficient in both computation and communication complexity than previously proposed systems. Another important aspect of our system concerns its provability. Contrary to previously proposed systems, its correctness can be mathematically proven to a very great extent. Specifically, if we make one plausible assumption concerning a single hash-function, the ability to break the system seems to imply that one can break the Diffie-Hellman problem. Our system offers a number of extensions that are hard to achieve in previously known systems. In our opinion the most interesting of these is that the entire cash system (including all the extensions) can be incorporated straightforwardly in a setting based on wallets with observers, which has the important advantage that double-spending can be prevented in the first place, rather than detecting the identity of a double-spender after the fact. In particular, it can be incorporated even under the most stringent requirements conceivable about the privacy of the user, which seems to be impossible to do with previously proposed systems. Another benefit of our system is that framing attempts by a bank have negligible probability of success (independent of computing power) by a simple mechanism from within the system, which is something that previous solutions lack entirely. Furthermore, the basic cash system can be extended to checks, multi-show cash and divisibility, while retaining its computational efficiency. Although in this paper we only make use of the representation problem in groups of prime order, similar intractable problems hold in RSA-groups (with computational equivalence to factoring and computing RSA-roots). We discuss how one can use these problems to construct an efficient cash system with security related to factoring or computation of RSA-roots, in an analogous way to the discrete log based system. Finally, we discuss a decision problem (the decision variant of the Diffie-Hellman problem) that is strongly related to undeniable signatures, which to our knowledge has never been stated in literature and of which we do not know whether it is in BPP. A proof of its status would be of interest to discrete log based cryptography in general. Using the representation problem, we show in the appendix how to batch the confirmation protocol of undeniable signatures such that polynomially many undeniable signatures can be verified in four moves. AMS Subject Classification (1991): 94A60 CR Subject Classification (1991): D.4.6", "title": "" }, { "docid": "neg:1840098_5", "text": "A four-layer transmitarray operating at 30 GHz is designed using a dual-resonant double square ring as the unit cell element.
The two resonances of the double ring are used to increase the per-layer phase variation while maintaining a wide transmission magnitude bandwidth of the unit cell. The design procedure for both the single-layer unit cell and the cascaded connection of four layers is described and it leads to a 50% increase in the -1 dB gain bandwidth over that of previous transmitarrays. Results of a 7.5% -1 dB gain bandwidth and 47% radiation efficiency are reported.", "title": "" }, { "docid": "neg:1840098_6", "text": "In recent years, much research has been conducted on image super-resolution (SR). To the best of our knowledge, however, few SR methods were concerned with compressed images. The SR of compressed images is a challenging task due to the complicated compression artifacts, while many images suffer from them in practice. The intuitive solution for this difficult task is to decouple it into two sequential but independent subproblems, i.e., compression artifacts reduction (CAR) and SR. Nevertheless, some useful details may be removed in CAR stage, which is contrary to the goal of SR and makes the SR stage more challenging. In this paper, an end-to-end trainable deep convolutional neural network is designed to perform SR on compressed images (CISRDCNN), which reduces compression artifacts and improves image resolution jointly. Experiments on compressed images produced by JPEG (we take the JPEG as an example in this paper) demonstrate that the proposed CISRDCNN yields state-of-the-art SR performance on commonly used test images and imagesets. The results of CISRDCNN on real low quality web images are also very impressive, with obvious quality enhancement. Further, we explore the application of the proposed SR method in low bit-rate image coding, leading to better rate-distortion performance than JPEG.", "title": "" }, { "docid": "neg:1840098_7", "text": "We expose and explore technical and trust issues that arise in acquiring forensic evidence from infrastructure-as-aservice cloud computing and analyze some strategies for addressing these challenges. First, we create a model to show the layers of trust required in the cloud. Second, we present the overarching context for a cloud forensic exam and analyze choices available to an examiner. Third, we provide for the first time an evaluation of popular forensic acquisition tools including Guidance EnCase and AccesData Forensic Toolkit, and show that they can successfully return volatile and non-volatile data from the cloud. We explain, however, that with those techniques judge and jury must accept a great deal of trust in the authenticity and integrity of the data from many layers of the cloud model. In addition, we explore four other solutions for acquisition—Trusted Platform Modules, the management plane, forensics as a service, and legal solutions, which assume less trust but require more cooperation from the cloud service provider. Our work lays a foundation for future development of new acquisition methods for the cloud that will be trustworthy and forensically sound. Our work also helps forensic examiners, law enforcement, and the court evaluate confidence in evidence from the cloud.", "title": "" }, { "docid": "neg:1840098_8", "text": "Iterative refinement reduces the roundoff errors in the computed solution to a system of linear equations. Only one step requires higher precision arithmetic. 
If sufficiently high precision is used, the final result is shown to be very accurate.", "title": "" }, { "docid": "neg:1840098_9", "text": "Weakly supervised semantic segmentation has been a subject of increased interest due to the scarcity of fully annotated images. We introduce a new approach for solving weakly supervised semantic segmentation with deep Convolutional Neural Networks (CNNs). The method introduces a novel layer which applies simplex projection on the output of a neural network using area constraints of class objects. The proposed method is general and can be seamlessly integrated into any CNN architecture. Moreover, the projection layer allows strongly supervised models to be adapted to weakly supervised models effortlessly by substituting ground truth labels. Our experiments have shown that applying such an operation on the output of a CNN improves the accuracy of semantic segmentation in a weakly supervised setting with image-level labels.", "title": "" }, { "docid": "neg:1840098_10", "text": "The methods proposed recently for specializing word embeddings according to a particular perspective generally rely on external knowledge. In this article, we propose Pseudofit, a new method for specializing word embeddings according to semantic similarity without any external knowledge. Pseudofit exploits the notion of pseudo-sense for building several representations for each word and uses these representations for making the initial embeddings more generic. We illustrate the interest of Pseudofit for acquiring synonyms and study several variants of Pseudofit according to this perspective.", "title": "" }, { "docid": "neg:1840098_11", "text": "OBJECTIVES\nTo investigate the prevalence, location, size and course of the anastomosis between the dental branch of the posterior superior alveolar artery (PSAA), known as alveolar antral artery (AAA), and the infraorbital artery (IOA).\n\n\nMATERIAL AND METHODS\nThe first part of the study was performed on 30 maxillary sinuses deriving from 15 human cadaver heads. In order to visualize such anastomosis, the vascular network afferent to the sinus was injected with liquid latex mixed with green India ink through the external carotid artery. The second part of the study consisted of 100 CT scans from patients scheduled for sinus lift surgery.\n\n\nRESULTS\nAn anastomosis between the AAA and the IOA was found by dissection in the context of the sinus anterolateral wall in 100% of cases, while a well-defined bony canal was detected radiographically in 94 out of 200 sinuses (47% of cases). The mean vertical distance from the lowest point of this bony canal to the alveolar crest was 11.25 ± 2.99 mm (SD) in maxillae examined by CT. The canal diameter was <1 mm in 55.3% of cases, 1-2 mm in 40.4% of cases and 2-3 mm in 4.3% of cases. In 100% of cases, the AAA was found to be partially intra-osseous, that is between the Schneiderian membrane and the lateral bony wall of the sinus, in the area selected for sinus antrostomy.\n\n\nCONCLUSIONS\nA sound knowledge of the maxillary sinus vascular anatomy and its careful analysis by CT scan is essential to prevent complications during surgical interventions involving this region.", "title": "" }, { "docid": "neg:1840098_12", "text": "Text summarization and sentiment classification, in NLP, are two main tasks implemented on text analysis, focusing on extracting the major idea of a text at different levels. 
Based on the characteristics of both, sentiment classification can be regarded as a more abstractive summarization task. According to the scheme, a Self-Attentive Hierarchical model for jointly improving text Summarization and Sentiment Classification (SAHSSC) is proposed in this paper. This model jointly performs abstractive text summarization and sentiment classification within a hierarchical end-to-end neural framework, in which the sentiment classification layer on top of the summarization layer predicts the sentiment label in the light of the text and the generated summary. Furthermore, a self-attention layer is also proposed in the hierarchical framework, which is the bridge that connects the summarization layer and the sentiment classification layer and aims at capturing emotional information at text-level as well as summary-level. The proposed model can generate a more relevant summary and lead to a more accurate summary-aware sentiment prediction. Experimental results evaluated on SNAP amazon online review datasets show that our model outperforms the state-of-the-art baselines on both abstractive text summarization and sentiment classification by a considerable margin.", "title": "" }, { "docid": "neg:1840098_13", "text": "In information retrieval, pseudo-relevance feedback (PRF) refers to a strategy for updating the query model using the top retrieved documents. PRF has been proven to be highly effective in improving the retrieval performance. In this paper, we look at the PRF task as a recommendation problem: the goal is to recommend a number of terms for a given query along with weights, such that the final weights of terms in the updated query model better reflect the terms' contributions in the query. To do so, we propose RFMF, a PRF framework based on matrix factorization which is a state-of-the-art technique in collaborative recommender systems. Our purpose is to predict the weight of terms that have not appeared in the query and matrix factorization techniques are used to predict these weights. In RFMF, we first create a matrix whose elements are computed using a weight function that shows how much a term discriminates the query or the top retrieved documents from the collection. Then, we re-estimate the created matrix using a matrix factorization technique. Finally, the query model is updated using the re-estimated matrix. RFMF is a general framework that can be employed with any retrieval model. In this paper, we implement this framework for two widely used document retrieval frameworks: language modeling and the vector space model. Extensive experiments over several TREC collections demonstrate that the RFMF framework significantly outperforms competitive baselines. These results indicate the potential of using other recommendation techniques in this task.", "title": "" }, { "docid": "neg:1840098_14", "text": "Network representations of systems from various scientific and societal domains are neither completely random nor fully regular, but instead appear to contain recurring structural building blocks [1]. These features tend to be shared by networks belonging to the same broad class, such as the class of social networks or the class of biological networks. At a finer scale of classification within each such class, networks describing more similar systems tend to have more similar features. 
This occurs presumably because networks representing similar purposes or constructions would be expected to be generated by a shared set of domain specific mechanisms, and it should therefore be possible to classify these networks into categories based on their features at various structural levels. Here we describe and demonstrate a new, hybrid approach that combines manual selection of features of potential interest with existing automated classification methods. In particular, selecting well-known and well-studied features that have been used throughout social network analysis and network science [2, 3] and then classifying with methods such as random forests [4] that are of special utility in the presence of feature collinearity, we find that we achieve higher accuracy, in shorter computation time, with greater interpretability of the network classification results. Past work in the area of network classification has primarily focused on distinguishing networks from different categories using two different broad classes of approaches. In the first approach, network classification is carried out by examining certain specific structural features and investigating whether networks belonging to the same category are similar across one or more dimensions as defined by these features [5, 6, 7, 8]. In other words, in this approach the investigator manually chooses the structural characteristics of interest and more or less manually (informally) determines the regions of the feature space that correspond to different classes. These methods are scalable to large networks and yield results that are easily interpreted in terms of the characteristics of interest, but in practice they tend to lead to suboptimal classification accuracy. In the second approach, network classification is done by using very flexible machine learning classifiers that, when presented with a network as an input, classify its category or class as an output. To somewhat oversimplify, the first approach relies on manual feature specification followed by manual selection of a classification system, whereas the second approach is its opposite, relying on automated feature detection followed by automated classification. While …", "title": "" }, { "docid": "neg:1840098_15", "text": "Bayesian networks (BNs) provide a means for representing, displaying, and making available in a usable form the knowledge of experts in a given field. In this paper, we look at the performance of an expert constructed BN compared with other machine learning (ML) techniques for predicting the outcome (win, lose, or draw) of matches played by Tottenham Hotspur Football Club. The period under study was 1995–1997 – the expert BN was constructed at the start of that period, based almost exclusively on subjective judgement. Our objective was to determine retrospectively the comparative accuracy of the expert BN compared to some alternative ML models that were built using data from the two-year period. The additional ML techniques considered were: MC4, a decision tree learner; Naive Bayesian learner; Data Driven Bayesian (a BN whose structure and node probability tables are learnt entirely from data); and a K-nearest neighbour learner. The results show that the expert BN is generally superior to the other techniques for this domain in predictive accuracy. The results are even more impressive for BNs given that, in a number of key respects, the study assumptions place them at a disadvantage.
For example, we have assumed that the BN prediction is ‘incorrect’ if a BN predicts more than one outcome as equally most likely (whereas, in fact, such a prediction would prove valuable to somebody who could place an ‘each way’ bet on the outcome). Although the expert BN has now long been irrelevant (since it contains variables relating to key players who have retired or left the club) the results here tend to confirm the excellent potential of BNs when they are built by a reliable domain expert. The ability to provide accurate predictions without requiring much learning data are an obvious bonus in any domain where data are scarce. Moreover, the BN was relatively simple for the expert to build and its structure could be used again in this and similar types of problems. © 2006 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840098_16", "text": "We present an approach to combining distributional semantic representations induced from text corpora with manually constructed lexical-semantic networks. While both kinds of semantic resources are available with high lexical coverage, our aligned resource combines the domain specificity and availability of contextual information from distributional models with the conciseness and high quality of manually crafted lexical networks. We start with a distributional representation of induced senses of vocabulary terms, which are accompanied with rich context information given by related lexical items. We then automatically disambiguate such representations to obtain a full-fledged proto-conceptualization, i.e. a typed graph of induced word senses. In a final step, this proto-conceptualization is aligned to a lexical ontology, resulting in a hybrid aligned resource. Moreover, unmapped induced senses are associated with a semantic type in order to connect them to the core resource. Manual evaluations against ground-truth judgments for different stages of our method as well as an extrinsic evaluation on a knowledge-based Word Sense Disambiguation benchmark all indicate the high quality of the new hybrid resource. Additionally, we show the benefits of enriching top-down lexical knowledge resources with bottom-up distributional information from text for addressing high-end knowledge acquisition tasks such as cleaning hypernym graphs and learning taxonomies from scratch.", "title": "" }, { "docid": "neg:1840098_17", "text": "Web Intelligence is a direction for scientific research that explores practical applications of Artificial Intelligence to the next generation of Web-empowered systems. In this paper, we present a Web-based intelligent tutoring system for computer programming. The decision making process conducted in our intelligent system is guided by Bayesian networks, which are a formal framework for uncertainty management in Artificial Intelligence based on probability theory. Whereas many tutoring systems are static HTML Web pages of a class textbook or lecture notes, our intelligent system can help a student navigate through the online course materials, recommend learning goals, and generate appropriate reading sequences.", "title": "" }, { "docid": "neg:1840098_18", "text": "Regression test case selection techniques attempt to increase the testing effectiveness based on the measurement capabilities, such as cost, coverage, and fault detection. This systematic literature review presents state-of-the-art research in effective regression test case selection techniques.
We examined 47 empirical studies published between 2007 and 2015. The selected studies are categorized according to the selection procedure, empirical study design, and adequacy criteria with respect to their effectiveness measurement capability and methods used to measure the validity of these results.\n The results showed that mining and learning-based regression test case selection was reported in 39% of the studies, unit level testing was reported in 18% of the studies, and object-oriented environment (Java) was used in 26% of the studies. Structural faults, the most common target, was used in 55% of the studies. Overall, only 39% of the studies conducted followed experimental guidelines and are reproducible.\n There are 7 different cost measures, 13 different coverage types, and 5 fault-detection metrics reported in these studies. It is also observed that 70% of the studies being analyzed used cost as the effectiveness measure compared to 31% that used fault-detection capability and 16% that used coverage.", "title": "" } ]
1840099
Prevalence and Predictors of Video Game Addiction: A Study Based on a National Representative Sample of Gamers
[ { "docid": "pos:1840099_0", "text": "This study assessed how problem video game playing (PVP) varies with game type, or \"genre,\" among adult video gamers. Participants (n=3,380) were adults (18+) who reported playing video games for 1 hour or more during the past week and completed a nationally representative online survey. The survey asked about characteristics of video game use, including titles played in the past year and patterns of (problematic) use. Participants self-reported the extent to which characteristics of PVP (e.g., playing longer than intended) described their game play. Five percent of our sample reported moderate to extreme problems. PVP was concentrated among persons who reported playing first-person shooter, action adventure, role-playing, and gambling games most during the past year. The identification of a subset of game types most associated with problem use suggests new directions for research into the specific design elements and reward mechanics of \"addictive\" video games and those populations at greatest risk of PVP with the ultimate goal of better understanding, preventing, and treating this contemporary mental health problem.", "title": "" } ]
[ { "docid": "neg:1840099_0", "text": "Software Testing plays a important role in Software development because it can minimize the development cost. We Propose a Technique for Test Sequence Generation using UML Model Sequence Diagram.UML models give a lot of information that should not be ignored in testing. In This paper main features extract from Sequence Diagram after that we can write the Java Source code for that Features According to ModelJunit Library. ModelJUnit is a extended library of JUnit Library. By using that Source code we can Generate Test Case Automatic and Test Coverage. This paper describes a systematic Test Case Generation Technique performed on model based testing (MBT) approaches By Using Sequence Diagram.", "title": "" }, { "docid": "neg:1840099_1", "text": "Various lines of evidence indicate that men generally experience greater sexual arousal (SA) to erotic stimuli than women. Yet, little is known regarding the neurobiological processes underlying such a gender difference. To investigate this issue, functional magnetic resonance imaging was used to compare the neural correlates of SA in 20 male and 20 female subjects. Brain activity was measured while male and female subjects were viewing erotic film excerpts. Results showed that the level of perceived SA was significantly higher in male than in female subjects. When compared to viewing emotionally neutral film excerpts, viewing erotic film excerpts was associated, for both genders, with bilateral blood oxygen level dependent (BOLD) signal increases in the anterior cingulate, medial prefrontal, orbitofrontal, insular, and occipitotemporal cortices, as well as in the amygdala and the ventral striatum. Only for the group of male subjects was there evidence of a significant activation of the thalamus and hypothalamus, a sexually dimorphic area of the brain known to play a pivotal role in physiological arousal and sexual behavior. When directly compared between genders, hypothalamic activation was found to be significantly greater in male subjects. Furthermore, for male subjects only, the magnitude of hypothalamic activation was positively correlated with reported levels of SA. These findings reveal the existence of similarities and dissimilarities in the way the brain of both genders responds to erotic stimuli. They further suggest that the greater SA generally experienced by men, when viewing erotica, may be related to the functional gender difference found here with respect to the hypothalamus.", "title": "" }, { "docid": "neg:1840099_2", "text": "OBJECTIVES\nThe goal of this survey is to discuss the impact of the growing availability of electronic health record (EHR) data on the evolving field of Clinical Research Informatics (CRI), which is the union of biomedical research and informatics.\n\n\nRESULTS\nMajor challenges for the use of EHR-derived data for research include the lack of standard methods for ensuring that data quality, completeness, and provenance are sufficient to assess the appropriateness of its use for research. Areas that need continued emphasis include methods for integrating data from heterogeneous sources, guidelines (including explicit phenotype definitions) for using these data in both pragmatic clinical trials and observational investigations, strong data governance to better understand and control quality of enterprise data, and promotion of national standards for representing and using clinical data.\n\n\nCONCLUSIONS\nThe use of EHR data has become a priority in CRI. 
Awareness of underlying clinical data collection processes will be essential in order to leverage these data for clinical research and patient care, and will require multi-disciplinary teams representing clinical research, informatics, and healthcare operations. Considerations for the use of EHR data provide a starting point for practical applications and a CRI research agenda, which will be facilitated by CRI's key role in the infrastructure of a learning healthcare system.", "title": "" }, { "docid": "neg:1840099_3", "text": "Three-dimensional (3D) kinematic models are widely-used in videobased figure tracking. We show that these models can suffer from singularities when motion is directed along the viewing axis of a single camera. The single camera case is important because it arises in many interesting applications, such as motion capture from movie footage, video surveillance, and vision-based user-interfaces. We describe a novel two-dimensional scaled prismatic model (SPM) for figure registration. In contrast to 3D kinematic models, the SPM has fewer singularity problems and does not require detailed knowledge of the 3D kinematics. We fully characterize the singularities in the SPM and demonstrate tracking through singularities using synthetic and real examples. We demonstrate the application of our model to motion capture from movies. Fred Astaire is tracked in a clip from the film “Shall We Dance”. We also present the use of monocular hand tracking in a 3D user-interface. These results demonstrate the benefits of the SPM in tracking with a single source of video. KEY WORDS—AUTHOR: PLEASE PROVIDE", "title": "" }, { "docid": "neg:1840099_4", "text": "Risk assessment is a systematic process for integrating professional judgments about relevant risk factors, their relative significance and probable adverse conditions and/or events leading to identification of auditable activities (IIA, 1995, SIAS No. 9). Internal auditors utilize risk measures to allocate critical audit resources to compliance, operational, or financial activities within the organization (Colbert, 1995). In information rich environments, risk assessment involves recognizing patterns in the data, such as complex data anomalies and discrepancies, that perhaps conceal one or more error or hazard conditions (e.g. Coakley and Brown, 1996; Bedard and Biggs, 1991; Libby, 1985). This research investigates whether neural networks can help enhance auditors’ risk assessments. Neural networks, an emerging artificial intelligence technology, are a powerful non-linear optimization and pattern recognition tool (Haykin, 1994; Bishop, 1995). Several successful, real-world business neural network application decision aids have already been built (Burger and Traver, 1996). Neural network modeling may prove invaluable in directing internal auditor attention to those aspects of financial, operating, and compliance data most informative of high-risk audit areas, thus enhancing audit efficiency and effectiveness. This paper defines risk in an internal auditing context, describes contemporary approaches to performing risk assessments, provides an overview of the backpropagation neural network architecture, outlines the methodology adopted for conducting this research project including a Delphi study and comparison with statistical approaches, and presents preliminary results, which indicate that internal auditors could benefit from using neural network technology for assessing risk. 
Copyright  1999 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "neg:1840099_5", "text": "It is difficult to train a personalized task-oriented dialogue system because the data collected from each individual is often insufficient. Personalized dialogue systems trained on a small dataset can overfit and make it difficult to adapt to different user needs. One way to solve this problem is to consider a collection of multiple users’ data as a source domain and an individual user’s data as a target domain, and to perform a transfer learning from the source to the target domain. By following this idea, we propose the “PETAL” (PErsonalized Task-oriented diALogue), a transfer learning framework based on POMDP to learn a personalized dialogue system. The system first learns common dialogue knowledge from the source domain and then adapts this knowledge to the target user. This framework can avoid the negative transfer problem by considering differences between source and target users. The policy in the personalized POMDP can learn to choose different actions appropriately for different users. Experimental results on a real-world coffee-shopping data and simulation data show that our personalized dialogue system can choose different optimal actions for different users, and thus effectively improve the dialogue quality under the personalized setting.", "title": "" }, { "docid": "neg:1840099_6", "text": "Educational games and intelligent tutoring systems (ITS) both support learning by doing, although often in different ways. The current classroom experiment compared a popular commercial game for equation solving, DragonBox and a research-based ITS, Lynnette with respect to desirable educational outcomes. The 190 participating 7th and 8th grade students were randomly assigned to work with either system for 5 class periods. We measured out-of-system transfer of learning with a paper and pencil pre- and post-test of students’ equation-solving skill. We measured enjoyment and accuracy of self-assessment with a questionnaire. The students who used DragonBox solved many more problems and enjoyed the experience more, but the students who used Lynnette performed significantly better on the post-test. Our analysis of the design features of both systems suggests possible explanations and spurs ideas for how the strengths of the two systems might be combined. The study shows that intuitions about what works, educationally, can be fallible. Therefore, there is no substitute for rigorous empirical evaluation of educational technologies.", "title": "" }, { "docid": "neg:1840099_7", "text": "We systematically reviewed school-based skills building behavioural interventions for the prevention of sexually transmitted infections. References were sought from 15 electronic resources, bibliographies of systematic reviews/included studies and experts. Two authors independently extracted data and quality-assessed studies. Fifteen randomized controlled trials (RCTs), conducted in the United States, Africa or Europe, met the inclusion criteria. They were heterogeneous in terms of intervention length, content, intensity and providers. Data from 12 RCTs passed quality assessment criteria and provided evidence of positive changes in non-behavioural outcomes (e.g. knowledge and self-efficacy). Intervention effects on behavioural outcomes, such as condom use, were generally limited and did not demonstrate a negative impact (e.g. earlier sexual initiation). 
Beneficial effect on at least one, but never all behavioural outcomes assessed was reported by about half the studies, but this was sometimes limited to a participant subgroup. Sexual health education for young people is important as it increases knowledge upon which to make decisions about sexual behaviour. However, a number of factors may limit intervention impact on behavioural outcomes. Further research could draw on one of the more effective studies reviewed and could explore the effectiveness of 'booster' sessions as young people move from adolescence to young adulthood.", "title": "" }, { "docid": "neg:1840099_8", "text": "In this paper, we present a system that automatically extracts the pros and cons from online reviews. Although many approaches have been developed for extracting opinions from text, our focus here is on extracting the reasons of the opinions, which may themselves be in the form of either fact or opinion. Leveraging online review sites with author-generated pros and cons, we propose a system for aligning the pros and cons to their sentences in review texts. A maximum entropy model is then trained on the resulting labeled set to subsequently extract pros and cons from online review sites that do not explicitly provide them. Our experimental results show that our resulting system identifies pros and cons with 66% precision and 76% recall.", "title": "" }, { "docid": "neg:1840099_9", "text": "BACKGROUND\nTo identify sources of race/ethnic differences related to post-traumatic stress disorder (PTSD), we compared trauma exposure, risk for PTSD among those exposed to trauma, and treatment-seeking among Whites, Blacks, Hispanics and Asians in the US general population.\n\n\nMETHOD\nData from structured diagnostic interviews with 34 653 adult respondents to the 2004-2005 wave of the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) were analysed.\n\n\nRESULTS\nThe lifetime prevalence of PTSD was highest among Blacks (8.7%), intermediate among Hispanics and Whites (7.0% and 7.4%) and lowest among Asians (4.0%). Differences in risk for trauma varied by type of event. Whites were more likely than the other groups to have any trauma, to learn of a trauma to someone close, and to learn of an unexpected death, but Blacks and Hispanics had higher risk of child maltreatment, chiefly witnessing domestic violence, and Asians, Black men, and Hispanic women had higher risk of war-related events than Whites. Among those exposed to trauma, PTSD risk was slightly higher among Blacks [adjusted odds ratio (aOR) 1.22] and lower among Asians (aOR 0.67) compared with Whites, after adjustment for characteristics of trauma exposure. All minority groups were less likely to seek treatment for PTSD than Whites (aOR range: 0.39-0.61), and fewer than half of minorities with PTSD sought treatment (range: 32.7-42.0%).\n\n\nCONCLUSIONS\nWhen PTSD affects US race/ethnic minorities, it is usually untreated. Large disparities in treatment indicate a need for investment in accessible and culturally sensitive treatment options.", "title": "" }, { "docid": "neg:1840099_10", "text": "New invention of advanced technology, enhanced capacity of storage media, maturity of information technology and popularity of social media, business intelligence and Scientific invention, produces huge amount of data which made ample set of information that is responsible for birth of new concept well known as big data. Big data analytics is the process of examining large amounts of data. 
The analysis is done on huge amount of data which is structure, semi structure and unstructured. In big data, data is generated at exponentially for reason of increase use of social media, email, document and sensor data. The growth of data has affected all fields, whether it is business sector or the world of science. In this paper, the process of system is reviewed for managing \"Big Data\" and today's activities on big data tools and techniques.", "title": "" }, { "docid": "neg:1840099_11", "text": "With the rapid growth of multimedia information, the font library has become a part of people's work life. Compared to the Western alphabet language, it is difficult to create new font due to huge quantity and complex shape. At present, most of the researches on automatic generation of fonts use traditional methods requiring a large number of rules and parameters set by experts, which are not widely adopted. This paper divides Chinese characters into strokes and generates new font strokes by fusing the styles of two existing font strokes and assembling them into new fonts. This approach can effectively improve the efficiency of font generation, reduce the costs of designers, and is able to inherit the style of existing fonts. In the process of learning to generate new fonts, the popular of deep learning areas, Generative Adversarial Nets has been used. Compared with the traditional method, it can generate higher quality fonts without well-designed and complex loss function.", "title": "" }, { "docid": "neg:1840099_12", "text": "OBJECTIVE\nThe current study aimed to compare the Philadelphia collar and an open-design cervical collar with regard to user satisfaction and cervical range of motion in asymptomatic adults.\n\n\nDESIGN\nSeventy-two healthy subjects (36 women, 36 men) aged 18 to 29 yrs were recruited for this study. Neck movements, including active flexion, extension, right/left lateral flexion, and right/left axial rotation, were assessed in each subject under three conditions--without wearing a collar and while wearing two different cervical collars--using a dual digital inclinometer. Subject satisfaction was assessed using a five-item self-administered questionnaire.\n\n\nRESULTS\nBoth Philadelphia and open-design collars significantly reduced cervical motions (P < 0.05). Compared with the Philadelphia collar, the open-design collar more greatly reduced cervical motions in three planes and the differences were statistically significant except for limiting flexion. Satisfaction scores for Philadelphia and open-design collars were 15.89 (3.87) and 19.94 (3.11), respectively.\n\n\nCONCLUSION\nBased on the data of the 72 subjects presented in this study, the open-design collar adequately immobilized the cervical spine as a semirigid collar and was considered cosmetically acceptable, at least for subjects aged younger than 30 yrs.", "title": "" }, { "docid": "neg:1840099_13", "text": "This study explores how customer relationship management (CRM) systems support customer knowledge creation processes [48], including socialization, externalization, combination and internalization. CRM systems are categorized as collaborative, operational and analytical. An analysis of CRM applications in three organizations reveals that analytical systems strongly support the combination process. Collaborative systems provide the greatest support for externalization.
Operational systems facilitate socialization with customers, while collaborative systems are used for socialization within an organization. Collaborative and analytical systems both support the internalization process by providing learning opportunities. Three-way interactions among CRM systems, types of customer knowledge, and knowledge creation processes are explored. 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840099_14", "text": "This paper presents an automatic method to track soccer players in soccer video recorded from a single camera where the occurrence of pan-tilt-zoom can take place. The automatic object tracking is intended to support texture extraction in a free viewpoint video authoring application for soccer video. To ensure that the identity of the tracked object can be correctly obtained, background segmentation is performed and automatically removes commercial billboards whenever it overlaps with the soccer player. Next, object tracking is performed by an attribute matching algorithm for all objects in the temporal domain to find and maintain the correlation of the detected objects. The attribute matching process finds the best match between two objects in different frames according to their pre-determined attributes: position, size, dominant color and motion information. Utilizing these attributes, the experimental results show that the tracking process can handle occlusion problems such as occlusion involving more than three objects and occluded objects with similar color and moving direction, as well as correctly identify objects in the presence of camera movements. key words: free viewpoint, attribute matching, automatic object tracking, soccer video", "title": "" }, { "docid": "neg:1840099_15", "text": "Antigen-presenting, major histocompatibility complex (MHC) class II-rich dendritic cells are known to arise from bone marrow. However, marrow lacks mature dendritic cells, and substantial numbers of proliferating less-mature cells have yet to be identified. The methodology for inducing dendritic cell growth that was recently described for mouse blood now has been modified to MHC class II-negative precursors in marrow. A key step is to remove the majority of nonadherent, newly formed granulocytes by gentle washes during the first 2-4 d of culture. This leaves behind proliferating clusters that are loosely attached to a more firmly adherent \"stroma.\" At days 4-6 the clusters can be dislodged, isolated by 1-g sedimentation, and upon reculture, large numbers of dendritic cells are released. The latter are readily identified on the basis of their distinct cell shape, ultrastructure, and repertoire of antigens, as detected with a panel of monoclonal antibodies. The dendritic cells express high levels of MHC class II products and act as powerful accessory cells for initiating the mixed leukocyte reaction. Neither the clusters nor mature dendritic cells are generated if macrophage colony-stimulating factor rather than granulocyte/macrophage colony-stimulating factor (GM-CSF) is applied. Therefore, GM-CSF generates all three lineages of myeloid cells (granulocytes, macrophages, and dendritic cells). Since > 5 x 10(6) dendritic cells develop in 1 wk from precursors within the large hind limb bones of a single animal, marrow progenitors can act as a major source of dendritic cells. 
This feature should prove useful for future molecular and clinical studies of this otherwise trace cell type.", "title": "" }, { "docid": "neg:1840099_16", "text": "• Develop and implement an internally consistent set of goals and functional policies (this is, a solution to the agency problem) • These internally consistent set of goals and policies aligns the firm’s strengths and weaknesses with external (industry) opportunities and threats (SWOT) in a dynamic balance • The firm’s strategy has to be concerned with the exploitation of its “distinctive competences” (early reference to RBV)", "title": "" }, { "docid": "neg:1840099_17", "text": "The energy consumption problem in the mobile industry has become crucial. For the sustainable growth of the mobile industry, energy efficiency (EE) of wireless systems has to be significantly improved. Plenty of efforts have been invested in achieving green wireless communications. This article provides an overview of network energy saving studies currently conducted in the 3GPP LTE standard body. The aim is to gain a better understanding of energy consumption and identify key EE research problems in wireless access networks. Classifying network energy saving technologies into the time, frequency, and spatial domains, the main solutions in each domain are described briefly. As presently the attention is mainly focused on solutions involving a single radio base station, we believe network solutions involving multiple networks/systems will be the most promising technologies toward green wireless access networks.", "title": "" }, { "docid": "neg:1840099_18", "text": "We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only \"virtually\" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.", "title": "" }, { "docid": "neg:1840099_19", "text": "In this paper, we propose a new framework named Data Augmentation for Domain-Invariant Learning (DADIL). In the field of manufacturing, labeling sensor data as normal or abnormal is helpful for improving productivity and avoiding problems. In practice, however, the status of equipment may change due to changes in maintenance and settings (referred to as a “domain change”), which makes it difficult to collect sufficient homogeneous data. Therefore, it is important to develop a discriminative model that can use a limited number of data samples. Moreover, real data might contain noise that could have a negative impact. We focus on the following aspect: The difficulties of a domain change are also due to the limited data. 
Although the number of data samples in each domain is low, we make use of data augmentation which is a promising way to mitigate the influence of noise and enhance the performance of discriminative models. In our data augmentation method, we generate “pseudo data” by combining the data for each label regardless of the domain and extract a domain-invariant representation for classification. We experimentally show that this representation is effective for obtaining the label precisely using real datasets.", "title": "" } ]
1840100
Manuka: A Batch-Shading Architecture for Spectral Path Tracing in Movie Production
[ { "docid": "pos:1840100_0", "text": "Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.", "title": "" } ]
[ { "docid": "neg:1840100_0", "text": "Algorithms and decision making based on Big Data have become pervasive in all aspects of our daily lives lives (offline and online), as they have become essential tools in personal finance, health care, hiring, housing, education, and policies. It is therefore of societal and ethical importance to ask whether these algorithms can be discriminative on grounds such as gender, ethnicity, or health status. It turns out that the answer is positive: for instance, recent studies in the context of online advertising show that ads for high-income jobs are presented to men much more often than to women [Datta et al., 2015]; and ads for arrest records are significantly more likely to show up on searches for distinctively black names [Sweeney, 2013]. This algorithmic bias exists even when there is no discrimination intention in the developer of the algorithm. Sometimes it may be inherent to the data sources used (software making decisions based on data can reflect, or even amplify, the results of historical discrimination), but even when the sensitive attributes have been suppressed from the input, a well trained machine learning algorithm may still discriminate on the basis of such sensitive attributes because of correlations existing in the data. These considerations call for the development of data mining systems which are discrimination-conscious by-design. This is a novel and challenging research area for the data mining community.\n The aim of this tutorial is to survey algorithmic bias, presenting its most common variants, with an emphasis on the algorithmic techniques and key ideas developed to derive efficient solutions. The tutorial covers two main complementary approaches: algorithms for discrimination discovery and discrimination prevention by means of fairness-aware data mining. We conclude by summarizing promising paths for future research.", "title": "" }, { "docid": "neg:1840100_1", "text": "The Danish and Swedish male top football divisions were studied prospectively from January to June 2001. Exposure to football and injury incidence, severity and distribution were compared between the countries. Swedish players had greater exposure to training (171 vs. 123 h per season, P<0.001), whereas exposure to matches did not differ between the countries. There was a higher risk for injury during training in Denmark than in Sweden (11.8 vs. 6.0 per 1000 h, P<0.01), whereas for match play there was no difference (28.2 vs. 26.2 per 1000 h). The risk for incurring a major injury (absence from football more than 4 weeks) was greater in Denmark (1.8 vs. 0.7 per 1000 h, P = 0.002). The distribution of injuries according to type and location was similar in both countries. Of all injuries in Denmark and Sweden, overuse injury accounted for 39% and 38% (NS), and re-injury for 30% and 24% (P = 0.032), respectively. The greater training exposure and the long pre-season period in Sweden may explain some of the reported differences.", "title": "" }, { "docid": "neg:1840100_2", "text": "The purpose of this paper is to explore applications of blockchain technology related to the 4th Industrial Revolution (Industry 4.0) and to present an example where blockchain is employed to facilitate machine-to-machine (M2M) interactions and establish a M2M electricity market in the context of the chemical industry. The presented scenario includes two electricity producers and one electricity consumer trading with each other over a blockchain. 
The producers publish exchange offers of energy (in kWh) for currency (in USD) in a data stream. The consumer reads the offers, analyses them and attempts to satisfy its energy demand at a minimum cost. When an offer is accepted it is executed as an atomic exchange (multiple simultaneous transactions). Additionally, this paper describes and discusses the research and application landscape of blockchain technology in relation to the Industry 4.0. It concludes that this technology has significant under-researched potential to support and enhance the efficiency gains of the revolution and identifies areas for future research. Producer 2 • Issue energy • Post purchase offers (as atomic transactions) Consumer • Look through the posted offers • Choose cheapest and satisfy its own demand Blockchain Stream Published offers are visible here Offer sent", "title": "" }, { "docid": "neg:1840100_3", "text": "Embedded systems are ubiquitous in society and can contain information that could be used in criminal cases for example in a serious road traffic accident where the car management systems could provide vital forensic information concerning the engine speed etc. A critical review of a number of methods and procedures for the analysis of embedded systems were compared against a ‘standard’ methodology for use in a Forensic Computing Investigation. A Unified Forensic Methodology (UFM) has been developed that is forensically sound and capable of dealing with the analysis of a wide variety of Embedded Systems.", "title": "" }, { "docid": "neg:1840100_4", "text": "We propose a novel method capable of retrieving clips from untrimmed videos based on natural language queries. This cross-modal retrieval task plays a key role in visual-semantic understanding, and requires localizing clips in time and computing their similarity to the query sentence. Current methods generate sentence and video embeddings and then compare them using a late fusion approach, but this ignores the word order in queries and prevents more fine-grained comparisons. Motivated by the need for fine-grained multi-modal feature fusion, we propose a novel early fusion embedding approach that combines video and language information at the word level. Furthermore, we use the inverse task of dense video captioning as a side-task to improve the learned embedding. Our full model combines these components with an efficient proposal pipeline that performs accurate localization of potential video clips. We present a comprehensive experimental validation on two large-scale text-to-clip datasets (Charades-STA and DiDeMo) and attain state-ofthe-art retrieval results with our model.", "title": "" }, { "docid": "neg:1840100_5", "text": "Additional contents in web pages, such as navigation panels, advertisements, copyrights and disclaimer notices, are typically not related to the main subject and may hamper the performance of Web data mining. They are traditionally taken as noises and need to be removed properly. To achieve this, two intuitive and crucial kinds of information—the textual information and the visual information of web pages—is considered in this paper. Accordingly, Text Density and Visual Importance are defined for the Document Object Model (DOM) nodes of a web page. Furthermore, a content extraction method with these measured values is proposed. It is a fast, accurate and general method for extracting content from diverse web pages. And with the employment of DOM nodes, the original structure of the web page can be preserved. 
Evaluated with the CleanEval benchmark and with randomly selected pages from well-known Web sites, where various web domains and styles are tested, the effect of the method is demonstrated. The average F1-scores with our method were 8.7 % higher than the best scores among several alternative methods.", "title": "" }, { "docid": "neg:1840100_6", "text": "Nowadays, many methods have been applied for data transmission of MWD system. Magnetic induction is one of the alternative technique. In this paper, detailed discussion on magnetic induction communication system is provided. The optimal coil configuration is obtained by theoretical analysis and software simulations. Based on this coil arrangement, communication characteristics of path loss and bit error rate are derived.", "title": "" }, { "docid": "neg:1840100_7", "text": "Process-driven spreadsheet queuing simulation is a better vehicle for understanding queue behavior than queuing theory or dedicated simulation software. Spreadsheet queuing simulation has many pedagogical benefits in a business school end-user modeling course, including developing students' intuition , giving them experience with active modeling skills, and providing access to tools. Spreadsheet queuing simulations are surprisingly easy to program, even for queues with balking and reneging. The ease of prototyping in spreadsheets invites thoughtless design, so careful spreadsheet programming practice is important. Spreadsheet queuing simulation is inferior to dedicated simulation software for analyzing queues but is more likely to be available to managers and students. Q ueuing theory has always been a staple in survey courses on management science. Although it is a powerful tool for computing certain steady-state performance measures, queuing theory is a poor vehicle for teaching students about what transpires in queues. Process-driven spreadsheet queuing simulation is a much better vehicle. Although Evans and Olson [1998, p. 170] state that \" a serious limitation of spreadsheets for waiting-line models is that it is not possible to include behavior such as balking \" and Liberatore and Ny-dick [forthcoming] indicate that a limitation of spreadsheet simulation is the in", "title": "" }, { "docid": "neg:1840100_8", "text": "This article investigates the vulnerabilities of Supervisory Control and Data Acquisition (SCADA) systems which monitor and control the modern day irrigation canal systems. This type of monitoring and control infrastructure is also common for many other water distribution systems. We present a linearized shallow water partial differential equation (PDE) system that can model water flow in a network of canal pools which are equipped with lateral offtakes for water withdrawal and are connected by automated gates. The knowledge of the system dynamics enables us to develop a deception attack scheme based on switching the PDE parameters and proportional (P) boundary control actions, to withdraw water from the pools through offtakes. We briefly discuss the limits on detectability of such attacks. We use a known formulation based on low frequency approximation of the PDE model and an associated proportional integral (PI) controller, to create a stealthy deception scheme capable of compromising the performance of the closed-loop system. We test the proposed attack scheme in simulation, using a shallow water solver; and show that the attack is indeed realizable in practice by implementing it on a physical canal in Southern France: the Gignac canal. 
A successful field experiment shows that the attack scheme enables us to steal water stealthily from the canal until the end of the attack.", "title": "" }, { "docid": "neg:1840100_9", "text": "This paper advocates the exploration of the full state of recorded real-time strategy (RTS) games, by human or robotic players, to discover how to reason about tactics and strategy. We present a dataset of StarCraft games encompassing the most of the games’ state (not only player’s orders). We explain one of the possible usages of this dataset by clustering armies on their compositions. This reduction of armies compositions to mixtures of Gaussian allow for strategic reasoning at the level of the components. We evaluated this clustering method by predicting the outcomes of battles based on armies compositions’ mixtures components.", "title": "" }, { "docid": "neg:1840100_10", "text": "Removing undesired reflections from a photo taken in front of a glass is of great importance for enhancing the efficiency of visual computing systems. Various approaches have been proposed and shown to be visually plausible on small datasets collected by their authors. A quantitative comparison of existing approaches using the same dataset has never been conducted due to the lack of suitable benchmark data with ground truth. This paper presents the first captured Single-image Reflection Removal dataset ‘SIR2’ with 40 controlled and 100 wild scenes, ground truth of background and reflection. For each controlled scene, we further provide ten sets of images under varying aperture settings and glass thicknesses. We perform quantitative and visual quality comparisons for four state-of-the-art single-image reflection removal algorithms using four error metrics. Open problems for improving reflection removal algorithms are discussed at the end.", "title": "" }, { "docid": "neg:1840100_11", "text": "Purpose-This study attempts to investigate (1) the effect of meditation experience on employees’ self-directed learning (SDL) readiness and organizational innovative (OI) ability as well as organizational performance (OP), and (2) the relationships among SDL, OI, and OP. Design/methodology/approach-This study conducts an empirical study of 15 technological companies (n = 412) in Taiwan, utilizing the collected survey data to test the relationships among the three dimensions. Findings-Results show that: (1) The employees’ meditation experience significantly and positively influenced employees’ SDL readiness, companies’ OI capability and OP; (2) The study found that SDL has a direct and significant impact on OI; and OI has direct and significant influences on OP. Research limitation/implications-The generalization of the present study is constrained by (1) the existence of possible biases of the participants, (2) the variations of length, type and form of meditation demonstrated by the employees in these high tech companies, and (3) the fact that local data collection in Taiwan may present different cultural characteristics which may be quite different from those in other areas or countries. Managerial implications are presented at the end of the work. Practical implications-The findings indicate that SDL can only impact organizational innovation through employees “openness to a challenge”, “inquisitive nature”, self-understanding and acceptance of responsibility for learning. 
Such finding implies better organizational innovative capability under such conditions, thus organizations may encourage employees to take risks or accept new opportunities through various incentives, such as monetary rewards or public recognitions. More specifically, the present study discovers that while administration innovation is the most important element influencing an organization’s financial performance, market innovation is the key component in an organization’s market performance. Social implications-The present study discovers that meditation experience positively", "title": "" }, { "docid": "neg:1840100_12", "text": "We present our approach for developing a laboratory information management system (LIMS) software by combining Björners software triptych methodology (from domain models via requirements to software) with Arlow and Neustadt archetypes and archetype patterns based initiative. The fundamental hypothesis is that through this Archetypes Based Development (ABD) approach to domains, requirements and software, it is possible to improve the software development process as well as to develop more dependable software. We use ADB in developing LIMS software for the Clinical and Biomedical Proteomics Group (CBPG), University of Leeds.", "title": "" }, { "docid": "neg:1840100_13", "text": "Exposing the weaknesses of neural models is crucial for improving their performance and robustness in real-world applications. One common approach is to examine how input perturbations affect the output. Our analysis takes this to an extreme on natural language processing tasks by removing as many words as possible from the input without changing the model prediction. For question answering and natural language inference, this often reduces the inputs to just one or two words, while model confidence remains largely unchanged. This is an undesireable behavior: the model gets the Right Answer for the Wrong Reason (RAWR). We introduce a simple training technique that mitigates this problem while maintaining performance on regular examples.", "title": "" }, { "docid": "neg:1840100_14", "text": "Object detection methods fall into two categories, i.e., two-stage and single-stage detectors. The former is characterized by high detection accuracy while the latter usually has considerable inference speed. Hence, it is imperative to fuse their metrics for a better accuracy vs. speed trade-off. To this end, we propose a dual refinement network (DRN) to boost the performance of the single-stage detector. Inheriting from the advantages of two-stage approaches (i.e., two-step regression and accurate features for detection), anchor refinement and feature offset refinement are conducted in anchor-offset detection, where the detection head is comprised of deformable convolutions. Moreover, to leverage contextual information for describing objects, we design a multi-deformable head, in which multiple detection paths with different receptive field sizes devote themselves to detecting objects. Extensive experiments on PASCAL VOC and ImageNet VID datasets are conducted, and we achieve the state-of-the-art results and a better accuracy vs. speed trade-off, i.e., 81.4% mAP vs. 42.3 FPS on VOC2007 test set. Codes will be publicly available.", "title": "" }, { "docid": "neg:1840100_15", "text": "The principle of control signal amplification is found in all actuation systems, from engineered devices through to the operation of biological muscles. 
However, current engineering approaches require the use of hard and bulky external switches or valves, incompatible with both the properties of emerging soft artificial muscle technology and those of the bioinspired robotic systems they enable. To address this deficiency a biomimetic molecular-level approach is developed that employs light, with its excellent spatial and temporal control properties, to actuate soft, pH-responsive hydrogel artificial muscles. Although this actuation is triggered by light, it is largely powered by the resulting excitation and runaway chemical reaction of a light-sensitive acid autocatalytic solution in which the actuator is immersed. This process produces actuation strains of up to 45% and a three-fold chemical amplification of the controlling light-trigger, realising a new strategy for the creation of highly functional soft actuating systems.", "title": "" }, { "docid": "neg:1840100_16", "text": "We present a hierarchical control approach that can be used to fulfill autonomous flight, including vertical takeoff, landing, hovering, transition, and level flight, of a quadrotor tail-sitter vertical takeoff and landing unmanned aerial vehicle (VTOL UAV). A unified attitude controller, together with a moment allocation scheme between elevons and motor differential thrust, is developed for all flight modes. A comparison study via real flight tests is performed to verify the effectiveness of using elevons in addition to motor differential thrust. With the well-designed switch scheme proposed in this paper, the aircraft can transit between different flight modes with negligible altitude drop or gain. Intensive flight tests have been performed to verify the effectiveness of the proposed control approach in both manual and fully autonomous flight mode.", "title": "" }, { "docid": "neg:1840100_17", "text": "We use computational techniques to extract a large number of different features from the narrative speech of individuals with primary progressive aphasia (PPA). We examine several different types of features, including part-of-speech, complexity, context-free grammar, fluency, psycholinguistic, vocabulary richness, and acoustic, and discuss the circumstances under which they can be extracted. We consider the task of training a machine learning classifier to determine whether a participant is a control, or has the fluent or nonfluent variant of PPA. We first evaluate the individual feature sets on their classification accuracy, then perform an ablation study to determine the optimal combination of feature sets. Finally, we rank the features in four practical scenarios: given audio data only, given unsegmented transcripts only, given segmented transcripts only, and given both audio and segmented transcripts. We find that psycholinguistic features are highly discriminative in most cases, and that acoustic, context-free grammar, and part-of-speech features can also be important in some circumstances.", "title": "" }, { "docid": "neg:1840100_18", "text": "We model a degraded image as an original image that has been subject to linear frequency distortion and additive noise injection. Since the psychovisual effects of frequency distortion and noise injection are independent, we decouple these two sources of degradation and measure their effect on the human visual system. We develop a distortion measure (DM) of the effect of frequency distortion, and a noise quality measure (NQM) of the effect of additive noise. 
The NQM, which is based on Peli's (1990) contrast pyramid, takes into account the following: 1) variation in contrast sensitivity with distance, image dimensions, and spatial frequency; 2) variation in the local luminance mean; 3) contrast interaction between spatial frequencies; 4) contrast masking effects. For additive noise, we demonstrate that the nonlinear NQM is a better measure of visual quality than peak signal-to noise ratio (PSNR) and linear quality measures. We compute the DM in three steps. First, we find the frequency distortion in the degraded image. Second, we compute the deviation of this frequency distortion from an allpass response of unity gain (no distortion). Finally, we weight the deviation by a model of the frequency response of the human visual system and integrate over the visible frequencies. We demonstrate how to decouple distortion and additive noise degradation in a practical image restoration system.", "title": "" }, { "docid": "neg:1840100_19", "text": "Life often presents us with situations in which it is important to assess the “true” qualities of a person or object, but in which some factor(s) might have affected (or might yet affect) our initial perceptions in an undesired way. For example, in the Reginald Denny case following the 1993 Los Angeles riots, jurors were asked to determine the guilt or innocence of two African-American defendants who were charged with violently assaulting a Caucasion truck driver. Some of the jurors in this case might have been likely to realize that in their culture many of the popular media portrayals of African-Americans are violent in nature. Yet, these jurors ideally would not want those portrayals to influence their perceptions of the particular defendants in the case. In fact, the justice system is based on the assumption that such portrayals will not influence jury verdicts. In our work on bias correction, we have been struck by the variety of potentially biasing factors that can be identified-including situational influences such as media, social norms, and general culture, and personal influences such as transient mood states, motives (e.g., to manage impressions or agree with liked others), and salient beliefs-and we have been impressed by the apparent ubiquity of correction phenomena (which appear to span many areas of psychological inquiry). Yet, systematic investigations of bias correction are in their early stages. Although various researchers have discussed the notion of effortful cognitive processes overcoming initial (sometimes “automatic”) biases in a variety of settings (e.g., Brewer, 1988; Chaiken, Liberman, & Eagly, 1989; Devine, 1989; Kruglanski & Freund, 1983; Neuberg & Fiske, 1987; Petty & Cacioppo, 1986), little attention has been given, until recently, to the specific processes by which biases are overcome when effort is targeted toward “correction of bias.” That is, when", "title": "" } ]
1840101
Aesthetics and credibility in web site design
[ { "docid": "pos:1840101_0", "text": "An experiment was conducted to test the relationships between users' perceptions of a computerized system's beauty and usability. The experiment used a computerized application as a surrogate for an Automated Teller Machine (ATM). Perceptions were elicited before and after the participants used the system. Pre-experimental measures indicate strong correlations between system's perceived aesthetics and perceived usability. Post-experimental measures indicated that the strong correlation remained intact. A multivariate analysis of covariance revealed that the degree of system's aesthetics affected the post-use perceptions of both aesthetics and usability, whereas the degree of actual usability had no such effect. The results resemble those found by social psychologists regarding the effect of physical attractiveness on the valuation of other personality attributes. The ®ndings stress the importance of studying the aesthetic aspect of human±computer interaction (HCI) design and its relationships to other design dimensions. q 2000 Elsevier Science B.V. All rights reserved.", "title": "" } ]
[ { "docid": "neg:1840101_0", "text": "Quantum cryptography is an emerging technology in which two parties can secure network communications by applying the phenomena of quantum physics. The security of these transmissions is based on the inviolability of the laws of quantum mechanics. Quantum cryptography was born in the early seventies when Steven Wiesner wrote \"Conjugate Coding\", which took more than ten years to end this paper. The quantum cryptography relies on two important elements of quantum mechanics - the Heisenberg Uncertainty principle and the principle of photon polarization. The Heisenberg Uncertainty principle states that, it is not possible to measure the quantum state of any system without distributing that system. The principle of photon polarization states that, an eavesdropper can not copy unknown qubits i.e. unknown quantum states, due to no-cloning theorem which was first presented by Wootters and Zurek in 1982. This research paper concentrates on the theory of  quantum cryptography, and how this technology contributes to the network security. This research paper summarizes the current state of quantum cryptography, and the real–world application implementation of this technology, and finally the future direction in which the quantum cryptography is headed forwards.", "title": "" }, { "docid": "neg:1840101_1", "text": "Personality was studied as a conditioner of the effects of stressful life events on illness onset. Two groups of middle and upper level executives had comparably high degrees of stressful life events in the previous 3 years, as measured by the Holmes and Rahe Schedule of Recent Life Events. One group (n = 86) suffered high stress without falling ill, whereas the other (n = 75) reported becoming sick after their encounter with stressful life events. Illness was measured by the Wyler, Masuda, and Holmes Seriousness of Illness Survey. Discriminant function analysis, run on half of the subjects in each group and cross-validated on the remaining cases, supported the prediction that high stress/low illness executives show, by comparison with high stress/high illness executives, more hardiness, that is, have a stronger commitment to self, an attitude of vigorousness toward the environment, a sense of meaningfulness, and an internal locus of control.", "title": "" }, { "docid": "neg:1840101_2", "text": "More and more medicinal mushrooms have been widely used as a miraculous herb for health promotion, especially by cancer patients. Here we report screening thirteen mushrooms for anti-cancer cell activities in eleven different cell lines. Of the herbal products tested, we found that the extract of Amauroderma rude exerted the highest activity in killing most of these cancer cell lines. Amauroderma rude is a fungus belonging to the Ganodermataceae family. The Amauroderma genus contains approximately 30 species widespread throughout the tropical areas. Since the biological function of Amauroderma rude is unknown, we examined its anti-cancer effect on breast carcinoma cell lines. We compared the anti-cancer activity of Amauroderma rude and Ganoderma lucidum, the most well-known medicinal mushrooms with anti-cancer activity and found that Amauroderma rude had significantly higher activity in killing cancer cells than Ganoderma lucidum. We then examined the effect of Amauroderma rude on breast cancer cells and found that at low concentrations, Amauroderma rude could inhibit cancer cell survival and induce apoptosis. 
Treated cancer cells also formed fewer and smaller colonies than the untreated cells. When nude mice bearing tumors were injected with Amauroderma rude extract, the tumors grew at a slower rate than the control. Examination of these tumors revealed extensive cell death, decreased proliferation rate as stained by Ki67, and increased apoptosis as stained by TUNEL. Suppression of c-myc expression appeared to be associated with these effects. Taken together, Amauroderma rude represented a powerful medicinal mushroom with anti-cancer activities.", "title": "" }, { "docid": "neg:1840101_3", "text": "Communication at millimeter wave (mmWave) frequencies is defining a new era of wireless communication. The mmWave band offers higher bandwidth communication channels versus those presently used in commercial wireless systems. The applications of mmWave are immense: wireless local and personal area networks in the unlicensed band, 5G cellular systems, not to mention vehicular area networks, ad hoc networks, and wearables. Signal processing is critical for enabling the next generation of mmWave communication. Due to the use of large antenna arrays at the transmitter and receiver, combined with radio frequency and mixed signal power constraints, new multiple-input multiple-output (MIMO) communication signal processing techniques are needed. Because of the wide bandwidths, low complexity transceiver algorithms become important. There are opportunities to exploit techniques like compressed sensing for channel estimation and beamforming. This article provides an overview of signal processing challenges in mmWave wireless systems, with an emphasis on those faced by using MIMO communication at higher carrier frequencies.", "title": "" }, { "docid": "neg:1840101_4", "text": "Single-trial classification of Event-Related Potentials (ERPs) is needed in many real-world brain-computer interface (BCI) applications. However, because of individual differences, the classifier needs to be calibrated by using some labeled subject specific training samples, which may be inconvenient to obtain. In this paper we propose a weighted adaptation regularization (wAR) approach for offline BCI calibration, which uses data from other subjects to reduce the amount of labeled data required in offline single-trial classification of ERPs. Our proposed model explicitly handles class-imbalance problems which are common in many real-world BCI applications. War can improve the classification performance, given the same number of labeled subject-specific training samples, or, equivalently, it can reduce the number of labeled subject-specific training samples, given a desired classification accuracy. To reduce the computational cost of wAR, we also propose a source domain selection (SDS) approach. Our experiments show that wARSDS can achieve comparable performance with wAR but is much less computationally intensive. We expect wARSDS to find broad applications in offline BCI calibration.", "title": "" }, { "docid": "neg:1840101_5", "text": "S Music and the Moving Image Conference May 27th 29th, 2016 1. Loewe Friday, May 27, 2016, 9:30AM – 11:00AM MUSIC EDITING: PROCESS TO PRACTICE—BRIDGING THE VARIOUS PERSPECTIVES IN FILMMAKING AND STORY-TELLING Nancy Allen, Film Music Editor While the technical aspects of music editing and film-making continue to evolve, the fundamental nature of story-telling remains the same. 
Ideally, the role of the music editor exists at an intersection between the Composer, Director, and Picture Editor, where important creative decisions are made. This privileged position allows the Music Editor to better explore how to tell the story through music and bring the evolving vision of the film into tighter focus. 2. Loewe Friday, May 27, 2016, 11:30 AM – 1:00 PM GREAT EXPECTATIONS? THE CHANGING ROLE OF AUDIOVISUAL INCONGRUENCE IN CONTEMPORARY MULTIMEDIA Dave Ireland, University of Leeds Film-music moments that are perceived to be incongruent, misfitting or inappropriate have often been described as highly memorable. These claims can in part be explained by the separate processing of sonic and visual information that can occur when incongruent combinations subvert expectations of an audiovisual pairing in which the constituent components share a greater number of properties. Drawing upon a sequence from the TV sitcom Modern Family in which images of violent destruction are juxtaposed with performance of tranquil classical music, this paper highlights the increasing prevalence of such uses of audiovisual difference in contemporary multimedia. Indeed, such principles even now underlie a form of Internet meme entitled ‘Whilst I play unfitting music’. Such examples serve to emphasize the evolving functions of incongruence, emphasizing the ways in which such types of audiovisual pairing now also serve as a marker of authorial style and a source of intertextual parody. Drawing upon psychological theories of expectation and ideas from semiotics that facilitate consideration of the potential disjunction between authorial intent and perceiver response, this paper contends that such forms of incongruence should be approached from a psycho-semiotic perspective. Through consideration of the aforementioned examples, it will be demonstrated that this approach allows for: more holistic understanding of evolving expectations and attitudes towards audiovisual incongruence that may shape perceiver response; and a more nuanced mode of analyzing factors that may influence judgments of film-music fit and appropriateness. MUSICAL META-MORPHOSIS: BREAKING THE FOURTH WALL THROUGH DIEGETIC-IZING AND METACAESURA Rebecca Eaton, Texas State University In “The Fantastical Gap,” Stilwell suggests that metadiegetic music—which puts the audience “inside a character’s head”— begets such a strong spectator bond that it becomes “a kind of musical ‘direct address,’ threatening to break the fourth wall that is the screen.” While Stillwell theorizes a breaking of the fourth wall through audience over-identification, in this paper I define two means of film music transgression that potentially unsuture an audience, exposing film qua film: “diegetic-izing” and “metacaesura.” While these postmodern techniques 1) reveal film as a constructed artifact, and 2) thus render the spectator a more, not less, “troublesome viewing subject,” my analyses demonstrate that these breaches of convention still further the narrative aims of their respective films. Both Buhler and Stilwell analyze music that gradually dissolves from non-diegetic to diegetic. “Diegeticizing” unexpectedly reveals what was assumed to be nondiegetic as diegetic, subverting Gorbman’s first principle of invisibility. In parodies including Blazing Saddles and Spaceballs, this reflexive uncloaking plays for laughs. 
The Truman Show and the Hunger Games franchise skewer live soundtrack musicians and timpani—ergo, film music itself—as tools of emotional manipulation or propaganda. “Metacaesura” serves as another means of breaking the fourth wall. Metacaesura arises when non-diegetic music cuts off in media res. While diegeticizing renders film music visible, metacaesura renders it audible (if only in hindsight). In Honda’s “Responsible You,” Pleasantville, and The Truman Show, the dramatic cessation of nondiegetic music compels the audience to acknowledge the constructedness of both film and their own worlds. Partial Bibliography Brown, Tom. Breaking the Fourth Wall: Direct Address in the Cinema. Edinburgh: Edinburgh University Press, 2012. Buhler, James. “Analytical and Interpretive Approaches to Film Music (II): Interpreting Interactions of Music and Film.” In Film Music: An Anthology of Critical Essays, edited by K.J. Donnelly, 39-61. Edinburgh University Press, 2001. Buhler, James, Anahid Kassabian, David Neumeyer, and Robynn Stillwell. “Roundtable on Film Music.” Velvet Light Trap 51 (Spring 2003): 73-91. Buhler, James, Caryl Flinn, and David Neumeyer, eds. Music and Cinema. Hanover: Wesleyan/University Press of New England, 2000. Eaton, Rebecca M. Doran. “Unheard Minimalisms: The Function of the Minimalist Technique in Film Scores.” PhD diss., The University of Texas at Austin, 2008. Gorbman, Claudia. Unheard Melodies: Narrative Film Music. Bloomington: University of Indiana Press, 1987. Harries, Dan. Film Parody. London: British Film Institute, 2000. Kassabian, Anahid. Hearing Film: Tracking Identifications in Contemporary Hollywood Film Music. New York: Routledge, 2001. Neumeyer, David. “Diegetic/nondiegetic: A Theoretical Model.” Music and the Moving Image 2.1 (2009): 26–39. Stilwell, Robynn J. “The Fantastical Gap Between Diegetic and Nondiegetic.” In Beyond the Soundtrack, edited by Daniel Goldmark, Lawrence Kramer, and Richard Leppert, 184202. Berkeley: The University of California Press, 2007. REDEFINING PERSPECTIVE IN ATONEMENT: HOW MUSIC SET THE STAGE FOR MODERN MEDIA CONSUMPTION Lillie McDonough, New York University One of the most striking narrative devices in Joe Wright’s film adaptation of Atonement (2007) is in the way Dario Marianelli’s original score dissolves the boundaries between diagetic and non-diagetic music at key moments in the drama. I argue that these moments carry us into a liminal state where the viewer is simultaneously in the shoes of a first person character in the world of the film and in the shoes of a third person viewer aware of the underscore as a hallmark of the fiction of a film in the first place. This reflects the experience of Briony recalling the story, both as participant and narrator, at the metalevel of the audience. The way the score renegotiates the customary musical playing space creates a meta-narrative that resembles one of the fastest growing forms of digital media of today: videogames. At their core, video games work by placing the player in a liminal state of both a viewer who watches the story unfold and an agent who actively takes part in the story’s creation. In fact, the growing trend towards hyperrealism and virtual reality intentionally progressively erodes the boundaries between the first person agent in real the world and agent on screen in the digital world. 
Viewed through this lens, the philosophy behind the experience of Atonement’s score and sound design appears to set the stage for way our consumption of media has developed since Atonement’s release in 2007. Mainly, it foreshadows and highlights a prevalent desire to progressively blur the lines between media and life. 3. Room 303, Friday, May 27, 2016, 11:30 AM – 1:00 PM HOLLYWOOD ORCHESTRATORS AND GHOSTWRITERS OF THE 1960s AND 1970s: THE CASE OF MOACIR SANTOS Lucas Bonetti, State University of Campinas In Hollywood in the 1960s and 1970s, freelance film composers trying to break into the market saw ghostwriting as opportunities to their professional networks. Meanwhile, more renowned composers saw freelancers as means of easing their work burdens. The phenomenon was so widespread that freelancers even sometimes found themselves ghostwriting for other ghostwriters. Ghostwriting had its limitations, though: because freelancers did not receive credit, they could not grow their resumes. Moreover, their music often had to follow such strict guidelines that they were not able to showcase their own compositional voices. Being an orchestrator raised fewer questions about authorship, and orchestrators usually did not receive credit for their work. Typically, composers provided orchestrators with detailed sketches, thereby limiting their creative possibilities. This story would suggest that orchestrators were barely more than copyists—though with more intense workloads. This kind of thankless work was especially common in scoring for episodic television series of the era, where the fast pace of the industry demanded more agility and productivity. Brazilian composer Moacir Santos worked as a Hollywood ghostwriter and orchestrator starting in 1968. His experiences exemplify the difficulties of these professions during this era. In this paper I draw on an interview-based research I conducted in the Los Angeles area to show how Santos’s experiences showcase the difficulties of being a Hollywood outsider at the time. In particular, I examine testimony about racial prejudice experienced by Santos, and how misinformation about his ghostwriting activity has led to misunderstandings among scholars about his contributions. SING A SONG!: CHARITY BAILEY AND INTERRACIAL MUSIC EDUCATION ON 1950s NYC TELEVISION Melinda Russell, Carleton College Rhode Island native Charity Bailey (1904-1978) helped to define a children’s music market in print and recordings; in each instance the contents and forms she developed are still central to American children’s musical culture and practice. After study at Juilliard and Dalcroze, Bailey taught music at the Little Red School House in Greenwich Village from 1943-1954, where her students included Mary Travers and Eric Weissberg. Bailey’s focus on African, African-American, and Car", "title": "" }, { "docid": "neg:1840101_6", "text": "Recent advance of large scale similarity search involves using deeply learned representations to improve the search accuracy and use vector quantization methods to increase the search speed. However, how to learn deep representations that strongly preserve similarities between data pairs and can be accurately quantized via vector quantization remains a challenging task. Existing methods simply leverage quantization loss and similarity loss, which result in unexpectedly biased back-propagating gradients and affect the search performances. 
To this end, we propose a novel gradient snapping layer (GSL) to directly regularize the back-propagating gradient towards a neighboring codeword, the generated gradients are un-biased for reducing similarity loss and also propel the learned representations to be accurately quantized. Joint deep representation and vector quantization learning can be easily performed by alternatively optimize the quantization codebook and the deep neural network. The proposed framework is compatible with various existing vector quantization approaches. Experimental results demonstrate that the proposed framework is effective, flexible and outperforms the state-of-the-art large scale similarity search methods.", "title": "" }, { "docid": "neg:1840101_7", "text": "Automated negotiation systems with self interested agents are becoming increasingly important. One reason for this is the technology push of a growing standardized communication infrastructure (Internet, WWW, NII, EDI, KQML, FIPA, Concordia, Voyager, Odyssey, Telescript, Java, etc.) over which separately designed agents belonging to different organizations can interact in an open environment in real time and safely carry out transactions. The second reason is strong application pull for computer support for negotiation at the operative decision making level. For example, we are witnessing the advent of small transaction electronic commerce on the Internet for purchasing goods, information, and communication bandwidth. There is also an industrial trend toward virtual enterprises: dynamic alliances of small agile enterprises which together can take advantage of economies of scale when available (e.g., respond to more diverse orders than individual agents can) but do not suffer from diseconomies of scale. Multiagent technology facilitates such negotiation at the operative decision making level. This automation can save labor time of human negotiators, but in addition, other savings are possible because computational agents can be more effective at finding beneficial short term contracts than humans are in strategically and combinatorially complex settings. This chapter discusses multiagent negotiation in situations where agents may have different goals and each agent is trying to maximize its own good without concern for the global good. Such self interest naturally prevails in negotiations among independent businesses or individuals. In building computer support for negotiation in such settings, the issue of self interest has to be dealt with. In cooperative distributed problem solving, the system designer imposes an interaction protocol and a strategy a mapping from state history to action a", "title": "" }, { "docid": "neg:1840101_8", "text": "This paper discloses development and evaluation of die attach material using base metals (Cu and Sn) by three different type of composite. Mixing them into paste or sheet shape for die attach, we have confirmed that one of Sn-Cu components having IMC network near its surface has major role to provide robust interconnect especially for high temperature applications beyond 200°C after sintering.", "title": "" }, { "docid": "neg:1840101_9", "text": "In information theory, Fisher information and Shannon information (entropy) are respectively used to quantify the uncertainty associated with the distribution modeling and the uncertainty in specifying the outcome of given variables. These two quantities are complementary and are jointly applied to information behavior analysis in most cases.
The uncertainty property in information asserts a fundamental trade-off between Fisher information and Shannon information, which illuminates the relationship between the encoder and the decoder in variational auto-encoders (VAEs). In this paper, we investigate VAEs in the Fisher-Shannon plane, and demonstrate that the representation learning and the log-likelihood estimation are intrinsically related to these two information quantities. Through extensive qualitative and quantitative experiments, we provide a better comprehension of VAEs in tasks such as high-resolution reconstruction, and representation learning in the perspective of Fisher information and Shannon information. We further propose a variant of VAEs, termed as Fisher auto-encoder (FAE), for practical needs to balance Fisher information and Shannon information. Our experimental results have demonstrated its promise in improving the reconstruction accuracy and avoiding the non-informative latent code as occurred in previous works.", "title": "" }, { "docid": "neg:1840101_10", "text": "We formulate an equivalence between machine learning and the formulation of statistical data assimilation as used widely in physical and biological sciences. The correspondence is that layer number in a feedforward artificial network setting is the analog of time in the data assimilation setting. This connection has been noted in the machine learning literature. We add a perspective that expands on how methods from statistical physics and aspects of Lagrangian and Hamiltonian dynamics play a role in how networks can be trained and designed. Within the discussion of this equivalence, we show that adding more layers (making the network deeper) is analogous to adding temporal resolution in a data assimilation framework. Extending this equivalence to recurrent networks is also discussed. We explore how one can find a candidate for the global minimum of the cost functions in the machine learning context using a method from data assimilation. Calculations on simple models from both sides of the equivalence are reported. Also discussed is a framework in which the time or layer label is taken to be continuous, providing a differential equation, the Euler-Lagrange equation and its boundary conditions, as a necessary condition for a minimum of the cost function. This shows that the problem being solved is a two-point boundary value problem familiar in the discussion of variational methods. The use of continuous layers is denoted “deepest learning.” These problems respect a symplectic symmetry in continuous layer phase space. Both Lagrangian versions and Hamiltonian versions of these problems are presented. Their well-studied implementation in a discrete time/layer, while respecting the symplectic structure, is addressed. The Hamiltonian version provides a direct rationale for backpropagation as a solution method for a certain two-point boundary value problem.", "title": "" }, { "docid": "neg:1840101_11", "text": "Power electronic transformer (PET) technology is one of the promising technologies for medium/high power conversion systems. With the cutting-edge improvements in power electronics and magnetics, it is possible to substitute conventional line frequency transformer traction (LFTT) technology with the PET technology. Over the past years, research and field trial studies are conducted to explore the technical challenges associated with the operation, functionalities, and control of PET-based traction systems.
This paper aims to review the essential requirements, technical challenges, and the existing state of the art of PET traction system architectures. Finally, this paper discusses technical considerations and introduces the new research possibilities especially in the power conversion stages, PET design, and the power switching devices.", "title": "" }, { "docid": "neg:1840101_12", "text": "In recent decades, the interactive whiteboard (IWB) has become a relatively common educational tool in Western schools. The IWB is essentially a large touch screen, that enables the user to interact with digital content in ways that are not possible with an ordinary computer-projector-canvas setup. However, the unique possibilities of IWBs are rarely leveraged to enhance teaching and learning beyond the primary school level. This is particularly noticeable in high school physics. We describe how a high school physics teacher learned to use an IWB in a new way, how she planned and implemented a lesson on the topic of orbital motion of planets, and what tensions arose in the process. We used an ethnographic approach to account for the teacher’s and involved students’ perspectives throughout the process of teacher preparation, lesson planning, and the implementation of the lesson. To interpret the data, we used the conceptual framework of activity theory. We found that an entrenched culture of traditional white/blackboard use in physics instruction interferes with more technologically innovative and more student-centered instructional approaches that leverage the IWB’s unique instructional potential. Furthermore, we found that the teacher’s confidence in the mastery of the IWB plays a crucial role in the teacher’s willingness to transfer agency within the lesson to the students.", "title": "" }, { "docid": "neg:1840101_13", "text": "The work presented in this paper is targeted at the first phase of the test and measurements product life cycle, namely standardisation. During this initial phase of any product, the emphasis is on the development of standards that support new technologies while leaving the scope of implementations as open as possible. To allow the engineer to freely create and invent tools that can quickly help him simulate or emulate his ideas are paramount. Within this scope, a traffic generation system has been developed for IEC 61850 Sampled Values which will help in the evaluation of the data models, data acquisition, data fusion, data integration and data distribution between the various devices and components that use this complex set of evolving standards in Smart Grid systems.", "title": "" }, { "docid": "neg:1840101_14", "text": "Distributed generators (DGs) sometimes provide the lowest cost solution to handling low-voltage or overload problems. In conjunction with handling such problems, a DG can be placed for optimum efficiency or optimum reliability. Such optimum placements of DGs are investigated. The concept of segments, which has been applied in previous reliability studies, is used in the DG placement. The optimum locations are sought for time-varying load patterns. It is shown that the circuit reliability is a function of the loading level. The difference of DG placement between optimum efficiency and optimum reliability varies under different load conditions. Observations and recommendations concerning DG placement for optimum reliability and efficiency are provided in this paper. 
Economic considerations are also addressed.", "title": "" }, { "docid": "neg:1840101_15", "text": "We tackle the problem of multi-label classification of fashion images, learning from noisy data with minimal human supervision. We present a new dataset of full body poses, each with a set of 66 binary labels corresponding to the information about the garments worn in the image obtained in an automatic manner. As the automatically-collected labels contain significant noise, we manually correct the labels for a small subset of the data, and use these correct labels for further training and evaluation. We build upon a recent approach that both cleans the noisy labels and learns to classify, and introduce simple changes that can significantly improve the performance.", "title": "" }, { "docid": "neg:1840101_16", "text": "During present study the antibacterial activity of black pepper (Piper nigrum Linn.) and its mode of action on bacteria were done. The extracts of black pepper were evaluated for antibacterial activity by disc diffusion method. The minimum inhibitory concentration (MIC) was determined by tube dilution method and mode of action was studied on membrane leakage of UV260 and UV280 absorbing material spectrophotometrically. The diameter of the zone of inhibition against various Gram positive and Gram negative bacteria was measured. The MIC was found to be 50-500ppm. Black pepper altered the membrane permeability resulting the leakage of the UV260 and UV280 absorbing material i.e., nucleic acids and proteins into the extra cellular medium. The results indicate excellent inhibition on the growth of Gram positive bacteria like Staphylococcus aureus, followed by Bacillus cereus and Streptococcus faecalis. Among the Gram negative bacteria Pseudomonas aeruginosa was more susceptible followed by Salmonella typhi and Escherichia coli.", "title": "" }, { "docid": "neg:1840101_17", "text": "ion level. It is especially useful in the case of expert-based estimation, where it is easier for experts to embrace and estimate smaller pieces of project work. Moreover, the increased level of detail during estimation—for instance, by breaking down software products and processes—implies higher transparency of estimates. In practice, there is a good chance that the bottom estimates would be mixed below and above the actual effort. As a consequence, estimation errors at the bottom level will cancel each other out, resulting in smaller estimation error than if a top-down approach were used. This phenomenon is related to the mathematical law of large numbers. However, the more granular the individual estimates, the more time-consuming the overall estimation process becomes. In industrial practice, a top-down strategy usually provides reasonably accurate estimates at relatively low overhead and without too much technical expertise. Although bottom-up estimation usually provides more accurate estimates, it requires the estimators involved to have expertise regarding the bottom activities and related product components that they estimate directly. In principle, applying bottom-up estimation pays off when the decomposed tasks can be estimated more accurately than the whole task. For instance, a bottom-up strategy proved to provide better results when applied to high-uncertainty or complex estimation tasks, which are usually underestimated when considered as a whole. 
Furthermore, it is often easy to forget activities and/or underestimate the degree of unexpected events, which leads to underestimation of total effort. However, from the mathematical point of view (law of large numbers mentioned), dividing the project into smaller work packages provides better data for estimation and reduces overall estimation error. Experiences presented by Jørgensen (2004b) suggest that in the context of expert-based estimation, software companies should apply a bottom-up strategy unless the estimators have experience from, or access to, very similar projects. In the context of estimation based on human judgment, typical threats of individual and group estimation should be considered. Refer to Sect. 6.4 for an overview of the strengths and weaknesses of estimation based on human judgment.", "title": "" }, { "docid": "neg:1840101_18", "text": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.", "title": "" } ]
1840102
DAGER: Deep Age, Gender and Emotion Recognition Using Convolutional Neural Network
[ { "docid": "pos:1840102_0", "text": "Human age provides key demographic information. It is also considered as an important soft biometric trait for human identification or search. Compared to other pattern recognition problems (e.g., object classification, scene categorization), age estimation is much more challenging since the difference between facial images with age variations can be more subtle and the process of aging varies greatly among different individuals. In this work, we investigate deep learning techniques for age estimation based on the convolutional neural network (CNN). A new framework for age feature extraction based on the deep learning model is built. Compared to previous models based on CNN, we use feature maps obtained in different layers for our estimation work instead of using the feature obtained at the top layer. Additionally, a manifold learning algorithm is incorporated in the proposed scheme and this improves the performance significantly. Furthermore, we also evaluate different classification and regression schemes in estimating age using the deep learned aging pattern (DLA). To the best of our knowledge, this is the first time that deep learning technique is introduced and applied to solve the age estimation problem. Experimental results on two datasets show that the proposed approach is significantly better than the state-of-the-art.", "title": "" } ]
[ { "docid": "neg:1840102_0", "text": "Computer-based multimedia learning environments — consisting of pictures (such as animation) and words (such as narration) — offer a potentially powerful venue for improving student understanding. How can we use words and pictures to help people understand how scientific systems work, such as how a lightning storm develops, how the human respiratory system operates, or how a bicycle tire pump works? This paper presents a cognitive theory of multimedia learning which draws on dual coding theory, cognitive load theory, and constructivist learning theory. Based on the theory, principles of instructional design for fostering multimedia learning are derived and tested. The multiple representation principle states that it is better to present an explanation in words and pictures than solely in words. The contiguity principle is that it is better to present corresponding words and pictures simultaneously rather than separately when giving a multimedia explanation. The coherence principle is that multimedia explanations are better understood when they include few rather than many extraneous words and sounds. The modality principle is that it is better to present words as auditory narration than as visual on-screen text. The redundancy principle is that it is better to present animation and narration than to present animation, narration, and on-screen text. By beginning with a cognitive theory of how learners process multimedia information, we have been able to conduct focused research that yields some preliminary principles of instructional design for multimedia messages.  2001 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840102_1", "text": "As mentioned in the paper, the direct optimization of group assignment variables with reduced gradients yields faster convergence than optimization via softmax reparametrization. Figure 1 shows the distribution plots, which are provided by TensorFlow, of class-to-group assignments using two methods. Despite starting with lower variance, when the distribution of group assignment variables diverged to", "title": "" }, { "docid": "neg:1840102_2", "text": "Treatment of high-strength phenolic wastewater by a novel two-step method was investigated in the present study. The two-step treatment method consisted of chemical coagulation of the wastewater by metal chloride followed by further phenol reduction by resin adsorption. The present combined treatment was found to be highly efficient in removing the phenol concentration from the aqueous solution and was proved capable of lowering the initial phenol concentration from over 10,000 mg/l to below direct discharge level (1mg/l). In the experimental tests, appropriate conditions were identified for optimum treatment operation. Theoretical investigations were also performed for batch equilibrium adsorption and column adsorption of phenol by macroreticular resin. The empirical Freundlich isotherm was found to represent well the equilibrium phenol adsorption. The column model with appropriately identified model parameters could accurately predict the breakthrough times.", "title": "" }, { "docid": "neg:1840102_3", "text": "Ayahuasca is a hallucinogenic beverage that combines the action of the 5-HT2A/2C agonist N,N-dimethyltryptamine (DMT) from Psychotria viridis with the monoamine oxidase inhibitors (MAOIs) induced by beta-carbonyls from Banisteriopsis caapi. 
Previous investigations have highlighted the involvement of ayahuasca with the activation of brain regions known to be involved with episodic memory, contextual associations and emotional processing after ayahuasca ingestion. Moreover long term users show better performance in neuropsychological tests when tested in off-drug condition. This study evaluated the effects of long-term administration of ayahuasca on Morris water maze (MWM), fear conditioning and elevated plus maze (EPM) performance in rats. Behavior tests started 48h after the end of treatment. Freeze-dried ayahuasca doses of 120, 240 and 480 mg/kg were used, with water as the control. Long-term administration consisted of a daily oral dose for 30 days by gavage. The behavioral data indicated that long-term ayahuasca administration did not affect the performance of animals in MWM and EPM tasks. However the dose of 120 mg/kg increased the contextual conditioned fear response for both background and foreground fear conditioning. The tone conditioned response was not affected after long-term administration. In addition, the increase in the contextual fear response was maintained during the repeated sessions several weeks after training. Taken together, these data showed that long-term ayahuasca administration in rats can interfere with the contextual association of emotional events, which is in agreement with the fact that the beverage activates brain areas related to these processes.", "title": "" }, { "docid": "neg:1840102_4", "text": "Two clinically distinct forms of Blount disease (early-onset and late-onset), based on whether the lower-limb deformity develops before or after the age of four years, have been described. Although the etiology of Blount disease may be multifactorial, the strong association with childhood obesity suggests a mechanical basis. A comprehensive analysis of multiplanar deformities in the lower extremity reveals tibial varus, procurvatum, and internal torsion along with limb shortening. Additionally, distal femoral varus is commonly noted in the late-onset form. When a patient has early-onset disease, a realignment tibial osteotomy before the age of four years decreases the risk of recurrent deformity. Gradual correction with distraction osteogenesis is an effective means of achieving an accurate multiplanar correction, especially in patients with late-onset disease.", "title": "" }, { "docid": "neg:1840102_5", "text": "In this paper we present Sentimentor, a tool for sentiment analysis of Twitter data. Sentimentor utilises the naive Bayes Classifier to classify Tweets into positive, negative or objective sets. We present experimental evaluation of our dataset and classification results; our findings are not contradictory with existing work.", "title": "" }, { "docid": "neg:1840102_6", "text": "Enhancing the quality of image is a continuous process in image processing related research activities. For some applications it becomes essential to have best quality of image such as in forensic department, where in order to retrieve maximum possible information, image has to be enlarged in terms of size, with higher resolution and other features associated with it. Such obtained high quality images have also a concern in satellite imaging, medical science, High Definition Television (HDTV), etc. In this paper a novel approach of getting high resolution image from a single low resolution image is discussed.
The Non Sub-sampled Contourlet Transform (NSCT) based learning is used to learn the NSCT coefficients at the finer scale of the unknown high-resolution image from a dataset of high resolution images. The cost function consisting of a data fitting term and a Gabor prior term is optimized using an Iterative Back Projection (IBP). By making use of directional decomposition property of the NSCT and the Gabor filter bank with various orientations, the proposed method is capable to reconstruct an image with less edge artifacts. The validity of the proposed approach is proven through simulation on several images. RMS measures, PSNR measures and illustrations show the success of the proposed method.", "title": "" }, { "docid": "neg:1840102_7", "text": "The research challenge addressed in this paper is to devise effective techniques for identifying task-based sessions, i.e. sets of possibly non contiguous queries issued by the user of a Web Search Engine for carrying out a given task. In order to evaluate and compare different approaches, we built, by means of a manual labeling process, a ground-truth where the queries of a given query log have been grouped in tasks. Our analysis of this ground-truth shows that users tend to perform more than one task at the same time, since about 75% of the submitted queries involve a multi-tasking activity. We formally define the Task-based Session Discovery Problem (TSDP) as the problem of best approximating the manually annotated tasks, and we propose several variants of well known clustering algorithms, as well as a novel efficient heuristic algorithm, specifically tuned for solving the TSDP. These algorithms also exploit the collaborative knowledge collected by Wiktionary and Wikipedia for detecting query pairs that are not similar from a lexical content point of view, but actually semantically related. The proposed algorithms have been evaluated on the above ground-truth, and are shown to perform better than state-of-the-art approaches, because they effectively take into account the multi-tasking behavior of users.", "title": "" }, { "docid": "neg:1840102_8", "text": "3 Cooccurrence and frequency counts; 3.1 Surface cooccurrence; 3.2 Textual cooccurrence; 3.3 Syntactic cooccurrence; 3.4 Comparison", "title": "" }, { "docid": "neg:1840102_9", "text": "The Sapienza University Networking framework for underwater Simulation Emulation and real-life Testing (SUNSET) is a toolkit for the implementation and testing of protocols for underwater sensor networks. SUNSET enables a radical new way of performing experimental research on underwater communications. It allows protocol designers and implementors to easily realize their solutions and to evaluate their performance through simulation, in-lab emulation and trials at sea in a direct and transparent way, and independently of specific underwater hardware platforms. SUNSET provides a complete toolchain of predeployment and deployment time tools able to identify risks, malfunctioning and under-performing solutions before incurring the expense of going to sea. Novel underwater systems can therefore be rapidly and easily investigated.
Heterogeneous underwater communication technologies from different vendors can be used, allowing the evaluation of the impact of different combinations of hardware and software on the overall system performance. Using SUNSET, underwater devices can be reconfigured and controlled remotely in real time, using acoustic links. This allows the performance investigation of underwater systems under different settings and configurations and significantly reduces the cost and complexity of at-sea trials. This paper describes the architectural concept of SUNSET and presents some exemplary results of its use in the field. The SUNSET framework has been extensively validated during more than fifteen at-sea experimental campaigns in the past four years. Several of these have been conducted jointly with the NATO STO Centre for Maritime Research and Experimentation (CMRE) under a collaboration between the University of Rome and CMRE.", "title": "" }, { "docid": "neg:1840102_10", "text": "The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of the Digital Signature Algorithm (DSA). It was accepted in 1999 as an ANSI standard and in 2000 as IEEE and NIST standards. It was also accepted in 1998 as an ISO standard and is under consideration for inclusion in some other ISO standards. Unlike the ordinary discrete logarithm problem and the integer factorization problem, no subexponential-time algorithm is known for the elliptic curve discrete logarithm problem. For this reason, the strength-per-key-bit is substantially greater in an algorithm that uses elliptic curves. This paper describes the ANSI X9.62 ECDSA, and discusses related security, implementation, and interoperability issues.", "title": "" }, { "docid": "neg:1840102_11", "text": "We present a novel method for detecting occlusions and in-painting unknown areas of a light field photograph, based on previous work in obstruction-free photography and light field completion. An initial guess at separating the occluder from the rest of the photograph is computed by aligning backgrounds of the images and using this information to generate an occlusion mask. The masked pixels are then synthesized using a patch-based texture synthesis algorithm, with the median image as the source of each patch.", "title": "" }, { "docid": "neg:1840102_12", "text": "This paper describes the development of a human airbag system which is designed to reduce the impact force from slippage falling-down. A micro inertial measurement unit (muIMU) which is based on MEMS accelerometers and gyro sensors is developed as the motion sensing part of the system. A weightless recognition algorithm is used for real-time falling determination. With the algorithm, the microcontroller integrated with muIMU can discriminate falling-down motion from normal human motions and trigger an airbag system when a fall occurs. Our airbag system is designed to be fast response with moderate input pressure, i.e., the experimental response time is less than 0.3 second under 0.4 MPa (gage pressure). Also, we present our progress on development of the inflator and the airbags", "title": "" }, { "docid": "neg:1840102_13", "text": "Adolescence is characterized by making risky decisions. Early lesion and neuroimaging studies in adults pointed to the ventromedial prefrontal cortex and related structures as having a key role in decision-making. 
More recent studies have fractionated decision-making processes into its various components, including the representation of value, response selection (including inter-temporal choice and cognitive control), associative learning, and affective and social aspects. These different aspects of decision-making have been the focus of investigation in recent studies of the adolescent brain. Evidence points to a dissociation between the relatively slow, linear development of impulse control and response inhibition during adolescence versus the nonlinear development of the reward system, which is often hyper-responsive to rewards in adolescence. This suggests that decision-making in adolescence may be particularly modulated by emotion and social factors, for example, when adolescents are with peers or in other affective ('hot') contexts.", "title": "" }, { "docid": "neg:1840102_14", "text": "This paper develops techniques using which humans can be visually recognized. While face recognition would be one approach to this problem, we believe that it may not be always possible to see a person?s face. Our technique is complementary to face recognition, and exploits the intuition that human motion patterns and clothing colors can together encode several bits of information. Treating this information as a \"temporary fingerprint\", it may be feasible to recognize an individual with reasonable consistency, while allowing her to turn off the fingerprint at will.\n One application of visual fingerprints relates to augmented reality, in which an individual looks at other people through her camera-enabled glass (e.g., Google Glass) and views information about them. Another application is in privacy-preserving pictures ? Alice should be able to broadcast her \"temporary fingerprint\" to all cameras in the vicinity along with a privacy preference, saying \"remove me\". If a stranger?s video happens to include Alice, the device can recognize her fingerprint in the video and erase her completely. This paper develops the core visual fingerprinting engine ? InSight ? on the platform of Android smartphones and a backend server running MATLAB and OpenCV. Results from real world experiments show that 12 individuals can be discriminated with 90% accuracy using 6 seconds of video/motion observations. Video based emulation confirms scalability up to 40 users.", "title": "" }, { "docid": "neg:1840102_15", "text": "CONTEXT-AWARE ARGUMENT MINING AND ITS APPLICATIONS IN EDUCATION", "title": "" }, { "docid": "neg:1840102_16", "text": "The purpose of text clustering in information retrieval is to discover groups of semantically related documents. Accurate and comprehensible cluster descriptions (labels) let the user comprehend the collection’s content faster and are essential for various document browsing interfaces. The task of creating descriptive, sensible cluster labels is difficult—typical text clustering algorithms focus on optimizing proximity between documents inside a cluster and rely on keyword representation for describing discovered clusters. In the approach called Description Comes First (DCF) cluster labels are as important as document groups—DCF promotes machine discovery of comprehensible candidate cluster labels later used to discover related document groups. In this paper we describe an application of DCF to the k-Means algorithm, including results of experiments performed on the 20-newsgroups document collection. 
Experimental evaluation showed that DCF does not decrease the metrics used to assess the quality of document assignment and offers good cluster labels in return. The algorithm utilizes search engine’s data structures directly to scale to large document collections. Introduction Organizing unstructured collections of textual content into semantically related groups, from now on referred to as text clustering or clustering, provides unique ways of digesting large amounts of information. In the context of information retrieval and text mining, a general definition of clustering is the following: given a large set of documents, automatically discover diverse subsets of documents that share a similar topic. In typical applications input documents are first transformed into a mathematical model where each document is described by certain features. The most popular representation for text is the vector space model [Salton, 1989]. In the VSM, documents are expressed as rows in a matrix, where columns represent unique terms (features) and the intersection of a column and a row indicates the importance of a given word to the document. A model such as the VSM helps in calculation of similarity between documents (angle between document vectors) and thus facilitates application of various known (or modified) numerical clustering algorithms. While this is sufficient for many applications, problems arise when one needs to construct some representation of the discovered groups of documents—a label, a symbolic description for each cluster, something to represent the information that makes documents inside a cluster similar to each other and that would convey this information to the user. Cluster labeling problems are often present in modern text and Web mining applications with document browsing interfaces. The process of returning from the mathematical model of clusters to comprehensible, explanatory labels is difficult because text representation used for clustering rarely preserves the inflection and syntax of the original text. Clustering algorithms presented in literature usually fall back to the simplest form of cluster representation—a list of cluster’s keywords (most “central” terms in the cluster). Unfortunately, keywords are stripped from syntactical information and force the user to manually find the underlying concept which is often confusing. Motivation and Related Works The user of a retrieval system judges the clustering algorithm by what he sees in the output— clusters’ descriptions, not the final model which is usually incomprehensible for humans. The experiences with the text clustering framework Carrot (www.carrot2.org) resulted in posing a slightly different research problem (aligned with clustering but not exactly the same). We shifted the emphasis of a clustering method to providing comprehensible and accurate cluster labels in addition to discovery of document groups. We call this problem descriptive clustering: discovery of diverse groups of semantically related documents associated with a meaningful, comprehensible and compact text labels. This definition obviously leaves a great deal of freedom for interpretation because terms such as meaningful or accurate are very vague. We narrowed the set of requirements of descriptive clustering to the following ones: — comprehensibility understood as grammatical correctness (word order, inflection, agreement between words if applicable); — conciseness of labels. 
Phrases selected for a cluster label should minimize its total length (without sacrificing its comprehensibility); — transparency of the relationship between cluster label and cluster content, best explained by ability to answer questions as: “Why was this label selected for these documents?” and “Why is this document in a cluster labeled X?”. Little research has been done to address the requirements above. In the STC algorithm authors employed frequently recurring phrases as both document similarity feature and final cluster description [Zamir and Etzioni, 1999]. A follow-up work [Ferragina and Gulli, 2004] showed how to avoid certain STC limitations and use non-contiguous phrases (so-called approximate sentences). A different idea of ‘label-driven’ clustering appeared in clustering with committees algorithm [Pantel and Lin, 2002], where strongly associated terms related to unambiguous concepts were evaluated using semantic relationships from WordNet. We introduced the DCF approach in our previous work [Osiński and Weiss, 2005] and showed its feasibility using an algorithm called Lingo. Lingo used singular value decomposition of the term-document matrix to select good cluster labels among candidates extracted from the text (frequent phrases). The algorithm was designed to cluster results from Web search engines (short snippets and fragmented descriptions of original documents) and proved to provide diverse meaningful cluster labels. Lingo’s weak point is its limited scalability to full or even medium sized documents. In this", "title": "" }, { "docid": "neg:1840102_17", "text": "Multimodal semantic representation is an evolving area of research in natural language processing as well as computer vision. Combining or integrating perceptual information, such as visual features, with linguistic features is recently being actively studied. This paper presents a novel bimodal autoencoder model for multimodal representation learning: the autoencoder learns in order to enhance linguistic feature vectors by incorporating the corresponding visual features. During the runtime, owing to the trained neural network, visually enhanced multimodal representations can be achieved even for words for which direct visual-linguistic correspondences are not learned. The empirical results obtained with standard semantic relatedness tasks demonstrate that our approach is generally promising. We further investigate the potential efficacy of the enhanced word embeddings in discriminating antonyms and synonyms from vaguely related words.", "title": "" }, { "docid": "neg:1840102_18", "text": "Online learning algorithms often have to operate in the presence of concept drift (i.e., the concepts to be learned can change with time). This paper presents a new categorization for concept drift, separating drifts according to different criteria into mutually exclusive and nonheterogeneous categories. Moreover, although ensembles of learning machines have been used to learn in the presence of concept drift, there has been no deep study of why they can be helpful for that and which of their features can contribute or not for that. As diversity is one of these features, we present a diversity analysis in the presence of different types of drifts. We show that, before the drift, ensembles with less diversity obtain lower test errors. 
On the other hand, it is a good strategy to maintain highly diverse ensembles to obtain lower test errors shortly after the drift, independently of the type of drift, even though high diversity is more important for more severe drifts. Longer after the drift, high diversity becomes less important. Diversity by itself can help to reduce the initial increase in error caused by a drift, but does not provide faster recovery from drifts in the long term.", "title": "" } ]
1840103
Academic advising system using data mining method for decision making support
[ { "docid": "pos:1840103_0", "text": "This paper explores the socio-demographic variables (age, gender, ethnicity, education, work status, and disability) and study environment (course programme and course block) that may influence persistence or dropout of students at the Open Polytechnic of New Zealand. We examine to what extent these factors, i.e., enrolment data, help us in pre-identifying successful and unsuccessful students. The data stored in the Open Polytechnic student management system from 2006 to 2009, covering over 450 students who enrolled in the 71150 Information Systems course, was used to perform a quantitative analysis of study outcome. Based on data mining techniques (such as feature selection and classification trees), the most important factors for student success and a profile of the typical successful and unsuccessful students are identified. The empirical results show the following: (i) the most important factors separating successful from unsuccessful students are: ethnicity, course programme and course block; (ii) among classification tree growing methods, Classification and Regression Tree (CART) was the most successful in growing the tree with an overall percentage of correct classification of 60.5%; and (iii) both the risk estimated by the cross-validation and the gain diagram suggest that all trees, based only on enrolment data, are not quite good at separating successful from unsuccessful students. The implications of these results for academic and administrative staff are discussed.", "title": "" } ]
[ { "docid": "neg:1840103_0", "text": "In this paper, we propose a new joint dictionary learning method for example-based image super-resolution (SR), using sparse representation. The low-resolution (LR) dictionary is trained from a set of LR sample image patches. Using the sparse representation coefficients of these LR patches over the LR dictionary, the high-resolution (HR) dictionary is trained by minimizing the reconstruction error of HR sample patches. The error criterion used here is the mean square error. In this way we guarantee that the HR patches have the same sparse representation over HR dictionary as the LR patches over the LR dictionary, and at the same time, these sparse representations can well reconstruct the HR patches. Simulation results show the effectiveness of our method compared to the state-of-art SR algorithms.", "title": "" }, { "docid": "neg:1840103_1", "text": "In this paper, we present the Functional Catalogue (FunCat), a hierarchically structured, organism-independent, flexible and scalable controlled classification system enabling the functional description of proteins from any organism. FunCat has been applied for the manual annotation of prokaryotes, fungi, plants and animals. We describe how FunCat is implemented as a highly efficient and robust tool for the manual and automatic annotation of genomic sequences. Owing to its hierarchical architecture, FunCat has also proved to be useful for many subsequent downstream bioinformatic applications. This is illustrated by the analysis of large-scale experiments from various investigations in transcriptomics and proteomics, where FunCat was used to project experimental data into functional units, as 'gold standard' for functional classification methods, and also served to compare the significance of different experimental methods. Over the last decade, the FunCat has been established as a robust and stable annotation scheme that offers both, meaningful and manageable functional classification as well as ease of perception.", "title": "" }, { "docid": "neg:1840103_2", "text": "Testing of network protocols and distributed applications has become increasingly complex, as the diversity of networks and underlying technologies increase, and the adaptive behavior of applications becomes more sophisticated. In this paper, we present NIST Net, a tool to facilitate testing and experimentation with network code through emulation. NIST Net enables experimenters to model and effect arbitrary performance dynamics (packet delay, jitter, bandwidth limitations, congestion, packet loss and duplication) on live IP packets passing through a commodity Linux-based PC router. We describe the emulation capabilities of NIST Net; examine its architecture; and discuss some of the implementation challenges encountered in building such a tool to operate at very high network data rates while imposing minimal processing overhead. Calibration results are provided to quantify the fidelity and performance of NIST Net over a wide range of offered loads (up to 1 Gbps), and a diverse set of emulated performance dynamics.", "title": "" }, { "docid": "neg:1840103_3", "text": "Since the introduction of passive commercial capsule endoscopes, researchers have been pursuing methods to control and localize these devices, many utilizing magnetic fields [1, 2]. An advantage of magnetics is the ability to both actuate and localize using the same technology. 
Prior work from our group [3] developed a method to actuate screw-type magnetic capsule endoscopes in the intestines using a single rotating magnetic dipole located at any position with respect to the capsule. This paper presents a companion localization method that uses the same rotating dipole field for full 6-D pose estimation of a capsule endoscope embedded with a small permanent magnet and an array of magnetic-field sensors. Although several magnetic localization algorithms have been previously published, many are not compatible with magnetic actuation [4, 5]. Those that are require the addition of an accelerometer [6, 7], need a priori knowledge of the capsule’s orientation [7], provide only 3-D information [6], or must manipulate the position of the external magnetic source during localization [8, 9]. Kim et al. presented an iterative method for use with rotating magnetic fields, but the method contains errors [10]. Our proposed algorithm is less sensitive to data synchronization issues and sensor noise than our previous non-iterative method [11] because the data from the magnetic sensors is incorporated independently (rather than first using sensor data to estimate the field at the center of the capsule’s magnet), and the full pose is solved simultaneously (instead of position and orientation sequentially).", "title": "" }, { "docid": "neg:1840103_4", "text": "RATIONALE\nCardiac lipotoxicity, characterized by increased uptake, oxidation, and accumulation of lipid intermediates, contributes to cardiac dysfunction in obesity and diabetes mellitus. However, mechanisms linking lipid overload and mitochondrial dysfunction are incompletely understood.\n\n\nOBJECTIVE\nTo elucidate the mechanisms for mitochondrial adaptations to lipid overload in postnatal hearts in vivo.\n\n\nMETHODS AND RESULTS\nUsing a transgenic mouse model of cardiac lipotoxicity overexpressing ACSL1 (long-chain acyl-CoA synthetase 1) in cardiomyocytes, we show that modestly increased myocardial fatty acid uptake leads to mitochondrial structural remodeling with significant reduction in minimum diameter. This is associated with increased palmitoyl-carnitine oxidation and increased reactive oxygen species (ROS) generation in isolated mitochondria. Mitochondrial morphological changes and elevated ROS generation are also observed in palmitate-treated neonatal rat ventricular cardiomyocytes. Palmitate exposure to neonatal rat ventricular cardiomyocytes initially activates mitochondrial respiration, coupled with increased mitochondrial polarization and ATP synthesis. However, long-term exposure to palmitate (>8 hours) enhances ROS generation, which is accompanied by loss of the mitochondrial reticulum and a pattern suggesting increased mitochondrial fission. Mechanistically, lipid-induced changes in mitochondrial redox status increased mitochondrial fission by increased ubiquitination of AKAP121 (A-kinase anchor protein 121) leading to reduced phosphorylation of DRP1 (dynamin-related protein 1) at Ser637 and altered proteolytic processing of OPA1 (optic atrophy 1). Scavenging mitochondrial ROS restored mitochondrial morphology in vivo and in vitro.\n\n\nCONCLUSIONS\nOur results reveal a molecular mechanism by which lipid overload-induced mitochondrial ROS generation causes mitochondrial dysfunction by inducing post-translational modifications of mitochondrial proteins that regulate mitochondrial dynamics.
These findings provide a novel mechanism for mitochondrial dysfunction in lipotoxic cardiomyopathy.", "title": "" }, { "docid": "neg:1840103_5", "text": "We address the problem of making online, parallel query plans fault-tolerant: i.e., provide intra-query fault-tolerance without blocking. We develop an approach that not only achieves this goal but does so through the use of different fault-tolerance techniques at different operators within a query plan. Enabling each operator to use a different fault-tolerance strategy leads to a space of fault-tolerance plans amenable to cost-based optimization. We develop FTOpt, a cost-based fault-tolerance optimizer that automatically selects the best strategy for each operator in a query plan in a manner that minimizes the expected processing time with failures for the entire query. We implement our approach in a prototype parallel query-processing engine. Our experiments demonstrate that (1) there is no single best fault-tolerance strategy for all query plans, (2) often hybrid strategies that mix-and-match recovery techniques outperform any uniform strategy, and (3) our optimizer correctly identifies winning fault-tolerance configurations.", "title": "" }, { "docid": "neg:1840103_6", "text": "Optimistic estimates suggest that only 30-70% of waste generated in cities of developing countries is collected for disposal. As a result, uncollected waste is often disposed of into open dumps, along the streets or into water bodies. Quite often, this practice induces environmental degradation and public health risks. Notwithstanding, such practices also make waste materials readily available for itinerant waste pickers. These 'scavengers' as they are called, therefore perceive waste as a resource, for income generation. Literature suggests that Informal Sector Recycling (ISR) activity can bring other benefits such as, economic growth, litter control and resources conservation. This paper critically reviews trends in ISR activities in selected developing and transition countries. ISR often survives in very hostile social and physical environments largely because of negative Government and public attitude. Rather than being stigmatised, the sector should be recognised as an important element for achievement of sustainable waste management in developing countries. One solution to this problem could be the integration of ISR into the formal waste management system. To achieve ISR integration, this paper highlights six crucial aspects from literature: social acceptance, political will, mobilisation of cooperatives, partnerships with private enterprises, management and technical skills, as well as legal protection measures. It is important to note that not every country will have the wherewithal to achieve social inclusion and so the level of integration must be 'flexible'. In addition, the structure of the ISR should not be based on a 'universal' model but should instead take into account local contexts and conditions.", "title": "" }, { "docid": "neg:1840103_7", "text": "Recent advances in Deep Neural Networks (DNNs) have led to the development of DNN-driven autonomous cars that, using sensors like camera, LiDAR, etc., can drive without any human intervention. Most major manufacturers including Tesla, GM, Ford, BMW, and Waymo/Google are working on building and testing different types of autonomous vehicles. 
The lawmakers of several US states including California, Texas, and New York have passed new legislation to fast-track the process of testing and deployment of autonomous vehicles on their roads.\n However, despite their spectacular progress, DNNs, just like traditional software, often demonstrate incorrect or unexpected corner-case behaviors that can lead to potentially fatal collisions. Several such real-world accidents involving autonomous cars have already happened, including one which resulted in a fatality. Most existing testing techniques for DNN-driven vehicles are heavily dependent on the manual collection of test data under different driving conditions, which becomes prohibitively expensive as the number of test conditions increases.\n In this paper, we design, implement, and evaluate DeepTest, a systematic testing tool for automatically detecting erroneous behaviors of DNN-driven vehicles that can potentially lead to fatal crashes. First, our tool is designed to automatically generate test cases leveraging real-world changes in driving conditions like rain, fog, lighting conditions, etc. DeepTest systematically explores different parts of the DNN logic by generating test inputs that maximize the numbers of activated neurons. DeepTest found thousands of erroneous behaviors under different realistic driving conditions (e.g., blurring, rain, fog, etc.), many of which lead to potentially fatal crashes in three top performing DNNs in the Udacity self-driving car challenge.", "title": "" }, { "docid": "neg:1840103_8", "text": "Marfan syndrome is a connective-tissue disease inherited in an autosomal dominant manner and caused mainly by mutations in the gene FBN1. This gene encodes fibrillin-1, a glycoprotein that is the main constituent of the microfibrils of the extracellular matrix. Most mutations are unique and affect a single amino acid of the protein. Reduced or abnormal fibrillin-1 leads to tissue weakness, increased transforming growth factor β signaling, loss of cell–matrix interactions, and, finally, to the different phenotypic manifestations of Marfan syndrome. Since the description of FBN1 as the gene affected in patients with this disorder, great advances have been made in the understanding of its pathogenesis. The development of several mouse models has also been crucial to our increased understanding of this disease, which is likely to change the treatment and the prognosis of patients in the coming years. Among the many different clinical manifestations of Marfan syndrome, cardiovascular involvement deserves special consideration, owing to its impact on prognosis. However, the diagnosis of patients with Marfan syndrome should be made according to Ghent criteria and requires a comprehensive clinical assessment of multiple organ systems. Genetic testing can be useful in the diagnosis of selected cases.", "title": "" }, { "docid": "neg:1840103_9", "text": "Fractional Fourier transform (FRFT) is a generalization of the Fourier transform, rediscovered many times over the past 100 years. In this paper, we provide an overview of recent contributions pertaining to the FRFT. Specifically, the paper is geared toward signal processing practitioners by emphasizing the practical digital realizations and applications of the FRFT. It discusses three major topics. First, the manuscript relates the FRFT to other mathematical transforms. Second, it discusses various approaches for practical realizations of the FRFT. Third, we overview the practical applications of the FRFT.
From these discussions, we can clearly state that the FRFT is closely related to other mathematical transforms, such as time–frequency and linear canonical transforms. Nevertheless, we still feel that major contributions are expected in the field of digital realizations and their applications.", "title": "" }, { "docid": "neg:1840103_10", "text": "Multiple sclerosis (MS) is a chronic inflammatory demyelinating disease of the central nervous system, which is heterogeneous with respect to clinical manifestations and response to therapy. Identification of biomarkers appears desirable for an improved diagnosis of MS as well as for monitoring of disease activity and treatment response. MicroRNAs (miRNAs) are short non-coding RNAs, which have been shown to have the potential to serve as biomarkers for different human diseases, most notably cancer. Here, we analyzed the expression profiles of 866 human miRNAs. In detail, we investigated the miRNA expression in blood cells of 20 patients with relapsing-remitting MS (RRMS) and 19 healthy controls using a human miRNA microarray and the Geniom Real Time Analyzer (GRTA) platform. We identified 165 miRNAs that were significantly up- or downregulated in patients with RRMS as compared to healthy controls. The best single miRNA marker, hsa-miR-145, allowed discriminating MS from controls with a specificity of 89.5%, a sensitivity of 90.0%, and an accuracy of 89.7%. A set of 48 miRNAs that was evaluated by radial basis function kernel support vector machines and 10-fold cross validation yielded a specificity of 95%, a sensitivity of 97.6%, and an accuracy of 96.3%. While 43 of the 165 miRNAs deregulated in patients with MS have previously been related to other human diseases, the remaining 122 miRNAs are so far exclusively associated with MS. The implications of our study are twofold. The miRNA expression profiles in blood cells may serve as a biomarker for MS, and deregulation of miRNA expression may play a role in the pathogenesis of MS.", "title": "" }, { "docid": "neg:1840103_11", "text": "Cable-driven parallel robots (CDPR) are efficient manipulators able to carry heavy payloads across large workspaces. Therefore, the dynamic parameters such as the mobile platform mass and center of mass location may considerably vary. Without any adaptation, the erroneous parametric estimate results in mismatch terms added to the closed-loop system, which may decrease the robot performances. In this paper, we introduce an adaptive dual-space motion control scheme for CDPR. The proposed method aims at increasing the robot tracking performances, while keeping all the cables tensed despite uncertainties and changes in the robot dynamic parameters. Real-time experimental tests, performed on a large redundantly actuated CDPR prototype, validate the efficiency of the proposed control scheme. These results are compared to those obtained with a non-adaptive dual-space feedforward control scheme.", "title": "" }, { "docid": "neg:1840103_12", "text": "The paper presents a proposal for a practical implementation of a simple IoT gateway based on an Arduino microcontroller, dedicated to use in a home IoT environment. The authors concentrate on the performance and security aspects of the created system.
Through the performed load tests and a denial-of-service attack, the performance and capacity limits of the implemented gateway were investigated.", "title": "" }, { "docid": "neg:1840103_13", "text": "MIMO is a technology that utilizes multiple antennas at the transmitter/receiver to improve the throughput, capacity and coverage of a wireless system. Massive MIMO, where the base station is equipped with orders of magnitude more antennas, has shown over 10 times spectral efficiency increase over MIMO with simpler signal processing algorithms. Massive MIMO has the benefits of enhanced capacity, spectral and energy efficiency, and it can be built by using low-cost and low-power components. Despite its potential benefits, this paper also summarizes some challenges faced by massive MIMO such as antenna spatial correlation and mutual coupling as well as non-linear hardware impairments. These challenges encountered in massive MIMO uncover new problems that need further investigation.", "title": "" }, { "docid": "neg:1840103_14", "text": "Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. The second iteration had 42.9% of students’ grades within 5% of the staff grade, and 65.5% within 10%. On average, students assessed their work 7% higher than staff did. Students also rated peers’ work from their own country 3.6% higher than those from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4% to 9.9%.", "title": "" }, { "docid": "neg:1840103_15", "text": "In this brief, we propose a variable structure based nonlinear missile guidance/autopilot system with highly maneuverable actuators, mainly consisting of thrust vector control and divert control system, for the task of intercepting a theater ballistic missile. The aim of the present work is to achieve bounded target interception under the mentioned 5 degree-of-freedom (DOF) control such that the distance between the missile and the target will enter the range of triggering the missile's explosion. First, a 3-DOF sliding-mode guidance law of the missile considering external disturbances and zero-effort-miss (ZEM) is designed to minimize the distance between the center of the missile and that of the target. Next, a quaternion-based sliding-mode attitude controller is developed to track the attitude command while coping with variation of missile's inertia and uncertain aerodynamic force/wind gusts. The stability of the overall system and ZEM-phase convergence are analyzed thoroughly via Lyapunov stability theory.
Extensive simulation results are obtained to validate the effectiveness of the proposed integrated guidance/autopilot system by use of the 5-DOF inputs.", "title": "" }, { "docid": "neg:1840103_16", "text": "The assumption that there are innate integrative or actualizing tendencies underlying personality and social development is reexamined. Rather than viewing such processes as either nonexistent or as automatic, I argue that they are dynamic and dependent upon social-contextual supports pertaining to basic human psychological needs. To develop this viewpoint, I conceptually link the notion of integrative tendencies to specific developmental processes, namely intrinsic motivation; internalization; and emotional integration. These processes are then shown to be facilitated by conditions that fulfill psychological needs for autonomy, competence, and relatedness, and forestalled within contexts that frustrate these needs. Interactions between psychological needs and contextual supports account, in part, for the domain and situational specificity of motivation, experience, and relative integration. The meaning of psychological needs (vs. wants) is directly considered, as are the relations between concepts of integration and autonomy and those of independence, individualism, efficacy, and cognitive models of \"multiple selves.\"", "title": "" }, { "docid": "neg:1840103_17", "text": "In ancient times, people exchanged their goods and services to obtain what they needed (such as clothes and tools) from other people. This system of bartering compensated for the lack of currency. People offered goods/services and received in kind other goods/services. Now, despite the existence of multiple currencies and the progress of humanity from the Stone Age to the Byte Age, people still barter but in a different way. Mainly, people use money to pay for the goods they purchase and the services they obtain.", "title": "" }, { "docid": "neg:1840103_18", "text": "DB2 for Linux, UNIX, and Windows Version 9.1 introduces the Self-Tuning Memory Manager (STMM), which provides adaptive self tuning of both database memory heaps and cumulative database memory allocation. This technology provides state-of-the-art memory tuning combining control theory, runtime simulation modeling, cost-benefit analysis, and operating system resource analysis. In particular, the novel use of cost-benefit analysis and control theory techniques makes STMM a breakthrough technology in database memory management. The cost-benefit analysis allows STMM to tune memory between radically different memory consumers such as compiled statement cache, sort, and buffer pools. These methods allow for the fast convergence of memory settings while also providing stability in the presence of system noise. The tuning mode has been found in numerous experiments to tune memory allocation as well as expert human administrators, including OLTP, DSS, and mixed environments. We believe this is the first known use of cost-benefit analysis and control theory in database memory tuning across heterogeneous memory consumers.", "title": "" }, { "docid": "neg:1840103_19", "text": "The defect detection on manufactures is extremely important in the optimization of industrial processes; particularly, the visual inspection plays a fundamental role. The visual inspection is often carried out by a human expert. However, new technology features have made this inspection unreliable.
For this reason, many researchers have been engaged to develop automatic analysis processes of manufactures and automatic optical inspections in the industrial production of printed circuit boards. Among the defects that could arise in this industrial process, those of the solder joints are very important, because they can lead to an incorrect functioning of the board; moreover, the amount of the solder paste can give some information on the quality of the industrial process. In this paper, a neural network-based automatic optical inspection system for the diagnosis of solder joint defects on printed circuit boards assembled in surface mounting technology is presented. The diagnosis is handled as a pattern recognition problem with a neural network approach. Five types of solder joints have been classified in respect to the amount of solder paste in order to perform the diagnosis with a high recognition rate and a detailed classification able to give information on the quality of the manufacturing process. The images of the boards under test are acquired and then preprocessed to extract the region of interest for the diagnosis. Three types of feature vectors are evaluated from each region of interest, which are the images of the solder joints under test, by exploiting the properties of the wavelet transform and the geometrical characteristics of the preprocessed images. The performances of three different classifiers which are a multilayer perceptron, a linear vector quantization, and a K-nearest neighbor classifier are compared. The n-fold cross-validation has been exploited to select the best architecture for the neural classifiers, while a number of experiments have been devoted to estimating the best value of K in the K-NN. The results have proved that the MLP network fed with the GW-features has the best recognition rate. This approach allows to carry out the diagnosis burden on image processing, feature extraction, and classification algorithms, reducing the cost and the complexity of the acquisition system. In fact, the experimental results suggest that the reason for the high recognition rate in the solder joint classification is due to the proper preprocessing steps followed as well as to the information contents of the features", "title": "" } ]
1840104
Crowd Map: Accurate Reconstruction of Indoor Floor Plans from Crowdsourced Sensor-Rich Videos
[ { "docid": "pos:1840104_0", "text": "There are billions of photographs on the Internet, comprising the largest and most diverse photo collection ever assembled. How can computer vision researchers exploit this imagery? This paper explores this question from the standpoint of 3D scene modeling and visualization. We present structure-from-motion and image-based rendering algorithms that operate on hundreds of images downloaded as a result of keyword-based image search queries like “Notre Dame” or “Trevi Fountain.” This approach, which we call Photo Tourism, has enabled reconstructions of numerous well-known world sites. This paper presents these algorithms and results as a first step towards 3D modeling of the world’s well-photographed sites, cities, and landscapes from Internet imagery, and discusses key open problems and challenges for the research community.", "title": "" }, { "docid": "pos:1840104_1", "text": "While WiFi-based indoor localization is attractive, the need for a significant degree of pre-deployment effort is a key challenge. In this paper, we ask the question: can we perform indoor localization with no pre-deployment effort? Our setting is an indoor space, such as an office building or a mall, with WiFi coverage but where we do not assume knowledge of the physical layout, including the placement of the APs. Users carrying WiFi-enabled devices such as smartphones traverse this space in normal course. The mobile devices record Received Signal Strength (RSS) measurements corresponding to APs in their view at various (unknown) locations and report these to a localization server. Occasionally, a mobile device will also obtain and report a location fix, say by obtaining a GPS lock at the entrance or near a window. The centerpiece of our work is the EZ Localization algorithm, which runs on the localization server. The key intuition is that all of the observations reported to the server, even the many from unknown locations, are constrained by the physics of wireless propagation. EZ models these constraints and then uses a genetic algorithm to solve them. The results from our deployment in two different buildings are promising. Despite the absence of any explicit pre-deployment calibration, EZ yields a median localization error of 2m and 7m, respectively, in a small building and a large building, which is only somewhat worse than the 0.7m and 4m yielded by the best-performing but calibration-intensive Horus scheme [29] from prior work.", "title": "" } ]
[ { "docid": "neg:1840104_0", "text": "Additive manufacturing, commonly referred to as 3D printing, is a technology that builds three-dimensional structures and components layer by layer. Bioprinting is the use of 3D printing technology to fabricate tissue constructs for regenerative medicine from cell-laden bio-inks. 3D printing and bioprinting have huge potential in revolutionizing the field of tissue engineering and regenerative medicine. This paper reviews the application of 3D printing and bioprinting in the field of pediatrics.", "title": "" }, { "docid": "neg:1840104_1", "text": "Recommender systems represent user preferences for the purpose of suggesting items to purchase or examine. They have become fundamental applications in electronic commerce and information access, providing suggestions that effectively prune large information spaces so that users are directed toward those items that best meet their needs and preferences. A variety of techniques have been proposed for performing recommendation, including content-based, collaborative, knowledge-based and other techniques. To improve performance, these methods have sometimes been combined in hybrid recommenders. This paper surveys the landscape of actual and possible hybrid recommenders, and introduces a novel hybrid, system that combines content-based recommendation and collaborative filtering to recommend restaurants.", "title": "" }, { "docid": "neg:1840104_2", "text": "This paper presents a trajectory generator and an active compliance control scheme, unified in a framework to synthesize dynamic, feasible and compliant trot-walking locomotion cycles for a stiff-by-nature hydraulically actuated quadruped robot. At the outset, a CoP-based trajectory generator that is constructed using an analytical solution is implemented to obtain feasible and dynamically balanced motion references in a systematic manner. Initial conditions are uniquely determined for symmetrical motion patterns, enforcing that trajectories are seamlessly connected both in position, velocity and acceleration levels, regardless of the given support phase. The active compliance controller, used simultaneously, is responsible for sufficient joint position/force regulation. An admittance block is utilized to compute joint displacements that correspond to joint force errors. In addition to position feedback, these joint displacements are inserted to the position control loop as a secondary feedback term. In doing so, active compliance control is achieved, while the position/force trade-off is modulated via the virtual admittance parameters. Various trot-walking experiments are conducted with the proposed framework using HyQ, a ~ 75kg hydraulically actuated quadruped robot. We present results of repetitive, continuous, and dynamically equilibrated trot-walking locomotion cycles, both on level surface and uneven surface walking experiments.", "title": "" }, { "docid": "neg:1840104_3", "text": "Recently, deep learning and deep neural networks have attracted considerable attention and emerged as one predominant field of research in the artificial intelligence community. The developed techniques have also gained widespread use in various domains with good success, such as automatic speech recognition, information retrieval and text classification, etc. 
Among them, long short-term memory (LSTM) networks are well suited to such tasks, which can capture long-range dependencies among words efficiently, meanwhile alleviating the gradient vanishing or exploding problem during training effectively. Following this line of research, in this paper we explore a novel use of a Siamese LSTM based method to learn more accurate document representation for text categorization. Such a network architecture takes a pair of documents with variable lengths as the input and utilizes pairwise learning to generate distributed representations of documents that can more precisely render the semantic distance between any pair of documents. In doing so, documents associated with the same semantic or topic label could be mapped to similar representations having a relatively higher semantic similarity. Experiments conducted on two benchmark text categorization tasks, viz. IMDB and 20Newsgroups, show that using a three-layer deep neural network based classifier that takes a document representation learned from the Siamese LSTM sub-networks as the input can achieve competitive performance in relation to several state-of-the-art methods.", "title": "" }, { "docid": "neg:1840104_4", "text": "Recognizing 3-D objects in cluttered scenes is a challenging task. Common approaches find potential feature correspondences between a scene and candidate models by matching sampled local shape descriptors and select a few correspondences with the highest descriptor similarity to identify models that appear in the scene. However, real scans contain various nuisances, such as noise, occlusion, and featureless object regions. This makes selected correspondences have a certain portion of false positives, requiring adopting the time-consuming model verification many times to ensure accurate recognition. This paper proposes a 3-D object recognition approach with three key components. First, we construct a Signature of Geometric Centroids descriptor that is descriptive and robust, and apply it to find high-quality potential feature correspondences. Second, we measure geometric compatibility between a pair of potential correspondences based on isometry and three angle-preserving components. Third, we perform effective correspondence selection by using both descriptor similarity and compatibility with an auxiliary set of “less” potential correspondences. Experiments on publicly available data sets demonstrate the robustness and/or efficiency of the descriptor, selection approach, and recognition framework. Comparisons with the state-of-the-arts validate the superiority of our recognition approach, especially under challenging scenarios.", "title": "" }, { "docid": "neg:1840104_5", "text": "This paper introduces a novel three-phase buck-type unity power factor rectifier appropriate for high power Electric Vehicle battery charging mains interfaces. The characteristics of the converter, named the Swiss Rectifier, including the principle of operation, modulation strategy, suitable control structure, and dimensioning equations are described in detail. Additionally, the proposed rectifier is compared to a conventional 6-switch buck-type ac-dc power conversion. According to the results, the Swiss Rectifier is the topology of choice for a buck-type PFC. 
Finally, the feasibility of the Swiss Rectifier concept for buck-type rectifier applications is demonstrated by means of a hardware prototype.", "title": "" }, { "docid": "neg:1840104_6", "text": "Object selection and manipulation in world-fixed displays such as CAVE-type systems are typically achieved with tracked input devices, which lack the tangibility of real-world interactions. Conversely, due to the visual blockage of the real world, head-mounted displays allow the use of many types of real world objects that can convey realistic haptic feedback. To bridge this gap, we propose Specimen Box, an interaction technique that allows users to naturally hold a plausible physical object while manipulating virtual content inside it. This virtual content is rendered based on the tracked position of the box in relation to the user's point of view. Specimen Box provides the weight and tactile feel of an actual object and does not occlude rendered objects in the scene. The end result is that the user sees the virtual content as if it exists inside the clear physical box. We hypothesize that the effect of holding a physical box, which is a valid part of the overall scenario, would improve user performance and experience. To verify this hypothesis, we conducted a user study which involved a cognitively loaded inspection task requiring extensive manipulation of the box. We compared Specimen Box to Grab-and-Twirl, a naturalistic bimanual manipulation technique that closely mimics the mechanics of our proposed technique. Results show that in our specific task, performance was significantly faster and rotation rate was significantly lower with Specimen Box. Further, performance of the control technique was positively affected by experience with Specimen Box.", "title": "" }, { "docid": "neg:1840104_7", "text": "We present a study of how Linux kernel developers respond to bug reports issued by a static analysis tool. We found that developers prefer to triage reports in younger, smaller, and more actively-maintained files ( §2), first address easy-to-fix bugs and defer difficult (but possibly critical) bugs ( §3), and triage bugs in batches rather than individually (§4). Also, although automated tools cannot find many types of bugs, they can be effective at directing developers’ attentions towards parts of the codebase that contain up to 3X more user-reported bugs ( §5). Our insights into developer attitudes towards static analysis tools allow us to make suggestions for improving their usability and effectiveness. We feel that it could be effective to run static analysis tools continuously while programming and before committing code, to rank reports so that those most likely to be triaged are shown to developers first, to show the easiest reports to new developers, to perform deeper analysis on more actively-maintained code, and to use reports as indirect indicators of code quality and importance.", "title": "" }, { "docid": "neg:1840104_8", "text": "The Public Key Infrastructure (PKI) in use today on the Internet to secure communications has several drawbacks arising from its centralised and non-transparent design. In the past there has been instances of certificate authorities publishing rogue certificates for targeted attacks, and this has been difficult to immediately detect as certificate authorities are not transparent about the certificates they issue. 
Furthermore, the centralised selection of trusted certificate authorities by operating system and browser vendors means that it is not practical to untrust certificate authorities that have issued rogue certificates, as this would disrupt the TLS process for many other hosts.\n SCPKI is an alternative PKI system based on a decentralised and transparent design using a web-of-trust model and a smart contract on the Ethereum blockchain, to make it easily possible for rogue certificates to be detected when they are published. The web-of-trust model is designed such that an entity or authority in the system can verify (or vouch for) fine-grained attributes of another entity's identity (such as company name or domain name), as an alternative to the centralised certificate authority identity verification model.", "title": "" }, { "docid": "neg:1840104_9", "text": "A low-power forwarded-clock I/O transceiver architecture is presented that employs a high degree of output/input multiplexing, supply-voltage scaling with data rate, and low-voltage circuit techniques to enable low-power operation. The transmitter utilizes a 4:1 output multiplexing voltage-mode driver along with 4-phase clocking that is efficiently generated from a passive poly-phase filter. The output driver voltage swing is accurately controlled from 100–200 <formula formulatype=\"inline\"><tex Notation=\"TeX\">${\\rm mV}_{\\rm ppd}$</tex></formula> using a low-voltage pseudo-differential regulator that employs a partial negative-resistance load for improved low frequency gain. 1:8 input de-multiplexing is performed at the receiver equalizer output with 8 parallel input samplers clocked from an 8-phase injection-locked oscillator that provides more than 1UI de-skew range. In the transmitter clocking circuitry, per-phase duty-cycle and phase-spacing adjustment is implemented to allow adequate timing margins at low operating voltages. Fabricated in a general purpose 65 nm CMOS process, the transceiver achieves 4.8–8 Gb/s at 0.47–0.66 pJ/b energy efficiency for <formula formulatype=\"inline\"><tex Notation=\"TeX\">${\\rm V}_{\\rm DD}=0.6$</tex></formula>–0.8 V.", "title": "" }, { "docid": "neg:1840104_10", "text": "....................................................................... 2 Table of", "title": "" }, { "docid": "neg:1840104_11", "text": "Although a great deal of media attention has been given to the negative effects of playing video games, relatively less attention has been paid to the positive effects of engaging in this activity. Video games in health care provide ample examples of innovative ways to use existing commercial games for health improvement or surgical training. Tailor-made games help patients be more adherent to treatment regimens and train doctors how to manage patients in different clinical situations. In this review, examples in the scientific literature of commercially available and tailor-made games used for education and training with patients and medical students and doctors are summarized. There is a history of using video games with patients from the early days of gaming in the 1980s, and this has evolved into a focus on making tailor-made games for different disease groups, which have been evaluated in scientific trials more recently. Commercial video games have been of interest regarding their impact on surgical skill. More recently, some basic computer games have been developed and evaluated that train doctors in clinical skills. 
The studies presented in this article represent a body of work outlining positive effects of playing video games in the area of health care.", "title": "" }, { "docid": "neg:1840104_12", "text": "A method for two dimensional position finding of stationary targets whose bearing measurements suffer from indeterminable bias and random noise has been proposed. The algorithm uses convex optimization to minimize an error function which has been calculated based on circular as well as linear loci of error. Taking into account a number of observations, certain modifications have been applied to the initial crude method so as to arrive at a faster, more accurate method. Simulation results of the method illustrate up to 30% increase in accuracy compared with the well-known least-squares filter.", "title": "" }, { "docid": "neg:1840104_13", "text": "In 2011, Lake Erie experienced the largest harmful algal bloom in its recorded history, with a peak intensity over three times greater than any previously observed bloom. Here we show that long-term trends in agricultural practices are consistent with increasing phosphorus loading to the western basin of the lake, and that these trends, coupled with meteorological conditions in spring 2011, produced record-breaking nutrient loads. An extended period of weak lake circulation then led to abnormally long residence times that incubated the bloom, and warm and quiescent conditions after bloom onset allowed algae to remain near the top of the water column and prevented flushing of nutrients from the system. We further find that all of these factors are consistent with expected future conditions. If a scientifically guided management plan to mitigate these impacts is not implemented, we can therefore expect this bloom to be a harbinger of future blooms in Lake Erie.", "title": "" }, { "docid": "neg:1840104_14", "text": "Evidence for viewpoint-specific image-based object representations has been collected almost entirely using exemplar-specific recognition tasks. Recent results, however, implicate image-based processes in more categorical tasks, for instance when objects contain qualitatively different 3D parts. Although such discriminations approximate class-level recognition, they do not establish whether image-based representations can support generalization across members of an object class. This issue is critical to any theory of recognition, in that one hallmark of human visual competence is the ability to recognize unfamiliar instances of a familiar class. The present study addresses this question by testing whether viewpoint-specific representations for some members of a class facilitate the recognition of other members of that class. Experiment 1 demonstrates that familiarity with several members of a class of novel 3D objects generalizes in a viewpoint-dependent manner to cohort objects from the same class. Experiment 2 demonstrates that this generalization is based on the degree of familiarity and the degree of geometrical distinctiveness for particular viewpoints. Experiment 3 demonstrates that this generalization is restricted to visually-similar objects rather than all objects learned in a given context. These results support the hypothesis that image-based representations are viewpoint dependent, but that these representations generalize across members of perceptually-defined classes.
More generally, these results provide evidence for a new approach to image-based recognition in which object classes are represented as clusters of visually-similar viewpoint-specific representations.", "title": "" }, { "docid": "neg:1840104_15", "text": "Although game-tree search works well in perfect-information games, it is less suitable for imperfect-information games such as contract bridge. The lack of knowledge about the opponents' possible moves gives the game tree a very large branching factor, making it impossible to search a significant portion of this tree in a reasonable amount of time. This paper describes our approach for overcoming this problem. We represent information about bridge in a task network that is extended to represent multi-agency and uncertainty. Our game-playing procedure uses this task network to generate game trees in which the set of alternative choices is determined not by the set of possible actions, but by the set of available tactical and strategic schemes. We have tested this approach on declarer play in the game of bridge, in an implementation called Tignum 2. On 5000 randomly generated notrump deals, Tignum 2 beat the strongest commercially available program by 1394 to 1302, with 2304 ties. These results are statistically significant at the α = 0.05 level. Tignum 2 searched an average of only 8745.6 moves per deal in an average time of only 27.5 seconds per deal on a Sun SPARCstation 10. Further enhancements to Tignum 2 are currently underway.", "title": "" }, { "docid": "neg:1840104_16", "text": "The dopaminergic system plays a pivotal role in the central nervous system via its five diverse receptors (D1–D5). Dysfunction of the dopaminergic system is implicated in many neuropsychological diseases, including attention deficit hyperactivity disorder (ADHD), a common mental disorder that is prevalent in childhood. Understanding the relationship of five different dopamine (DA) receptors with ADHD will help us to elucidate different roles of these receptors and to develop therapeutic approaches of ADHD. This review summarized the ongoing research of DA receptor genes in ADHD pathogenesis and gathered the past published data with meta-analysis and revealed the high risk of DRD5, DRD2, and DRD4 polymorphisms in ADHD.", "title": "" }, { "docid": "neg:1840104_17", "text": "The Universal Serial Bus (USB) is an extremely popular interface standard for computer peripheral connections and is widely used in consumer Mass Storage Devices (MSDs). While current consumer USB MSDs provide relatively high transmission speed and are convenient to carry, the use of USB MSDs has been prohibited in many commercial and everyday environments primarily due to security concerns. Security protocols have been previously proposed and a recent approach for the USB MSDs is to utilize multi-factor authentication. This paper proposes significant enhancements to the three-factor control protocol that now makes it secure under many types of attacks including the password guessing attack, the denial-of-service attack, and the replay attack. The proposed solution is presented with a rigorous security analysis and practical computational cost analysis to demonstrate the usefulness of this new security protocol for consumer USB MSDs.", "title": "" }, { "docid": "neg:1840104_18", "text": "A microfluidic device designed to generate monodispersed picoliter to femtoliter sized droplet emulsions at controlled rates is presented.
This PDMS microfabricated device utilizes the geometry of the channel junctions in addition to the flow rates to control the droplet sizes. An expanding nozzle is used to control the breakup location of the droplet generation process. The droplet breakup occurs at a fixed point.", "title": "" }, { "docid": "neg:1840104_19", "text": "A microstrip antenna with frequency agility and polarization diversity is presented. Commercially available packaged RF microelectromechanical (MEMS) single-pole double-throw (SPDT) devices are used with a novel feed network to provide four states of polarization control: linear-vertical, linear-horizontal, left-hand circular and right-hand circular. Also, hyper-abrupt silicon junction tuning diodes are used to tune the antenna center frequency from 0.9-1.5 GHz. The microstrip antenna is 1 in x 1 in, and is fabricated on a 4 in x 4 in commercial-grade dielectric laminate. To the authors' knowledge, this is the first demonstration of an antenna element with four polarization states across a tunable bandwidth of 1.4:1.", "title": "" } ]
1840105
Taxonomy Construction Using Syntactic Contextual Evidence
[ { "docid": "pos:1840105_0", "text": "Knowledge is indispensable to understanding. The ongoing information explosion highlights the need to enable machines to better understand electronic text in human language. Much work has been devoted to creating universal ontologies or taxonomies for this purpose. However, none of the existing ontologies has the needed depth and breadth for universal understanding. In this paper, we present a universal, probabilistic taxonomy that is more comprehensive than any existing ones. It contains 2.7 million concepts harnessed automatically from a corpus of 1.68 billion web pages. Unlike traditional taxonomies that treat knowledge as black and white, it uses probabilities to model inconsistent, ambiguous and uncertain information it contains. We present details of how the taxonomy is constructed, its probabilistic modeling, and its potential applications in text understanding.", "title": "" }, { "docid": "pos:1840105_1", "text": "We present a novel approach to weakly supervised semantic class learning from the web, using a single powerful hyponym pattern combined with graph structures, which capture two properties associated with pattern-based extractions: popularity and productivity. Intuitively, a candidate is popular if it was discovered many times by other instances in the hyponym pattern. A candidate is productive if it frequently leads to the discovery of other instances. Together, these two measures capture not only frequency of occurrence, but also cross-checking that the candidate occurs both near the class name and near other class members. We developed two algorithms that begin with just a class name and one seed instance and then automatically generate a ranked list of new class instances. We conducted experiments on four semantic classes and consistently achieved high accuracies.", "title": "" } ]
[ { "docid": "neg:1840105_0", "text": "BACKGROUND\nAtopic dermatitis (AD) is characterized by dry skin and a hyperactive immune response to allergens, 2 cardinal features that are caused in part by epidermal barrier defects. Tight junctions (TJs) reside immediately below the stratum corneum and regulate the selective permeability of the paracellular pathway.\n\n\nOBJECTIVE\nWe evaluated the expression/function of the TJ protein claudin-1 in epithelium from AD and nonatopic subjects and screened 2 American populations for single nucleotide polymorphisms in the claudin-1 gene (CLDN1).\n\n\nMETHODS\nExpression profiles of nonlesional epithelium from patients with extrinsic AD, nonatopic subjects, and patients with psoriasis were generated using Illumina's BeadChips. Dysregulated intercellular proteins were validated by means of tissue staining and quantitative PCR. Bioelectric properties of epithelium were measured in Ussing chambers. Functional relevance of claudin-1 was assessed by using a knockdown approach in primary human keratinocytes. Twenty-seven haplotype-tagging SNPs in CLDN1 were screened in 2 independent populations with AD.\n\n\nRESULTS\nWe observed strikingly reduced expression of the TJ proteins claudin-1 and claudin-23 only in patients with AD, which were validated at the mRNA and protein levels. Claudin-1 expression inversely correlated with T(H)2 biomarkers. We observed a remarkable impairment of the bioelectric barrier function in AD epidermis. In vitro we confirmed that silencing claudin-1 expression in human keratinocytes diminishes TJ function while enhancing keratinocyte proliferation. Finally, CLDN1 haplotype-tagging SNPs revealed associations with AD in 2 North American populations.\n\n\nCONCLUSION\nCollectively, these data suggest that an impairment in tight junctions contributes to the barrier dysfunction and immune dysregulation observed in AD subjects and that this may be mediated in part by reductions in claudin-1.", "title": "" }, { "docid": "neg:1840105_1", "text": "Although space syntax has been successfully applied to many urban GIS studies, there is still a need to develop robust algorithms that support the automated derivation of graph representations. These graph structures are needed to apply the computational principles of space syntax and derive the morphological view of an urban structure. So far the application of space syntax principles to the study of urban structures has been a partially empirical and non-deterministic task, mainly due to the fact that an urban structure is modeled as a set of axial lines whose derivation is a non-computable process. This paper proposes an alternative model of space for the application of space syntax principles, based on the concepts of characteristic points defined as the nodes of an urban structure schematised as a graph. This method has several advantages over the axial line representation: it is computable and cognitively meaningful. Our proposal is illustrated by a case study applied to the city of Gävle in Sweden. We will also show that this method has several nice properties that surpass the axial line technique.", "title": "" }, { "docid": "neg:1840105_2", "text": "We present a new data structure for the c-approximate near neighbor problem (ANN) in the Euclidean space. For n points in R^d, our algorithm achieves O_c(n^ρ + d log n) query time and O_c(n^{1+ρ} + d log n) space, where ρ ≤ 0.73/c + O(1/c) + o_c(1).
This is the first improvement over the result by Andoni and Indyk (FOCS 2006) and the first data structure that bypasses a locality-sensitive hashing lower bound proved by O’Donnell, Wu and Zhou (ICS 2011). By known reductions we obtain a data structure for the Hamming space and l1 norm with ρ ≤ 0.73/c+O(1/c) + oc(1), which is the first improvement over the result of Indyk and Motwani (STOC 1998). Thesis Supervisor: Piotr Indyk Title: Professor of Electrical Engineering and Computer Science", "title": "" }, { "docid": "neg:1840105_3", "text": "Conventional railway track, of the type seen throughout the majority of the UK rail network, is made up of rails that are fixed to sleepers (ties), which, in turn, are supported by ballast. The ballast comprises crushed, hard stone and its main purpose is to distribute loads from the sleepers as rail traffic passes along the track. Over time, the stones in the ballast deteriorate, leading the track to settle and the geometry of the rails to change. Changes in geometry must be addressed in order that the track remains in a safe condition. Track inspections are carried out by measurement trains, which use sensors to precisely measure the track geometry. Network operators aim to carry out maintenance before the track geometry degrades to such an extent that speed restrictions or line closures are required. However, despite the fact that it restores the track geometry, the maintenance also worsens the general condition of the ballast, meaning that the rate of track geometry deterioration tends to increase as the amount of maintenance performed to the ballast increases. This paper considers the degradation, inspection and maintenance of a single one eighth of a mile section of railway track. A Markov model of such a section is produced. Track degradation data from the UK rail network has been analysed to produce degradation distributions which are used to define transition rates within the Markov model. The model considers the changing deterioration rate of the track section following maintenance and is used to analyse the effects of changing the level of track geometry degradation at which maintenance is requested for the section. The results are also used to show the effects of unrevealed levels of degradation. A model such as the one presented can be used to form an integral part of an asset management strategy and maintenance decision making process for railway track.", "title": "" }, { "docid": "neg:1840105_4", "text": "In this paper, we study the problem of image-text matching. Inferring the latent semantic alignment between objects or other salient stuff (e.g. snow, sky, lawn) and the corresponding words in sentences allows to capture fine-grained interplay between vision and language, and makes image-text matching more interpretable. Prior work either simply aggregates the similarity of all possible pairs of regions and words without attending differentially to more and less important words or regions, or uses a multi-step attentional process to capture limited number of semantic alignments which is less interpretable. In this paper, we present Stacked Cross Attention to discover the full latent alignments using both image regions and words in a sentence as context and infer image-text similarity. Our approach achieves the state-of-the-art results on the MSCOCO and Flickr30K datasets. 
On Flickr30K, our approach outperforms the current best methods by 22.1% relatively in text retrieval from image query, and 18.2% relatively in image retrieval with text query (based on Recall@1). On MS-COCO, our approach improves sentence retrieval by 17.8% relatively and image retrieval by 16.6% relatively (based on Recall@1 using the 5K test set). Code has been made available at: https: //github.com/kuanghuei/SCAN.", "title": "" }, { "docid": "neg:1840105_5", "text": "We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural-network with the same number of model parameters on both phantoms with focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain.", "title": "" }, { "docid": "neg:1840105_6", "text": "The physiological mechanisms that control energy balance are reciprocally linked to those that control reproduction, and together, these mechanisms optimize reproductive success under fluctuating metabolic conditions. Thus, it is difficult to understand the physiology of energy balance without understanding its link to reproductive success. The metabolic sensory stimuli, hormonal mediators and modulators, and central neuropeptides that control reproduction also influence energy balance. In general, those that increase ingestive behavior inhibit reproductive processes, with a few exceptions. Reproductive processes, including the hypothalamic-pituitary-gonadal (HPG) system and the mechanisms that control sex behavior are most proximally sensitive to the availability of oxidizable metabolic fuels. The role of hormones, such as insulin and leptin, are not understood, but there are two possible ways they might control food intake and reproduction. They either mediate the effects of energy metabolism on reproduction or they modulate the availability of metabolic fuels in the brain or periphery. 
This review examines the neural pathways from fuel detectors to the central effector system emphasizing the following points: first, metabolic stimuli can directly influence the effector systems independently from the hormones that bind to these central effector systems. For example, in some cases, excess energy storage in adipose tissue causes deficits in the pool of oxidizable fuels available for the reproductive system. Thus, in such cases, reproduction is inhibited despite a high body fat content and high plasma concentrations of hormones that are thought to stimulate reproductive processes. The deficit in fuels creates a primary sensory stimulus that is inhibitory to the reproductive system, despite high concentrations of hormones, such as insulin and leptin. Second, hormones might influence the central effector systems [including gonadotropin-releasing hormone (GnRH) secretion and sex behavior] indirectly by modulating the metabolic stimulus. Third, the critical neural circuitry involves extrahypothalamic sites, such as the caudal brain stem, and projections from the brain stem to the forebrain. Catecholamines, neuropeptide Y (NPY) and corticotropin-releasing hormone (CRH) are probably involved. Fourth, the metabolic stimuli and chemical messengers affect the motivation to engage in ingestive and sex behaviors instead of, or in addition to, affecting the ability to perform these behaviors. Finally, it is important to study these metabolic events and chemical messengers in a wider variety of species under natural or seminatural circumstances.", "title": "" }, { "docid": "neg:1840105_7", "text": "We propose a lattice Boltzmann method to treat moving boundary problems for solid objects moving in a fluid. The method is based on the simple bounce-back boundary scheme and interpolations. The proposed method is tested in two flows past an impulsively started cylinder moving in a channel in two dimensions: (a) the flow past an impulsively started cylinder moving in a transient Couette flow; and (b) the flow past an impulsively started cylinder moving in a channel flow at rest. We obtain satisfactory results and also verify the Galilean invariance of the lattice Boltzmann method. 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840105_8", "text": "This paper presents an automatic system for fire detection in video sequences. There are several previous methods to detect fire, however, all except two use spectroscopy or particle sensors. The two that use visual information suffer from the inability to cope with a moving camera or a moving scene. One of these is not able to work on general data, such as movie sequences. The other is too simplistic and unrestrictive in determining what is considered fire; so that it can be used reliably only in aircraft dry bays. We propose a system that uses color and motion information computed from video sequences to locate fire. This is done by first using an approach that is based upon creating a Gaussian-smoothed color histogram to detect the fire-colored pixels, and then using a temporal variation of pixels to determine which of these pixels are actually fire pixels. Next, some spurious fire pixels are automatically removed using an erode operation, and some missing fire pixels are found using region growing method. Unlike the two previous vision-based methods for fire detection, our method is applicable to more areas because of its insensitivity to camera motion. 
Two specific applications not possible with previous algorithms are the recognition of fire in the presence of global camera motion or scene motion and the recognition of fire in movies for possible use in an automatic rating system. We show that our method works in a variety of conditions, and that it can automatically determine when it has insufficient information.", "title": "" }, { "docid": "neg:1840105_9", "text": "In this paper, we consider the scene parsing problem and propose a novel MultiPath Feedback recurrent neural network (MPF-RNN) for parsing scene images. MPF-RNN can enhance the capability of RNNs in modeling long-range context information at multiple levels and better distinguish pixels that are easy to confuse. Different from feedforward CNNs and RNNs with only single feedback, MPFRNN propagates the contextual features learned at top layer through weighted recurrent connections to multiple bottom layers to help them learn better features with such “hindsight”. For better training MPF-RNN, we propose a new strategy that considers accumulative loss at multiple recurrent steps to improve performance of the MPF-RNN on parsing small objects. With these two novel components, MPF-RNN has achieved significant improvement over strong baselines (VGG16 and Res101) on five challenging scene parsing benchmarks, including traditional SiftFlow, Barcelona, CamVid, Stanford Background as well as the recently released large-scale ADE20K.", "title": "" }, { "docid": "neg:1840105_10", "text": "Classical mechanics was first envisaged by Newton, formed into a powerful tool by Euler, and brought to perfection by Lagrange and Laplace. It has served as the paradigm of science ever since. Even the great revolutions of 19th century phys icsnamely, the FaradayMaxwell electro-magnetic theory and the kinetic t h e o r y w e r e viewed as further support for the complete adequacy of the mechanistic world view. The physicist at the end of the 19th century had a coherent conceptual scheme which, in principle at least, answered all his questions about the world. The only work left to be done was the computing of the next decimal. This consensus began to unravel at the beginning of the 20th century. The work of Planck, Einstein, and Bohr simply could not be made to fit. The series of ad hoc moves by Bohr, Eherenfest, et al., now called the old quantum theory, was viewed by all as, at best, a stopgap. In the period 1925-27 a new synthesis was formed by Heisenberg, Schr6dinger, Dirac and others. This new synthesis was so successful that even today, fifty years later, physicists still teach quantum mechanics as it was formulated by these men. Nevertheless, two foundational tasks remained: that of providing a rigorous mathematical formulation of the theory, and that of providing a systematic comparison with classical mechanics so that the full ramifications of the quantum revolution could be clearly revealed. These tasks are, of course, related, and a possible fringe benefit of the second task might be the pointing of the way 'beyond quantum theory'. These tasks were taken up by von Neumann as a consequence of a seminar on the foundations of quantum mechanics conducted by Hilbert in the fall of 1926. 
In papers published in 1927 and in his book, The Mathemat ical Foundations of Quantum Mechanics, von Neumann provided the first completely rigorous", "title": "" }, { "docid": "neg:1840105_11", "text": "The Lexical Substitution task involves selecting and ranking lexical paraphrases for a target word in a given sentential context. We present PIC, a simple measure for estimating the appropriateness of substitutes in a given context. PIC outperforms another simple, comparable model proposed in recent work, especially when selecting substitutes from the entire vocabulary. Analysis shows that PIC improves over baselines by incorporating frequency biases into predictions.", "title": "" }, { "docid": "neg:1840105_12", "text": "Current CNN-based solutions to salient object detection (SOD) mainly rely on the optimization of cross-entropy loss (CELoss). Then the quality of detected saliency maps is often evaluated in terms of F-measure. In this paper, we investigate an interesting issue: can we consistently use the F-measure formulation in both training and evaluation for SOD? By reformulating the standard F-measure we propose the relaxed F-measure which is differentiable w.r.t the posterior and can be easily appended to the back of CNNs as the loss function. Compared to the conventional cross-entropy loss of which the gradients decrease dramatically in the saturated area, our loss function, named FLoss, holds considerable gradients even when the activation approaches the target. Consequently, the FLoss can continuously force the network to produce polarized activations. Comprehensive benchmarks on several popular datasets show that FLoss outperforms the stateof-the-arts with a considerable margin. More specifically, due to the polarized predictions, our method is able to obtain high quality saliency maps without carefully tuning the optimal threshold, showing significant advantages in real world applications.", "title": "" }, { "docid": "neg:1840105_13", "text": "Many Network Representation Learning (NRL) methods have been proposed to learn vector representations for vertices in a network recently. In this paper, we summarize most existing NRL methods into a unified two-step framework, including proximity matrix construction and dimension reduction. We focus on the analysis of proximity matrix construction step and conclude that an NRL method can be improved by exploring higher order proximities when building the proximity matrix. We propose Network Embedding Update (NEU) algorithm which implicitly approximates higher order proximities with theoretical approximation bound and can be applied on any NRL methods to enhance their performances. We conduct experiments on multi-label classification and link prediction tasks. Experimental results show that NEU can make a consistent and significant improvement over a number of NRL methods with almost negligible running time on all three publicly available datasets. The source code of this paper can be obtained from https://github.com/thunlp/NEU.", "title": "" }, { "docid": "neg:1840105_14", "text": "To run quantum algorithms on emerging gate-model quantum hardware, quantum circuits must be compiled to take into account constraints on the hardware. For near-term hardware, with only limited means to mitigate decoherence, it is critical to minimize the duration of the circuit. We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. 
While our approach is general, we focus on compiling to superconducting hardware architectures with nearest neighbor constraints. Our initial experiments focus on compiling Quantum Alternating Operator Ansatz (QAOA) circuits whose high number of commuting gates allow great flexibility in the order in which the gates can be applied. That freedom makes it more challenging to find optimal compilations but also means there is a greater potential win from more optimized compilation than for less flexible circuits. We map this quantum circuit compilation problem to a temporal planning problem, and generated a test suite of compilation problems for QAOA circuits of various sizes to a realistic hardware architecture. We report compilation results from several state-of-the-art temporal planners on this test set. This early empirical evaluation demonstrates that temporal planning is a viable approach to quantum circuit compilation.", "title": "" }, { "docid": "neg:1840105_15", "text": "Stable clones of neural stem cells (NSCs) have been isolated from the human fetal telencephalon. These self-renewing clones give rise to all fundamental neural lineages in vitro. Following transplantation into germinal zones of the newborn mouse brain they participate in aspects of normal development, including migration along established migratory pathways to disseminated central nervous system regions, differentiation into multiple developmentally and regionally appropriate cell types, and nondisruptive interspersion with host progenitors and their progeny. These human NSCs can be genetically engineered and are capable of expressing foreign transgenes in vivo. Supporting their gene therapy potential, secretory products from NSCs can correct a prototypical genetic metabolic defect in neurons and glia in vitro. The human NSCs can also replace specific deficient neuronal populations. Cryopreservable human NSCs may be propagated by both epigenetic and genetic means that are comparably safe and effective. By analogy to rodent NSCs, these observations may allow the development of NSC transplantation for a range of disorders.", "title": "" }, { "docid": "neg:1840105_16", "text": "Body Area Networks are unique in that the large-scale mobility of users allows the network itself to travel across a diverse range of operating domains or even to enter new and unknown environments. This network mobility is unlike node mobility in that sensed changes in inter-network interference level may be used to identify opportunities for intelligent inter-networking, for example, by merging or splitting from other networks, thus providing an extra degree of freedom. This paper introduces the concept of context-aware bodynets for interactive environments using inter-network interference sensing. New ideas are explored at both the physical and link layers with an investigation based on a 'smart' office environment. A series of carefully controlled measurements of the mesh interconnectivity both within and between an ambulatory body area network and a stationary desk-based network were performed using 2.45 GHz nodes. Received signal strength and carrier to interference ratio time series for selected node to node links are presented. 
The results provide an insight into the potential interference between the mobile and static networks and highlight the possibility for automatic identification of network merging and splitting opportunities.", "title": "" }, { "docid": "neg:1840105_17", "text": "Research on the predictive bias of cognitive tests has generally shown (a) no slope effects and (b) small intercept effects, typically favoring the minority group. Aguinis, Culpepper, and Pierce (2010) simulated data and demonstrated that statistical artifacts may have led to a lack of power to detect slope differences and an overestimate of the size of the intercept effect. In response to Aguinis et al.'s (2010) call for a revival of predictive bias research, we used data on over 475,000 students entering college between 2006 and 2008 to estimate slope and intercept differences in the college admissions context. Corrections for statistical artifacts were applied. Furthermore, plotting of regression lines supplemented traditional analyses of predictive bias to offer additional evidence of the form and extent to which predictive bias exists. Congruent with previous research on bias of cognitive tests, using SAT scores in conjunction with high school grade-point average to predict first-year grade-point average revealed minimal differential prediction (ΔR²intercept ranged from .004 to .032 and ΔR²slope ranged from .001 to .013 depending on the corrections applied and comparison groups examined). We found, on the basis of regression plots, that college grades were consistently overpredicted for Black and Hispanic students and underpredicted for female students.", "title": "" }, { "docid": "neg:1840105_18", "text": "PURPOSE OF REVIEW\nThis review discusses the rational development of guidelines for the management of neonatal sepsis in developing countries.\n\n\nRECENT FINDINGS\nDiagnosis of neonatal sepsis with high specificity remains challenging in developing countries. Aetiology data, particularly from rural, community-based studies, are very limited, but molecular tests to improve diagnostics are being tested in a community-based study in South Asia. Antibiotic susceptibility data are limited, but suggest reducing susceptibility to first-and second-line antibiotics in both hospital and community-acquired neonatal sepsis. Results of clinical trials in South Asia and sub-Saharan Africa assessing feasibility of simplified antibiotic regimens are awaited.\n\n\nSUMMARY\nEffective management of neonatal sepsis in developing countries is essential to reduce neonatal mortality and morbidity. Simplified antibiotic regimens are currently being examined in clinical trials, but reduced antimicrobial susceptibility threatens current empiric treatment strategies. Improved clinical and microbiological surveillance is essential, to inform current practice, treatment guidelines, and monitor implementation of policy changes.", "title": "" }, { "docid": "neg:1840105_19", "text": "Cloud computing, as a concept, promises cost savings to end-users by letting them outsource their non-critical business functions to a third party in pay-as-you-go style. However, to enable economic pay-as-you-go services, we need Cloud middleware that maximizes sharing and support near zero costs for unused applications. Multi-tenancy, which let multiple tenants (user) to share a single application instance securely, is a key enabler for building such a middleware. 
On the other hand, Business processes capture Business logic of organizations in an abstract and reusable manner, and hence play a key role in most organizations. This paper presents the design and architecture of a Multi-tenant Workflow engine while discussing in detail potential use cases of such architecture. Primary contributions of this paper are motivating workflow multi-tenancy, and the design and implementation of multi-tenant workflow engine that enables multiple tenants to run their workflows securely within the same workflow engine instance without modifications to the workflows.", "title": "" } ]
1840106
An End-to-End Text-Independent Speaker Identification System on Short Utterances
[ { "docid": "pos:1840106_0", "text": "We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity. The embeddings generated by Deep Speaker can be used for many tasks, including speaker identification, verification, and clustering. We experiment with ResCNN and GRU architectures to extract the acoustic features, then mean pool to produce utterance-level speaker embeddings, and train using triplet loss based on cosine similarity. Experiments on three distinct datasets suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For example, Deep Speaker reduces the verification equal error rate by 50% (relatively) and improves the identification accuracy by 60% (relatively) on a text-independent dataset. We also present results that suggest adapting from a model trained with Mandarin can improve accuracy for English speaker recognition.", "title": "" }, { "docid": "pos:1840106_1", "text": "It is known that the performance of the i-vectors/PLDA based speaker verification systems is affected in the cases of short utterances and limited training data. The performance degradation appears because the shorter the utterance, the less reliable the extracted i-vector is, and because the total variability covariance matrix and the underlying PLDA matrices need a significant amount of data to be robustly estimated. Considering the “MIT Mobile Device Speaker Verification Corpus” (MIT-MDSVC) as a representative dataset for robust speaker verification tasks on limited amount of training data, this paper investigates which configuration and which parameters lead to the best performance of an i-vectors/PLDA based speaker verification. The i-vectors/PLDA based system achieved good performance only when the total variability matrix and the underlying PLDA matrices were trained with data belonging to the enrolled speakers. This way of training means that the system should be fully retrained when new enrolled speakers were added. The performance of the system was more sensitive to the amount of training data of the underlying PLDA matrices than to the amount of training data of the total variability matrix. Overall, the Equal Error Rate performance of the i-vectors/PLDA based system was around 1% below the performance of a GMM-UBM system on the chosen dataset. The paper presents at the end some preliminary experiments in which the utterances comprised in the CSTR VCTK corpus were used besides utterances from MIT-MDSVC for training the total variability covariance matrix and the underlying PLDA matrices.", "title": "" } ]
[ { "docid": "neg:1840106_0", "text": "Scale-model laboratory tests of a surface effect ship (SES) conducted in a near-shore transforming wave field are discussed. Waves approaching a beach in a wave tank were used to simulate transforming sea conditions and a series of experiments were conducted with a 1:30 scale model SES traversing in heads seas. Pitch and heave motion of the vehicle were recorded in support of characterizing the seakeeping response of the vessel in developing seas. The aircushion pressure and the vessel speed were varied over a range of values and the corresponding vehicle responses were analyzed to identify functional dependence on these parameters. The results show a distinct correlation between the air-cushion pressure and the response amplitude of both pitch and heave.", "title": "" }, { "docid": "neg:1840106_1", "text": "Content-based image retrieval (CBIR) has attracted much attention due to the exponential growth of digital image collections that have become available in recent years. Relevance feedback (RF) in the context of search engines is a query expansion technique, which is based on relevance judgments about the top results that are initially returned for a given query. RF can be obtained directly from end users, inferred indirectly from user interactions with a result list, or even assumed (aka pseudo relevance feedback). RF information is used to generate a new query, aiming to re-focus the query towards more relevant results.\n This paper presents a methodology for use of signature based image retrieval with a user in the loop to improve retrieval performance. The significance of this study is twofold. First, it shows how to effectively use explicit RF with signature based image retrieval to improve retrieval quality and efficiency. Second, this approach provides a mechanism for end users to refine their image queries. This is an important contribution because, to date, there is no effective way to reformulate an image query; our approach provides a solution to this problem.\n Empirical experiments have been carried out to study the behaviour and optimal parameter settings of this approach. Empirical evaluations based on standard benchmarks demonstrate the effectiveness of the proposed approach in improving the performance of CBIR in terms of recall, precision, speed and scalability.", "title": "" }, { "docid": "neg:1840106_2", "text": "This is the second paper in a four-part series detailing the relative merits of the treatment strategies, clinical techniques and dental materials for the restoration of health, function and aesthetics for the dentition. In this paper the management of wear in the anterior dentition is discussed, using three case studies as illustration.", "title": "" }, { "docid": "neg:1840106_3", "text": "In most problem-solving activities, feedback is received at the end of an action sequence. This creates a credit-assignment problem where the learner must associate the feedback with earlier actions, and the interdependencies of actions require the learner to either remember past choices of actions (internal state information) or rely on external cues in the environment (external state information) to select the right actions. We investigated the nature of explicit and implicit learning processes in the credit-assignment problem using a probabilistic sequential choice task with and without external state information. 
We found that when explicit memory encoding was dominant, subjects were faster to select the better option in their first choices than in the last choices; when implicit reinforcement learning was dominant subjects were faster to select the better option in their last choices than in their first choices. However, implicit reinforcement learning was only successful when distinct external state information was available. The results suggest the nature of learning in credit assignment: an explicit memory encoding process that keeps track of internal state information and a reinforcement-learning process that uses state information to propagate reinforcement backwards to previous choices. However, the implicit reinforcement learning process is effective only when the valences can be attributed to the appropriate states in the system – either internally generated states in the cognitive system or externally presented stimuli in the environment.", "title": "" }, { "docid": "neg:1840106_4", "text": "Turkish Wikipedia Named-Entity Recognition and Text Categorization (TWNERTC) dataset is a collection of automatically categorized and annotated sentences obtained from Wikipedia. We constructed large-scale gazetteers by using a graph crawler algorithm to extract relevant entity and domain information from a semantic knowledge base, Freebase1. The constructed gazetteers contains approximately 300K entities with thousands of fine-grained entity types under 77 different domains. Since automated processes are prone to ambiguity, we also introduce two new content specific noise reduction methodologies. Moreover, we map fine-grained entity types to the equivalent four coarse-grained types, person, loc, org, misc. Eventually, we construct six different dataset versions and evaluate the quality of annotations by comparing ground truths from human annotators. We make these datasets publicly available to support studies on Turkish named-entity recognition (NER) and text categorization (TC).", "title": "" }, { "docid": "neg:1840106_5", "text": "Short-term traffic forecasting is becoming more important in intelligent transportation systems. The k-nearest neighbours (kNN) method is widely used for short-term traffic forecasting. However, the self-adjustment of kNN parameters has been a problem due to dynamic traffic characteristics. This paper proposes a fully automatic dynamic procedure kNN (DP-kNN) that makes the kNN parameters self-adjustable and robust without predefined models or training for the parameters. A real-world dataset with more than one year traffic records is used to conduct experiments. The results show that DP-kNN can perform better than manually adjusted kNN and other benchmarking methods in terms of accuracy on average. This study also discusses the difference between holiday and workday traffic prediction as well as the usage of neighbour distance measurement.", "title": "" }, { "docid": "neg:1840106_6", "text": "A goal of runtime software-fault monitoring is to observe software behavior to determine whether it complies with its intended behavior. Monitoring allows one to analyze and recover from detected faults, providing additional defense against catastrophic failure. Although runtime monitoring has been in use for over 30 years, there is renewed interest in its application to fault detection and recovery, largely because of the increasing complexity and ubiquitous nature of software systems. 
We present taxonomy that developers and researchers can use to analyze and differentiate recent developments in runtime software fault-monitoring approaches. The taxonomy categorizes the various runtime monitoring research by classifying the elements that are considered essential for building a monitoring system, i.e., the specification language used to define properties; the monitoring mechanism that oversees the program's execution; and the event handler that captures and communicates monitoring results. After describing the taxonomy, the paper presents the classification of the software-fault monitoring systems described in the literature.", "title": "" }, { "docid": "neg:1840106_7", "text": "BACKGROUND AND PURPOSE\nPatients with atrial fibrillation and previous ischemic stroke (IS)/transient ischemic attack (TIA) are at high risk of recurrent cerebrovascular events despite anticoagulation. In this prespecified subgroup analysis, we compared warfarin with edoxaban in patients with versus without previous IS/TIA.\n\n\nMETHODS\nENGAGE AF-TIMI 48 (Effective Anticoagulation With Factor Xa Next Generation in Atrial Fibrillation-Thrombolysis in Myocardial Infarction 48) was a double-blind trial of 21 105 patients with atrial fibrillation randomized to warfarin (international normalized ratio, 2.0-3.0; median time-in-therapeutic range, 68.4%) versus once-daily edoxaban (higher-dose edoxaban regimen [HDER], 60/30 mg; lower-dose edoxaban regimen, 30/15 mg) with 2.8-year median follow-up. Primary end points included all stroke/systemic embolic events (efficacy) and major bleeding (safety). Because only HDER is approved, we focused on the comparison of HDER versus warfarin.\n\n\nRESULTS\nOf 5973 (28.3%) patients with previous IS/TIA, 67% had CHADS2 (congestive heart failure, hypertension, age, diabetes, prior stroke/transient ischemic attack) >3 and 36% were ≥75 years. Compared with 15 132 without previous IS/TIA, patients with previous IS/TIA were at higher risk of both thromboembolism and bleeding (stroke/systemic embolic events 2.83% versus 1.42% per year; P<0.001; major bleeding 3.03% versus 2.64% per year; P<0.001; intracranial hemorrhage, 0.70% versus 0.40% per year; P<0.001). Among patients with previous IS/TIA, annualized intracranial hemorrhage rates were lower with HDER than with warfarin (0.62% versus 1.09%; absolute risk difference, 47 [8-85] per 10 000 patient-years; hazard ratio, 0.57; 95% confidence interval, 0.36-0.92; P=0.02). No treatment subgroup interactions were found for primary efficacy (P=0.86) or for intracranial hemorrhage (P=0.28).\n\n\nCONCLUSIONS\nPatients with atrial fibrillation with previous IS/TIA are at high risk of recurrent thromboembolism and bleeding. HDER is at least as effective and is safer than warfarin, regardless of the presence or the absence of previous IS or TIA.\n\n\nCLINICAL TRIAL REGISTRATION\nURL: http://www.clinicaltrials.gov. Unique identifier: NCT00781391.", "title": "" }, { "docid": "neg:1840106_8", "text": "Decoronation of ankylosed teeth in infraposition was introduced in 1984 by Malmgren and co-workers (1). This method is used all over the world today. It has been clinically shown that the procedure preserves the alveolar width and rebuilds lost vertical bone of the alveolar ridge in growing individuals. The biological explanation is that the decoronated root serves as a matrix for new bone development during resorption of the root and that the lost vertical alveolar bone is rebuilt during eruption of adjacent teeth. 
First a new periosteum is formed over the decoronated root, allowing vertical alveolar growth. Then the interdental fibers that have been severed by the decoronation procedure are reorganized between adjacent teeth. The continued eruption of these teeth mediates marginal bone apposition via the dental-periosteal fiber complex. The erupting teeth are linked with the periosteum covering the top of the alveolar socket and indirectly via the alveolar gingival fibers, which are inserted in the alveolar crest and in the lamina propria of the interdental papilla. Both structures can generate a traction force resulting in bone apposition on top of the alveolar crest. This theoretical biological explanation is based on known anatomical features, known eruption processes and clinical observations.", "title": "" }, { "docid": "neg:1840106_9", "text": "Ontology is a sub-field of Philosophy. It is the study of the nature of existence and a branch of metaphysics concerned with identifying the kinds of things that actually exists and how to describe them. It describes formally a domain of discourse. Ontology is used to capture knowledge about some domain of interest and to describe the concepts in the domain and also to express the relationships that hold between those concepts. Ontology consists of finite list of terms (or important concepts) and the relationships among the terms (or Classes of Objects). Relationships typically include hierarchies of classes. It is an explicit formal specification of conceptualization and the science of describing the kind of entities in the world and how they are related (W3C). Web Ontology Language (OWL) is a language for defining and instantiating web ontologies (a W3C Recommendation). OWL ontology includes description of classes, properties and their instances. OWL is used to explicitly represent the meaning of terms in vocabularies and the relationships between those terms. Such representation of terms and their interrelationships is called ontology. OWL has facilities for expressing meaning and semantics and the ability to represent machine interpretable content on the Web. OWL is designed for use by applications that need to process the content of information instead of just presenting information to humans. This is used for knowledge representation and also is useful to derive logical consequences from OWL formal semantics.", "title": "" }, { "docid": "neg:1840106_10", "text": "This literature review synthesized the existing research on cloud computing from a business perspective by investigating 60 sources and integrates their results in order to offer an overview about the existing body of knowledge. Using an established framework our results are structured according to the four dimensions following: cloud computing characteristics, adoption determinants, governance mechanisms, and business impact. This work reveals a shifting focus from technological aspects to a broader understanding of cloud computing as a new IT delivery model. There is a growing consensus about its characteristics and design principles. Unfortunately, research on factors driving or inhibiting the adoption of cloud services, as well as research investigating its business impact empirically, is still limited. This may be attributed to cloud computing being a rather recent research topic. 
Research on structures, processes and employee qualification to govern cloud services is at an early stage as well.", "title": "" }, { "docid": "neg:1840106_11", "text": "The objective of this work is to infer the 3D shape of an object from a single image. We use sculptures as our training and test bed, as these have great variety in shape and appearance. To achieve this we build on the success of multiple view geometry (MVG) which is able to accurately provide correspondences between images of 3D objects under varying viewpoint and illumination conditions, and make the following contributions: first, we introduce a new loss function that can harness image-to-image correspondences to provide a supervisory signal to train a deep network to infer a depth map. The network is trained end-to-end by differentiating through the camera. Second, we develop a processing pipeline to automatically generate a large scale multi-view set of correspondences for training the network. Finally, we demonstrate that we can indeed obtain a depth map of a novel object from a single image for a variety of sculptures with varying shape/texture, and that the network generalises at test time to new domains (e.g. synthetic images).", "title": "" }, { "docid": "neg:1840106_12", "text": "Electrophysiological recording studies in the dorsocaudal region of medial entorhinal cortex (dMEC) of the rat reveal cells whose spatial firing fields show a remarkably regular hexagonal grid pattern (Fyhn et al., 2004; Hafting et al., 2005). We describe a symmetric, locally connected neural network, or spin glass model, that spontaneously produces a hexagonal grid of activity bumps on a two-dimensional sheet of units. The spatial firing fields of the simulated cells closely resemble those of dMEC cells. A collection of grids with different scales and/or orientations forms a basis set for encoding position. Simulations show that the animal's location can easily be determined from the population activity pattern. Introducing an asymmetry in the model allows the activity bumps to be shifted in any direction, at a rate proportional to velocity, to achieve path integration. Furthermore, information about the structure of the environment can be superimposed on the spatial position signal by modulation of the bump activity levels without significantly interfering with the hexagonal periodicity of firing fields. Our results support the conjecture of Hafting et al. (2005) that an attractor network in dMEC may be the source of path integration information afferent to hippocampus.", "title": "" }, { "docid": "neg:1840106_13", "text": "All currently available network intrusion detection (ID) systems rely upon a mechanism of data collection---passive protocol analysis---which is fundamentally flawed. In passive protocol analysis, the intrusion detection system (IDS) unobtrusively watches all traffic on the network, and scrutinizes it for patterns of suspicious activity. We outline in this paper two basic problems with the reliability of passive protocol analysis: (1) there isn't enough information on the wire on which to base conclusions about what is actually happening on networked machines, and (2) the fact that the system is passive makes it inherently \"fail-open,\" meaning that a compromise in the availability of the IDS doesn't compromise the availability of the network. 
We define three classes of attacks which exploit these fundamental problems---insertion, evasion, and denial of service attacks --and describe how to apply these three types of attacks to IP and TCP protocol analysis. We present the results of tests of the efficacy of our attacks against four of the most popular network intrusion detection systems on the market. All of the ID systems tested were found to be vulnerable to each of our attacks. This indicates that network ID systems cannot be fully trusted until they are fundamentally redesigned.", "title": "" }, { "docid": "neg:1840106_14", "text": "This paper focuses on the problem of vision-based obstacle detection and tracking for unmanned aerial vehicle navigation. A real-time object localization and tracking strategy from monocular image sequences is developed by effectively integrating the object detection and tracking into a dynamic Kalman model. At the detection stage, the object of interest is automatically detected and localized from a saliency map computed via the image background connectivity cue at each frame; at the tracking stage, a Kalman filter is employed to provide a coarse prediction of the object state, which is further refined via a local detector incorporating the saliency map and the temporal information between two consecutive frames. Compared with existing methods, the proposed approach does not require any manual initialization for tracking, runs much faster than the state-of-the-art trackers of its kind, and achieves competitive tracking performance on a large number of image sequences. Extensive experiments demonstrate the effectiveness and superior performance of the proposed approach.", "title": "" }, { "docid": "neg:1840106_15", "text": "Budget allocation in online advertising deals with distributing the campaign (insertion order) level budgets to different sub-campaigns which employ different targeting criteria and may perform differently in terms of return-on-investment (ROI). In this paper, we present the efforts at Turn on how to best allocate campaign budget so that the advertiser or campaign-level ROI is maximized. To do this, it is crucial to be able to correctly determine the performance of sub-campaigns. This determination is highly related to the action-attribution problem, i.e. to be able to find out the set of ads, and hence the sub-campaigns that provided them to a user, that an action should be attributed to. For this purpose, we employ both last-touch (last ad gets all credit) and multi-touch (many ads share the credit) attribution methodologies. We present the algorithms deployed at Turn for the attribution problem, as well as their parallel implementation on the large advertiser performance datasets. We conclude the paper with our empirical comparison of last-touch and multi-touch attribution-based budget allocation in a real online advertising setting.", "title": "" }, { "docid": "neg:1840106_16", "text": "Centralised patient monitoring systems are in huge demand as they not only reduce the labour work and cost but also the time of the clinical hospitals. Earlier wired communication was used but now Zigbee which is a wireless mesh network is preferred as it reduces the cost.
Zigbee is also preferred over Bluetooth and infrared wireless communication because it is energy efficient, has low cost and long distance range (several miles). In this paper we proposed wireless transmission of data between a patient and centralised unit using Zigbee module. The paper is divided into two sections. First is patient monitoring system for multiple patients and second is the centralised patient monitoring system. These two systems are communicating using wireless transmission technology i.e. Zigbee. In the first section we have patient monitoring of multiple patients. Each patient's multiple physiological parameters like ECG, temperature, heartbeat are measured at their respective unit. If any physiological parameter value exceeds the threshold value, emergency alarm and LED blinks at each patient unit. This allows a doctor to read various physiological parameters of a patient in real time. The values are displayed on the LCD at each patient unit. Similarly multiple patients multiple physiological parameters are being measured using particular sensors and multiple patient's patient monitoring system is made. In the second section centralised patient monitoring system is made in which all multiple patients multiple parameters are displayed on a central monitor using MATLAB. ECG graph is also displayed on the central monitor using MATLAB software. The central LCD also displays parameters like heartbeat and temperature. The module is less expensive, consumes low power and has good range.", "title": "" }, { "docid": "neg:1840106_17", "text": "Inspection of printed circuit board (PCB) has been a crucial process in the electronic manufacturing industry to guarantee product quality & reliability, cut manufacturing cost and to increase production. The PCB inspection involves detection of defects in the PCB and classification of those defects in order to identify the roots of defects. In this paper, all 14 types of defects are detected and are classified in all possible classes using referential inspection approach. The proposed algorithm is mainly divided into five stages: Image registration, Pre-processing, Image segmentation, Defect detection and Defect classification. The algorithm is able to perform inspection even when captured test image is rotated, scaled and translated with respect to template image which makes the algorithm rotation, scale and translation in-variant. The novelty of the algorithm lies in its robustness to analyze a defect in its different possible appearance and severity. In addition to this, algorithm takes only 2.528 s to inspect a PCB image. The efficacy of the proposed algorithm is verified by conducting experiments on the different PCB images and it shows that the proposed algorithm is suitable for automatic visual inspection of PCBs.", "title": "" }, { "docid": "neg:1840106_18", "text": "The area of machine learning has made considerable progress over the past decade, enabled by the widespread availability of large datasets, as well as by improved algorithms and models.
Given the large computational demands of machine learning workloads, parallelism, implemented either through single-node concurrency or through multi-node distribution, has been a third key ingredient to advances in machine learning.\n The goal of this tutorial is to provide the audience with an overview of standard distribution techniques in machine learning, with an eye towards the intriguing trade-offs between synchronization and communication costs of distributed machine learning algorithms, on the one hand, and their convergence, on the other.The tutorial will focus on parallelization strategies for the fundamental stochastic gradient descent (SGD) algorithm, which is a key tool when training machine learning models, from classical instances such as linear regression, to state-of-the-art neural network architectures.\n The tutorial will describe the guarantees provided by this algorithm in the sequential case, and then move on to cover both shared-memory and message-passing parallelization strategies, together with the guarantees they provide, and corresponding trade-offs. The presentation will conclude with a broad overview of ongoing research in distributed and concurrent machine learning. The tutorial will assume no prior knowledge beyond familiarity with basic concepts in algebra and analysis.", "title": "" }, { "docid": "neg:1840106_19", "text": "While advances in computing resources have made processing enormous amounts of data possible, human ability to identify patterns in such data has not scaled accordingly. Thus, efficient computational methods for condensing and simplifying data are becoming vital for extracting actionable insights. In particular, while data summarization techniques have been studied extensively, only recently has summarizing interconnected data, or graphs, become popular. This survey is a structured, comprehensive overview of the state-of-the-art methods for summarizing graph data. We first broach the motivation behind and the challenges of graph summarization. We then categorize summarization approaches by the type of graphs taken as input and further organize each category by core methodology. Finally, we discuss applications of summarization on real-world graphs and conclude by describing some open problems in the field.", "title": "" } ]
1840107
Hedging Deep Features for Visual Tracking.
[ { "docid": "pos:1840107_0", "text": "In this paper, we treat tracking as a learning problem of estimating the location and the scale of an object given its previous location, scale, as well as current and previous image frames. Given a set of examples, we train convolutional neural networks (CNNs) to perform the above estimation task. Different from other learning methods, the CNNs learn both spatial and temporal features jointly from image pairs of two adjacent frames. We introduce multiple path ways in CNN to better fuse local and global information. A creative shift-variant CNN architecture is designed so as to alleviate the drift problem when the distracting objects are similar to the target in cluttered environment. Furthermore, we employ CNNs to estimate the scale through the accurate localization of some key points. These techniques are object-independent so that the proposed method can be applied to track other types of object. The capability of the tracker of handling complex situations is demonstrated in many testing sequences.", "title": "" }, { "docid": "pos:1840107_1", "text": "In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, which only uses the original observation of the target from the first frame, suffices to reach state-of-the-art performance. Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot.", "title": "" }, { "docid": "pos:1840107_2", "text": "Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears.", "title": "" } ]
[ { "docid": "neg:1840107_0", "text": "In this paper, an unequal 1:N Wilkinson power divider with variable power dividing ratio is proposed. The proposed unequal power divider is composed of the conventional Wilkinson divider structure, rectangular-shaped defected ground structure (DGS), island in DGS, and varactor diodes of which capacitance is adjustable according to bias voltage. The high impedance value of microstrip line having DGS is going up and down by adjusting the bias voltage for varactor diodes. Output power dividing ratio (N) is adjusted from 2.59 to 10.4 for the unequal power divider with 2 diodes.", "title": "" }, { "docid": "neg:1840107_1", "text": "Gadolinium based contrast agents (GBCAs) play an important role in the diagnostic evaluation of many patients. The safety of these agents has been once again questioned after gadolinium deposits were observed and measured in brain and bone of patients with normal renal function. This retention of gadolinium in the human body has been termed \"gadolinium storage condition\". The long-term and cumulative effects of retained gadolinium in the brain and elsewhere are not as yet understood. Recently, patients who report that they suffer from chronic symptoms secondary to gadolinium exposure and retention created gadolinium-toxicity on-line support groups. Their self-reported symptoms have recently been published. Bone and joint complaints, and skin changes were two of the most common complaints. This condition has been termed \"gadolinium deposition disease\". In this review we will address gadolinium toxicity disorders, from acute adverse reactions to GBCAs to gadolinium deposition disease, with special emphasis on the latter, as it is the most recently described and least known.", "title": "" }, { "docid": "neg:1840107_2", "text": "In this review, we present the recent developments and future prospects of improving nitrogen use efficiency (NUE) in crops using various complementary approaches. These include conventional breeding and molecular genetics, in addition to alternative farming techniques based on no-till continuous cover cropping cultures and/or organic nitrogen (N) nutrition. Whatever the mode of N fertilization, an increased knowledge of the mechanisms controlling plant N economy is essential for improving NUE and for reducing excessive input of fertilizers, while maintaining an acceptable yield and sufficient profit margin for the farmers. Using plants grown under agronomic conditions, with different tillage conditions, in pure or associated cultures, at low and high N mineral fertilizer input, or using organic fertilization, it is now possible to develop further whole plant agronomic and physiological studies. These can be combined with gene, protein and metabolite profiling to build up a comprehensive picture depicting the different steps of N uptake, assimilation and recycling to produce either biomass in vegetative organs or proteins in storage organs. We provide a critical overview as to how our understanding of the agro-ecophysiological, physiological and molecular controls of N assimilation in crops, under varying environmental conditions, has been improved. We OPEN ACCESS Sustainability 2011, 3 1453 have used combined approaches, based on agronomic studies, whole plant physiology, quantitative genetics, forward and reverse genetics and the emerging systems biology. 
Long-term sustainability may require a gradual transition from synthetic N inputs to legume-based crop rotation, including continuous cover cropping systems, where these may be possible in certain areas of the world, depending on climatic conditions. Current knowledge and prospects for future agronomic development and application for breeding crops adapted to lower mineral fertilizer input and to alternative farming techniques are explored, whilst taking into account the constraints of both the current world economic situation and the environment.", "title": "" }, { "docid": "neg:1840107_3", "text": "Open data marketplaces have emerged as a mode of addressing open data adoption barriers. However, knowledge of how such marketplaces affect digital service innovation in open data ecosystems is limited. This paper explores their value proposition for open data users based on an exploratory case study. Five prominent perceived values are identified: lower task complexity, higher access to knowledge, increased possibilities to influence, lower risk and higher visibility. The impact on open data adoption barriers is analyzed and the consequences for ecosystem sustainability is discussed. The paper concludes that open data marketplaces can lower the threshold of using open data by providing better access to open data and associated support services, and by increasing knowledge transfer within the ecosystem.", "title": "" }, { "docid": "neg:1840107_4", "text": "A hallmark of glaucomatous optic nerve damage is retinal ganglion cell (RGC) death. RGCs, like other central nervous system neurons, have a limited capacity to survive or regenerate an axon after injury. Strategies that prevent or slow down RGC degeneration, in combination with intraocular pressure management, may be beneficial to preserve vision in glaucoma. Recent progress in neurobiological research has led to a better understanding of the molecular pathways that regulate the survival of injured RGCs. Here we discuss a variety of experimental strategies including intraocular delivery of neuroprotective molecules, viral-mediated gene transfer, cell implants and stem cell therapies, which share the ultimate goal of promoting RGC survival after optic nerve damage. The challenge now is to assess how this wealth of knowledge can be translated into viable therapies for the treatment of glaucoma and other optic neuropathies.", "title": "" }, { "docid": "neg:1840107_5", "text": "In this article, we present the Menpo 2D and Menpo 3D benchmarks, two new datasets for multi-pose 2D and 3D facial landmark localisation and tracking. In contrast to the previous benchmarks such as 300W and 300VW, the proposed benchmarks contain facial images in both semi-frontal and profile pose. We introduce an elaborate semi-automatic methodology for providing high-quality annotations for both the Menpo 2D and Menpo 3D benchmarks. In Menpo 2D benchmark, different visible landmark configurations are designed for semi-frontal and profile faces, thus making the 2D face alignment full-pose. In Menpo 3D benchmark, a united landmark configuration is designed for both semi-frontal and profile faces based on the correspondence with a 3D face model, thus making face alignment not only full-pose but also corresponding to the real-world 3D space. Based on the considerable number of annotated images, we organised Menpo 2D Challenge and Menpo 3D Challenge for face alignment under large pose variations in conjunction with CVPR 2017 and ICCV 2017, respectively. 
The results of these challenges demonstrate that recent deep learning architectures, when trained with the abundant data, lead to excellent results. We also provide a very simple, yet effective solution, named Cascade Multi-view Hourglass Model, to 2D and 3D face alignment. In our method, we take advantage of all 2D and 3D facial landmark annotations in a joint way. We not only capitalise on the correspondences between the semi-frontal and profile 2D facial landmarks but also employ joint supervision from both 2D and 3D facial landmarks. Finally, we discuss future directions on the topic of face alignment.", "title": "" }, { "docid": "neg:1840107_6", "text": "Light-weight antenna arrays require utilizing the same antenna aperture to provide multiple functions (e.g., communications and radar) in separate frequency bands. In this paper, we present a novel antenna element design for a dual-band array, comprising interleaved printed dipoles spaced to avoid grating lobes in each band. The folded dipoles are designed to be resonant at octave-separated frequency bands (1 and 2 GHz), and inkjet-printed on photographic paper. Each dipole is gap-fed by voltage induced electromagnetically from a microstrip line on the other side of the substrate. This nested element configuration shows excellent corroboration between simulated and measured data, with 10-dB return loss bandwidth of at least 5% for each band and interchannel isolation better than 15 dB. The measured element gain is 5.3 to 7 dBi in the two bands, with cross-polarization less than -25 dBi. A large array containing 39 printed dipoles has been fabricated on paper, with each dipole individually fed to facilitate independent beam control. Measurements on the array reveal broadside gain of 12 to 17 dBi in each band with low cross-polarization.", "title": "" }, { "docid": "neg:1840107_7", "text": "After reviewing six senses of abstraction, this article focuses on abstractions that take the form of summary representations. Three central properties of these abstractions are established: ( i ) type-token interpretation; (ii) structured representation; and (iii) dynamic realization. Traditional theories of representation handle interpretation and structure well but are not sufficiently dynamical. Conversely, connectionist theories are exquisitely dynamic but have problems with structure. Perceptual symbol systems offer an approach that implements all three properties naturally. Within this framework, a loose collection of property and relation simulators develops to represent abstractions. Type-token interpretation results from binding a property simulator to a region of a perceived or simulated category member. Structured representation results from binding a configuration of property and relation simulators to multiple regions in an integrated manner. Dynamic realization results from applying different subsets of property and relation simulators to category members on different occasions. From this standpoint, there are no permanent or complete abstractions of a category in memory. Instead, abstraction is the skill to construct temporary online interpretations of a category's members. Although an infinite number of abstractions are possible, attractors develop for habitual approaches to interpretation. 
This approach provides new ways of thinking about abstraction phenomena in categorization, inference, background knowledge and learning.", "title": "" }, { "docid": "neg:1840107_8", "text": "Feature extraction of EEG signals is core issues on EEG based brain mapping analysis. The classification of EEG signals has been performed using features extracted from EEG signals. Many features have proved to be unique enough to use in all brain related medical application. EEG signals can be classified using a set of features like Autoregression, Energy Spectrum Density, Energy Entropy, and Linear Complexity. However, different features show different discriminative power for different subjects or different trials. In this research, two-features are used to improve the performance of EEG signals. Neural Network based techniques are applied to feature extraction of EEG signal. This paper discuss on extracting features based on Average method and Max & Min method of the data set. The Extracted Features are classified using Neural Network Temporal Pattern Recognition Technique. The two methods are compared and performance is analyzed based on the results obtained from the Neural Network classifier.", "title": "" }, { "docid": "neg:1840107_9", "text": "Modern vehicle fleets, e.g., for ridesharing platforms and taxi companies, can reduce passengers' waiting times by proactively dispatching vehicles to locations where pickup requests are anticipated in the future. Yet it is unclear how to best do this: optimal dispatching requires optimizing over several sources of uncertainty, including vehicles' travel times to their dispatched locations, as well as coordinating between vehicles so that they do not attempt to pick up the same passenger. While prior works have developed models for this uncertainty and used them to optimize dispatch policies, in this work we introduce a model-free approach. Specifically, we propose MOVI, a Deep Q-network (DQN)-based framework that directly learns the optimal vehicle dispatch policy. Since DQNs scale poorly with a large number of possible dispatches, we streamline our DQN training and suppose that each individual vehicle independently learns its own optimal policy, ensuring scalability at the cost of less coordination between vehicles. We then formulate a centralized receding-horizon control (RHC) policy to compare with our DQN policies. To compare these policies, we design and build MOVI as a large-scale realistic simulator based on 15 million taxi trip records that simulates policy-agnostic responses to dispatch decisions. We show that the DQN dispatch policy reduces the number of unserviced requests by 76% compared to without dispatch and 20% compared to the RHC approach, emphasizing the benefits of a model-free approach and suggesting that there is limited value to coordinating vehicle actions. This finding may help to explain the success of ridesharing platforms, for which drivers make individual decisions.", "title": "" }, { "docid": "neg:1840107_10", "text": "We study the problem of visualizing large-scale and highdimensional data in a low-dimensional (typically 2D or 3D) space. Much success has been reported recently by techniques that first compute a similarity structure of the data points and then project them into a low-dimensional space with the structure preserved. 
These two steps suffer from considerable computational costs, preventing the state-ofthe-art methods such as the t-SNE from scaling to largescale and high-dimensional data (e.g., millions of data points and hundreds of dimensions). We propose the LargeVis, a technique that first constructs an accurately approximated K-nearest neighbor graph from the data and then layouts the graph in the low-dimensional space. Comparing to tSNE, LargeVis significantly reduces the computational cost of the graph construction step and employs a principled probabilistic model for the visualization step, the objective of which can be effectively optimized through asynchronous stochastic gradient descent with a linear time complexity. The whole procedure thus easily scales to millions of highdimensional data points. Experimental results on real-world data sets demonstrate that the LargeVis outperforms the state-of-the-art methods in both efficiency and effectiveness. The hyper-parameters of LargeVis are also much more stable over different data sets.", "title": "" }, { "docid": "neg:1840107_11", "text": "For over 50 years, electron beams have been an important modality for providing an accurate dose of radiation to superficial cancers and disease and for limiting the dose to underlying normal tissues and structures. This review looks at many of the important contributions of physics and dosimetry to the development and utilization of electron beam therapy, including electron treatment machines, dose specification and calibration, dose measurement, electron transport calculations, treatment and treatment-planning tools, and clinical utilization, including special procedures. Also, future changes in the practice of electron therapy resulting from challenges to its utilization and from potential future technology are discussed.", "title": "" }, { "docid": "neg:1840107_12", "text": "We discuss the temporal-difference learning algorithm, as applied to approximating the cost-to-go function of an infinite-horizon discounted Markov chain, using a function approximator involving linear combinations of fixed basis functions. The algorithm we analyze performs on-line updating of a parameter vector during a single endless trajectory of an ergodic Markov chain with a finite or infinite state space. We present a proof of convergence (with probability 1), a characterization of the limit of convergence, and a bound on the resulting approximation error. In addition to proving new and stronger results than those previously available, our analysis is based on a new line of reasoning that provides new intuition about the dynamics of temporal-difference learning. Finally, we prove that on-line updates, based on entire trajectories of the Markov chain, are in a certain sense necessary for convergence. This fact reconciles positive and negative results that have been discussed in the literature, regarding the soundness of temporal-difference learning.", "title": "" }, { "docid": "neg:1840107_13", "text": "User verification systems that use a single biometric indicator often have to contend with noisy sensor data, restricted degrees of freedom, non-universality of the biometric trait and unacceptable error rates. Attempting to improve the performance of individual matchers in such situations may not prove to be effective because of these inherent problems. Multibiometric systems seek to alleviate some of these drawbacks by providing multiple evidences of the same identity. 
These systems help achieve an increase in performance that may not be possible using a single biometric indicator. Further, multibiometric systems provide anti-spoofing measures by making it difficult for an intruder to spoof multiple biometric traits simultaneously. However, an effective fusion scheme is necessary to combine the information presented by multiple domain experts. This paper addresses the problem of information fusion in biometric verification systems by combining information at the matching score level. Experimental results on combining three biometric modalities (face, fingerprint and hand geometry) are presented.", "title": "" }, { "docid": "neg:1840107_14", "text": "This paper describes the issues and tradeoffs in the design and monolithic implementation of direct-conversion receivers and proposes circuit techniques that can alleviate the drawbacks of this architecture. Following a brief study of heterodyne and image-reject topologies, the direct-conversion architecture is introduced and effects such as dc offset, I=Q mismatch, even-order distortion, flicker noise, and oscillator leakage are analyzed. Related design techniques for amplification and mixing, quadrature phase calibration, and baseband processing are also described.", "title": "" }, { "docid": "neg:1840107_15", "text": "In recent years, the study of lightweight symmetric ciphers has gained interest due to the increasing demand for security services in constrained computing environments, such as in the Internet of Things. However, when there are several algorithms to choose from and different implementation criteria and conditions, it becomes hard to select the most adequate security primitive for a specific application. This paper discusses the hardware implementations of Present, a standardized lightweight cipher called to overcome part of the security issues in extremely constrained environments. The most representative realizations of this cipher are reviewed and two novel designs are presented. Using the same implementation conditions, the two new proposals and three state-of-the-art designs are evaluated and compared, using area, performance, energy, and efficiency as metrics. From this wide experimental evaluation, to the best of our knowledge, new records are obtained in terms of implementation size and energy consumption. In particular, our designs result to be adequate in regards to energy-per-bit and throughput-per-slice.", "title": "" }, { "docid": "neg:1840107_16", "text": "Measuring Semantic Textual Similarity (STS), between words/ terms, sentences, paragraph and document plays an important role in computer science and computational linguistic. It also has many applications over several fields such as Biomedical Informatics and Geoinformation. In this paper, we present a survey on different methods of textual similarity and we also reported about the availability of different software and tools those are useful for STS. In natural language processing (NLP), STS is a important component for many tasks such as document summarization, word sense disambiguation, short answer grading, information retrieval and extraction. We split out the measures for semantic similarity into three broad categories such as (i) Topological/Knowledge-based (ii) Statistical/ Corpus Based (iii) String based. More emphasis is given to the methods related to the WordNet taxonomy. 
Because topological methods, plays an important role to understand intended meaning of an ambiguous word, which is very difficult to process computationally. We also propose a new method for measuring semantic similarity between sentences. This proposed method, uses the advantages of taxonomy methods and merge these information to a language model. It considers the WordNet synsets for lexical relationships between nodes/words and a uni-gram language model is implemented over a large corpus to assign the information content value between the two nodes of different classes.", "title": "" }, { "docid": "neg:1840107_17", "text": "Pseudolymphomatous folliculitis (PLF), which clinically mimicks cutaneous lymphoma, is a rare manifestation of cutaneous pseudolymphoma and cutaneous lymphoid hyperplasia. Here, we report on a 45-year-old Japanese woman with PLF. Dermoscopy findings revealed prominent arborizing vessels with small perifollicular and follicular yellowish spots and follicular red dots. A biopsy specimen also revealed dense lymphocytes, especially CD1a+ cells, infiltrated around the hair follicles. Without any additional treatment, the patient's nodule rapidly decreased. The presented case suggests that typical dermoscopy findings could be a possible supportive tool for the diagnosis of PLF.", "title": "" }, { "docid": "neg:1840107_18", "text": "processes associated with social identity. Group identification, as self-categorization, constructs an intragroup prototypicality gradient that invests the most prototypical member with the appearance of having influence; the appearance arises because members cognitively and behaviorally conform to the prototype. The appearance of influence becomes a reality through depersonalized social attraction processes that makefollowers agree and comply with the leader's ideas and suggestions. Consensual social attraction also imbues the leader with apparent status and creates a status-based structural differentiation within the group into leader(s) and followers, which has characteristics ofunequal status intergroup relations. In addition, afundamental attribution process constructs a charismatic leadership personality for the leader, which further empowers the leader and sharpens the leader-follower status differential. Empirical supportfor the theory is reviewed and a range of implications discussed, including intergroup dimensions, uncertainty reduction and extremism, power, and pitfalls ofprototype-based leadership.", "title": "" }, { "docid": "neg:1840107_19", "text": "Although relatively small in size and power output, automotive accessory motors play a vital role in improving such critical vehicle characteristics as drivability, comfort, and, most importantly, fuel economy. This paper describes a design method and experimental verification of a novel technique for torque ripple reduction in stator claw-pole permanent-magnet (PM) machines, which are a promising technology prospect for automotive accessory motors.", "title": "" } ]
1840108
Advances in Game Accessibility from 2005 to 2010
[ { "docid": "pos:1840108_0", "text": "CopyCat is an American Sign Language (ASL) game, which uses gesture recognition technology to help young deaf children practice ASL skills. We describe a brief history of the game, an overview of recent user studies, and the results of recent work on the problem of continuous, user-independent sign language recognition in classroom settings. Our database of signing samples was collected from user studies of deaf children playing aWizard of Oz version of the game at the Atlanta Area School for the Deaf (AASD). Our data set is characterized by disfluencies inherent in continuous signing, varied user characteristics including clothing and skin tones, and illumination changes in the classroom. The dataset consisted of 541 phrase samples and 1,959 individual sign samples of five children signing game phrases from a 22 word vocabulary. Our recognition approach uses color histogram adaptation for robust hand segmentation and tracking. The children wear small colored gloves with wireless accelerometers mounted on the back of their wrists. The hand shape information is combined with accelerometer data and used to train hidden Markov models for recognition. We evaluated our approach by using leave-one-out validation; this technique iterates through each child, training on data from four children and testing on the remaining child's data. We achieved average word accuracies per child ranging from 91.75% to 73.73% for the user-independent models.", "title": "" } ]
[ { "docid": "neg:1840108_0", "text": "In this paper, we introduce an Iterative Kalman Smoother (IKS) for tracking the 3D motion of a mobile device in real-time using visual and inertial measurements. In contrast to existing Extended Kalman Filter (EKF)-based approaches, smoothing can better approximate the underlying nonlinear system and measurement models by re-linearizing them. Additionally, by iteratively optimizing over all measurements available, the IKS increases the convergence rate of critical parameters (e.g., IMU-camera clock drift) and improves the positioning accuracy during challenging conditions (e.g., scarcity of visual features). Furthermore, and in contrast to existing inverse filters, the proposed IKS's numerical stability allows for efficient 32-bit implementations on resource-constrained devices, such as cell phones and wearables. We validate the IKS for performing vision-aided inertial navigation on Google Glass, a wearable device with limited sensing and processing, and demonstrate positioning accuracy comparable to that achieved on cell phones. To the best of our knowledge, this work presents the first proof-of-concept real-time 3D indoor localization system on a commercial-grade wearable computer.", "title": "" }, { "docid": "neg:1840108_1", "text": "Medical training has traditionally depended on patient contact. However, changes in healthcare delivery coupled with concerns about lack of objectivity or standardization of clinical examinations lead to the introduction of the 'simulated patient' (SP). SPs are now used widely for teaching and assessment purposes. SPs are usually, but not necessarily, lay people who are trained to portray a patient with a specific condition in a realistic way, sometimes in a standardized way (where they give a consistent presentation which does not vary from student to student). SPs can be used for teaching and assessment of consultation and clinical/physical examination skills, in simulated teaching environments or in situ. All SPs play roles but SPs have also been used successfully to give feedback and evaluate student performance. Clearly, given this potential level of involvement in medical training, it is critical to recruit, train and use SPs appropriately. We have provided a detailed overview on how to do so, for both teaching and assessment purposes. The contents include: how to monitor and assess SP performance, both in terms of validity and reliability, and in terms of the impact on the SP; and an overview of the methods, staff costs and routine expenses required for recruiting, administrating and training an SP bank, and finally, we provide some intercultural comparisons, a 'snapshot' of the use of SPs in medical education across Europe and Asia, and briefly discuss some of the areas of SP use which require further research.", "title": "" }, { "docid": "neg:1840108_2", "text": "Commercial clouds bring a great opportunity to the scientific computing area. Scientific applications usually require significant resources, however not all scientists have access to sufficient high-end computing systems. Cloud computing has gained the attention of scientists as a competitive resource to run HPC applications at a potentially lower cost. But as a different infrastructure, it is unclear whether clouds are capable of running scientific applications with a reasonable performance per money spent. This work provides a comprehensive evaluation of EC2 cloud in different aspects. 
We first analyze the potentials of the cloud by evaluating the raw performance of different services of AWS such as compute, memory, network and I/O. Based on the findings on the raw performance, we then evaluate the performance of the scientific applications running in the cloud. Finally, we compare the performance of AWS with a private cloud, in order to find the root cause of its limitations while running scientific applications. This paper aims to assess the ability of the cloud to perform well, as well as to evaluate the cost of the cloud in terms of both raw performance and scientific applications performance. Furthermore, we evaluate other services including S3, EBS and DynamoDB among many AWS services in order to assess the abilities of those to be used by scientific applications and frameworks. We also evaluate a real scientific computing application through the Swift parallel scripting system at scale. Armed with both detailed benchmarks to gauge expected performance and a detailed monetary cost analysis, we expect this paper will be a recipe cookbook for scientists to help them decide where to deploy and run their scientific applications between public clouds, private clouds, or hybrid clouds.", "title": "" }, { "docid": "neg:1840108_3", "text": "As known, fractional CO2 resurfacing treatments are more effective than non-ablative ones against aging signs, but post-operative redness and swelling prolong the overall downtime requiring up to steroid administration in order to reduce these local systems. In the last years, an increasing interest has been focused on the possible use of probiotics for treating inflammatory and allergic conditions suggesting that they can exert profound beneficial effects on skin homeostasis. In this work, the Authors report their experience on fractional CO2 laser resurfacing and provide the results of a new post-operative topical treatment with an experimental cream containing probiotic-derived active principles potentially able to modulate the inflammatory reaction associated to laser-treatment. The cream containing DermaACB (CERABEST™) was administered post-operatively to 42 consecutive patients who were treated with fractional CO2 laser. All patients adopted the cream twice a day for 2 weeks. Grades were given according to outcome scale. The efficacy of the cream containing DermaACB was evaluated comparing the rate of post-operative signs vanishing with a control group of 20 patients topically treated with an antibiotic cream and a hyaluronic acid based cream. Results registered with the experimental treatment were good in 22 patients, moderate in 17, and poor in 3 cases. Patients using the study cream took an average time of 14.3 days for erythema resolution and 9.3 days for swelling vanishing. The post-operative administration of the cream containing DermaACB induces a quicker reduction of post-operative erythema and swelling when compared to a standard treatment.", "title": "" }, { "docid": "neg:1840108_4", "text": "Competition in the wireless telecommunications industry is fierce. To maintain profitability, wireless carriers must control churn, which is the loss of subscribers who switch from one carrier to another.We explore techniques from statistical machine learning to predict churn and, based on these predictions, to determine what incentives should be offered to subscribers to improve retention and maximize profitability to the carrier. The techniques include logit regression, decision trees, neural networks, and boosting. 
Our experiments are based on a database of nearly 47,000 U.S. domestic subscribers and includes information about their usage, billing, credit, application, and complaint history. Our experiments show that under a wide variety of assumptions concerning the cost of intervention and the retention rate resulting from intervention, using predictive techniques to identify potential churners and offering incentives can yield significant savings to a carrier. We also show the importance of a data representation crafted by domain experts. Finally, we report on a real-world test of the techniques that validate our simulation experiments.", "title": "" }, { "docid": "neg:1840108_5", "text": "Task parallelism has increasingly become a trend with programming models such as OpenMP 3.0, Cilk, Java Concurrency, X10, Chapel and Habanero-Java (HJ) to address the requirements of multicore programmers. While task parallelism increases productivity by allowing the programmer to express multiple levels of parallelism, it can also lead to performance degradation due to increased overheads. In this article, we introduce a transformation framework for optimizing task-parallel programs with a focus on task creation and task termination operations. These operations can appear explicitly in constructs such as async, finish in X10 and HJ, task, taskwait in OpenMP 3.0, and spawn, sync in Cilk, or implicitly in composite code statements such as foreach and ateach loops in X10, forall and foreach loops in HJ, and parallel loop in OpenMP.\n Our framework includes a definition of data dependence in task-parallel programs, a happens-before analysis algorithm, and a range of program transformations for optimizing task parallelism. Broadly, our transformations cover three different but interrelated optimizations: (1) finish-elimination, (2) forall-coarsening, and (3) loop-chunking. Finish-elimination removes redundant task termination operations, forall-coarsening replaces expensive task creation and termination operations with more efficient synchronization operations, and loop-chunking extracts useful parallelism from ideal parallelism. All three optimizations are specified in an iterative transformation framework that applies a sequence of relevant transformations until a fixed point is reached. Further, we discuss the impact of exception semantics on the specified transformations, and extend them to handle task-parallel programs with precise exception semantics. Experimental results were obtained for a collection of task-parallel benchmarks on three multicore platforms: a dual-socket 128-thread (16-core) Niagara T2 system, a quad-socket 16-core Intel Xeon SMP, and a quad-socket 32-core Power7 SMP. We have observed that the proposed optimizations interact with each other in a synergistic way, and result in an overall geometric average performance improvement between 6.28× and 10.30×, measured across all three platforms for the benchmarks studied.", "title": "" }, { "docid": "neg:1840108_6", "text": "Gender-affirmation surgery is often the final gender-confirming medical intervention sought by those patients suffering from gender dysphoria. In the male-to-female (MtF) transgendered patient, the creation of esthetic and functional external female genitalia with a functional vaginal channel is of the utmost importance. The aim of this review and meta-analysis is to evaluate the epidemiology, presentation, management, and outcomes of neovaginal complications in the MtF transgender reassignment surgery patients. 
PUBMED was searched in accordance with PRISMA guidelines for relevant articles (n = 125). Ineligible articles were excluded and articles meeting all inclusion criteria went on to review and analysis (n = 13). Ultimately, studies reported on 1,684 patients with an overall complication rate of 32.5% and a reoperation rate of 21.7% for non-esthetic reasons. The most common complication was stenosis of the neo-meatus (14.4%). Wound infection was associated with an increased risk of all tissue-healing complications. Use of sacrospinous ligament fixation (SSL) was associated with a significantly decreased risk of prolapse of the neovagina. Gender-affirmation surgery is important in the treatment of gender dysphoric patients, but there is a high complication rate in the reported literature. Variability in technique and complication reporting standards makes it difficult to assess the accurately the current state of MtF gender reassignment surgery. Further research and implementation of standards is necessary to improve patient outcomes. Clin. Anat. 31:191-199, 2018. © 2017 Wiley Periodicals, Inc.", "title": "" }, { "docid": "neg:1840108_7", "text": "BACKGROUND\nThe rate of bacterial meningitis declined by 55% in the United States in the early 1990s, when the Haemophilus influenzae type b (Hib) conjugate vaccine for infants was introduced. More recent prevention measures such as the pneumococcal conjugate vaccine and universal screening of pregnant women for group B streptococcus (GBS) have further changed the epidemiology of bacterial meningitis.\n\n\nMETHODS\nWe analyzed data on cases of bacterial meningitis reported among residents in eight surveillance areas of the Emerging Infections Programs Network, consisting of approximately 17.4 million persons, during 1998-2007. We defined bacterial meningitis as the presence of H. influenzae, Streptococcus pneumoniae, GBS, Listeria monocytogenes, or Neisseria meningitidis in cerebrospinal fluid or other normally sterile site in association with a clinical diagnosis of meningitis.\n\n\nRESULTS\nWe identified 3188 patients with bacterial meningitis; of 3155 patients for whom outcome data were available, 466 (14.8%) died. The incidence of meningitis changed by -31% (95% confidence interval [CI], -33 to -29) during the surveillance period, from 2.00 cases per 100,000 population (95% CI, 1.85 to 2.15) in 1998-1999 to 1.38 cases per 100,000 population (95% CI 1.27 to 1.50) in 2006-2007. The median age of patients increased from 30.3 years in 1998-1999 to 41.9 years in 2006-2007 (P<0.001 by the Wilcoxon rank-sum test). The case fatality rate did not change significantly: it was 15.7% in 1998-1999 and 14.3% in 2006-2007 (P=0.50). Of the 1670 cases reported during 2003-2007, S. pneumoniae was the predominant infective species (58.0%), followed by GBS (18.1%), N. meningitidis (13.9%), H. influenzae (6.7%), and L. monocytogenes (3.4%). An estimated 4100 cases and 500 deaths from bacterial meningitis occurred annually in the United States during 2003-2007.\n\n\nCONCLUSIONS\nThe rates of bacterial meningitis have decreased since 1998, but the disease still often results in death. With the success of pneumococcal and Hib conjugate vaccines in reducing the risk of meningitis among young children, the burden of bacterial meningitis is now borne more by older adults. 
(Funded by the Emerging Infections Programs, Centers for Disease Control and Prevention.).", "title": "" }, { "docid": "neg:1840108_8", "text": "We present a method to classify images into different categories of pornographic content to create a system for filtering pornographic images from network traffic. Although different systems for this application were presented in the past, most of these systems are based on simple skin colour features and have rather poor performance. Recent advances in the image recognition field in particular for the classification of objects have shown that bag-of-visual-words-approaches are a good method for many image classification problems. The system we present here, is based on this approach, uses a task-specific visual vocabulary and is trained and evaluated on an image database of 8500 images from different categories. It is shown that it clearly outperforms earlier systems on this dataset and further evaluation on two novel web-traffic collections shows the good performance of the proposed system.", "title": "" }, { "docid": "neg:1840108_9", "text": "BACKGROUND\nIn 2010, overweight and obesity were estimated to cause 3·4 million deaths, 3·9% of years of life lost, and 3·8% of disability-adjusted life-years (DALYs) worldwide. The rise in obesity has led to widespread calls for regular monitoring of changes in overweight and obesity prevalence in all populations. Comparable, up-to-date information about levels and trends is essential to quantify population health effects and to prompt decision makers to prioritise action. We estimate the global, regional, and national prevalence of overweight and obesity in children and adults during 1980-2013.\n\n\nMETHODS\nWe systematically identified surveys, reports, and published studies (n=1769) that included data for height and weight, both through physical measurements and self-reports. We used mixed effects linear regression to correct for bias in self-reports. We obtained data for prevalence of obesity and overweight by age, sex, country, and year (n=19,244) with a spatiotemporal Gaussian process regression model to estimate prevalence with 95% uncertainty intervals (UIs).\n\n\nFINDINGS\nWorldwide, the proportion of adults with a body-mass index (BMI) of 25 kg/m(2) or greater increased between 1980 and 2013 from 28·8% (95% UI 28·4-29·3) to 36·9% (36·3-37·4) in men, and from 29·8% (29·3-30·2) to 38·0% (37·5-38·5) in women. Prevalence has increased substantially in children and adolescents in developed countries; 23·8% (22·9-24·7) of boys and 22·6% (21·7-23·6) of girls were overweight or obese in 2013. The prevalence of overweight and obesity has also increased in children and adolescents in developing countries, from 8·1% (7·7-8·6) to 12·9% (12·3-13·5) in 2013 for boys and from 8·4% (8·1-8·8) to 13·4% (13·0-13·9) in girls. In adults, estimated prevalence of obesity exceeded 50% in men in Tonga and in women in Kuwait, Kiribati, Federated States of Micronesia, Libya, Qatar, Tonga, and Samoa. Since 2006, the increase in adult obesity in developed countries has slowed down.\n\n\nINTERPRETATION\nBecause of the established health risks and substantial increases in prevalence, obesity has become a major global health challenge. Not only is obesity increasing, but no national success stories have been reported in the past 33 years. 
Urgent global action and leadership is needed to help countries to more effectively intervene.\n\n\nFUNDING\nBill & Melinda Gates Foundation.", "title": "" }, { "docid": "neg:1840108_10", "text": "In this paper we propose a replacement algorithm, SF-LRU (second chance-frequency - least recently used) that combines the LRU (least recently used) and the LFU (least frequently used) using the second chance concept. A comprehensive comparison is made between our algorithm and both LRU and LFU algorithms. Experimental results show that the SF-LRU significantly reduces the number of cache misses compared the other two algorithms. Simulation results show that our algorithm can provide a maximum value of approximately 6.3% improvement in the miss ratio over the LRU algorithm in data cache and approximately 9.3% improvement in miss ratio in instruction cache. This performance improvement is attributed to the fact that our algorithm provides a second chance to the block that may be deleted according to LRU's rules. This is done by comparing the frequency of the block with the block next to it in the set.", "title": "" }, { "docid": "neg:1840108_11", "text": "OBJECTIVE\nTo describe and discuss the process used to write a narrative review of the literature for publication in a peer-reviewed journal. Publication of narrative overviews of the literature should be standardized to increase their objectivity.\n\n\nBACKGROUND\nIn the past decade numerous changes in research methodology pertaining to reviews of the literature have occurred. These changes necessitate authors of review articles to be familiar with current standards in the publication process.\n\n\nMETHODS\nNarrative overview of the literature synthesizing the findings of literature retrieved from searches of computerized databases, hand searches, and authoritative texts.\n\n\nDISCUSSION\nAn overview of the use of three types of reviews of the literature is presented. Step by step instructions for how to conduct and write a narrative overview utilizing a 'best-evidence synthesis' approach are discussed, starting with appropriate preparatory work and ending with how to create proper illustrations. Several resources for creating reviews of the literature are presented and a narrative overview critical appraisal worksheet is included. A bibliography of other useful reading is presented in an appendix.\n\n\nCONCLUSION\nNarrative overviews can be a valuable contribution to the literature if prepared properly. New and experienced authors wishing to write a narrative overview should find this article useful in constructing such a paper and carrying out the research process. It is hoped that this article will stimulate scholarly dialog amongst colleagues about this research design and other complex literature review methods.", "title": "" }, { "docid": "neg:1840108_12", "text": "Network intrusion detection systems have become a standard component in security infrastructures. Unfortunately, current systems are poor at detecting novel attacks without an unacceptable level of false alarms. We propose that the solution to this problem is the application of an ensemble of data mining techniques which can be applied to network connection data in an offline environment, augmenting existing real-time sensors. In this paper, we expand on our motivation, particularly with regard to running in an offline environment, and our interest in multisensor and multimethod correlation. 
We then review existing systems, from commercial systems, to research based intrusion detection systems. Next we survey the state of the art in the area. Standard datasets and feature extraction turned out to be more important than we had initially anticipated, so each can be found under its own heading. Next, we review the actual data mining methods that have been proposed or implemented. We conclude by summarizing the open problems in this area and proposing a new research project to answer some of these open problems.", "title": "" }, { "docid": "neg:1840108_13", "text": "This article reviews some of the criticisms directed towards the eclectic paradigm of international production over the past decade, and restates its main tenets. The second part of the article considers a number of possible extensions of the paradigm and concludes by asserting that it remains \"a robust general framework for explaining and analysing not only the economic rationale of economic production but many organisational nd impact issues in relation to MNE activity as well.\"", "title": "" }, { "docid": "neg:1840108_14", "text": "A wavelet scattering network computes a translation invariant image representation which is stable to deformations and preserves high-frequency information for classification. It cascades wavelet transform convolutions with nonlinear modulus and averaging operators. The first network layer outputs SIFT-type descriptors, whereas the next layers provide complementary invariant information that improves classification. The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having the same Fourier power spectrum. State-of-the-art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier.", "title": "" }, { "docid": "neg:1840108_15", "text": "How to effectively manage increasingly complex enterprise computing environments is one of the hardest challenges that most organizations have to face in the era of cloud computing, big data and IoT. Advanced automation and orchestration systems are the most valuable solutions helping IT staff to handle large-scale cloud data centers. Containers are the new revolution in the cloud computing world, they are more lightweight than VMs, and can radically decrease both the start up time of instances and the processing and storage overhead with respect to traditional VMs. The aim of this paper is to provide a comprehensive description of cloud orchestration approaches with containers, analyzing current research efforts, existing solutions and presenting issues and challenges facing this topic.", "title": "" }, { "docid": "neg:1840108_16", "text": "Deep learning describes a class of machine learning algorithms that are capable of combining raw inputs into layers of intermediate features. These algorithms have recently shown impressive results across a variety of domains. Biology and medicine are data-rich disciplines, but the data are complex and often ill-understood. Hence, deep learning techniques may be particularly well suited to solve problems of these fields. 
We examine applications of deep learning to a variety of biomedical problems-patient classification, fundamental biological processes and treatment of patients-and discuss whether deep learning will be able to transform these tasks or if the biomedical sphere poses unique challenges. Following from an extensive literature review, we find that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art. Even though improvements over previous baselines have been modest in general, the recent progress indicates that deep learning methods will provide valuable means for speeding up or aiding human investigation. Though progress has been made linking a specific neural network's prediction to input features, understanding how users should interpret these models to make testable hypotheses about the system under study remains an open challenge. Furthermore, the limited amount of labelled data for training presents problems in some domains, as do legal and privacy constraints on work with sensitive health records. Nonetheless, we foresee deep learning enabling changes at both bench and bedside with the potential to transform several areas of biology and medicine.", "title": "" }, { "docid": "neg:1840108_17", "text": "Extractive summarization is the strategy of concatenating extracts taken from a corpus into a summary, while abstractive summarization involves paraphrasing the corpus using novel sentences. We define a novel measure of corpus controversiality of opinions contained in evaluative text, and report the results of a user study comparing extractive and NLG-based abstractive summarization at different levels of controversiality. While the abstractive summarizer performs better overall, the results suggest that the margin by which abstraction outperforms extraction is greater when controversiality is high, providing aion outperforms extraction is greater when controversiality is high, providing a context in which the need for generationbased methods is especially great.", "title": "" }, { "docid": "neg:1840108_18", "text": "Loan fraud is a critical factor in the insolvency of financial institutions, so companies make an effort to reduce the loss from fraud by building a model for proactive fraud prediction. However, there are still two critical problems to be resolved for the fraud detection: (1) the lack of cost sensitivity between type I error and type II error in most prediction models, and (2) highly skewed distribution of class in the dataset used for fraud detection because of sparse fraud-related data. The objective of this paper is to examine whether classification cost is affected both by the cost-sensitive approach and by skewed distribution of class. To that end, we compare the classification cost incurred by a traditional cost-insensitive classification approach and two cost-sensitive classification approaches, Cost-Sensitive Classifier (CSC) and MetaCost. Experiments were conducted with a credit loan dataset from a major financial institution in Korea, while varying the distribution of class in the dataset and the number of input variables. The experiments showed that the lowest classification cost was incurred when the MetaCost approach was used and when non-fraud data and fraud data were balanced. In addition, the dataset that includes all delinquency variables was shown to be most effective on reducing the classification cost. 2011 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "neg:1840108_19", "text": "Automatic spelling and grammatical correction systems are one of the most widely used tools within natural language applications. In this thesis, we assume the task of error correction as a type of monolingual machine translation where the source sentence is potentially erroneous and the target sentence should be the corrected form of the input. Our main focus in this project is building neural network models for the task of error correction. In particular, we investigate sequence-to-sequence and attention-based models which have recently shown a higher performance than the state-of-the-art of many language processing problems. We demonstrate that neural machine translation models can be successfully applied to the task of error correction. While the experiments of this research are performed on an Arabic corpus, our methods in this thesis can be easily applied to any language. Keywords— natural language error correction, recurrent neural networks, encoderdecoder models, attention mechanism", "title": "" } ]
1840109
A Distributed Sensor Data Search Platform for Internet of Things Environments
[ { "docid": "pos:1840109_0", "text": "The multiple criteria decision making (MCDM) methods VIKOR and TOPSIS are based on an aggregating function representing “closeness to the idealâ€​, which originated in the compromise programming method. In VIKOR linear normalization and in TOPSIS vector normalization is used to eliminate the units of criterion functions. The VIKOR method of compromise ranking determines a compromise solution, providing a maximum “group utilityâ€​ for the “majorityâ€​ and a minimum of an individual regret for the “opponentâ€​. The TOPSIS method determines a solution with the shortest distance to the ideal solution and the greatest distance from the negative-ideal solution, but it does not consider the relative importance of these distances. A comparative analysis of these two methods is illustrated with a numerical example, showing their similarity and some differences. a, 1 b Purchase Export Previous article Next article Check if you have access through your login credentials or your institution.", "title": "" } ]
[ { "docid": "neg:1840109_0", "text": "We review Bacry and Lévy-Leblond’s work on possible kinematics as applied to 2-dimensional spacetimes, as well as the nine types of 2-dimensional Cayley–Klein geometries, illustrating how the Cayley–Klein geometries give homogeneous spacetimes for all but one of the kinematical groups. We then construct a two-parameter family of Clifford algebras that give a unified framework for representing both the Lie algebras as well as the kinematical groups, showing that these groups are true rotation groups. In addition we give conformal models for these spacetimes.", "title": "" }, { "docid": "neg:1840109_1", "text": "In this paper, we present madmom, an open-source audio processing and music information retrieval (MIR) library written in Python. madmom features a concise, NumPy-compatible, object oriented design with simple calling conventions and sensible default values for all parameters, which facilitates fast prototyping of MIR applications. Prototypes can be seamlessly converted into callable processing pipelines through madmom's concept of Processors, callable objects that run transparently on multiple cores. Processors can also be serialised, saved, and re-run to allow results to be easily reproduced anywhere. Apart from low-level audio processing, madmom puts emphasis on musically meaningful high-level features. Many of these incorporate machine learning techniques and madmom provides a module that implements some methods commonly used in MIR such as hidden Markov models and neural networks. Additionally, madmom comes with several state-of-the-art MIR algorithms for onset detection, beat, downbeat and meter tracking, tempo estimation, and chord recognition. These can easily be incorporated into bigger MIR systems or run as stand-alone programs.", "title": "" }, { "docid": "neg:1840109_2", "text": "We propose a method to procedurally generate a familiar yet complex human artifact: the city. We are not trying to reproduce existing cities, but to generate artificial cities that are convincing and plausible by capturing developmental behavior. In addition, our results are meant to build upon themselves, such that they ought to look compelling at any point along the transition from village to metropolis. Our approach largely focuses upon land usage and building distribution for creating realistic city environments, whereas previous attempts at city modeling have mainly focused on populating road networks. Finally, we want our model to be self automated to the point that the only necessary input is a terrain description, but other high-level and low-level parameters can be specified to support artistic contributions. With the aid of agent based simulation we are generating a system of agents and behaviors that interact with one another through their effects upon a simulated environment. Our philosophy is that as each agent follows a simple behavioral rule set, a more complex behavior will tend to emerge out of the interactions between the agents and their differing rule sets. By confining our model to a set of simple rules for each class of agents, we hope to make our model extendible not only in regard to the types of structures that are produced, but also in describing the social and cultural influences prevalent in all cities.", "title": "" }, { "docid": "neg:1840109_3", "text": "This paper details a methodology for using structured light laser imaging to create high resolution bathymetric maps of the sea floor. 
The system includes a pair of stereo cameras and an inclined 532nm sheet laser mounted to a remotely operated vehicle (ROV). While a structured light system generally requires a single camera, a stereo vision set up is used here for in-situ calibration of the laser system geometry by triangulating points on the laser line. This allows for quick calibration at the survey site and does not require precise jigs or a controlled environment. A batch procedure to extract the laser line from the images to sub-pixel accuracy is also presented. The method is robust to variations in image quality and moderate amounts of water column turbidity. The final maps are constructed using a reformulation of a previous bathymetric Simultaneous Localization and Mapping (SLAM) algorithm called incremental Smoothing and Mapping (iSAM). The iSAM framework is adapted from previous applications to perform sub-mapping, where segments of previously visited terrain are registered to create relative pose constraints. The resulting maps can be gridded at one centimeter and have significantly higher sample density than similar surveys using high frequency multibeam sonar or stereo vision. Results are presented for sample surveys at a submerged archaeological site and sea floor rock outcrop.", "title": "" }, { "docid": "neg:1840109_4", "text": "The present article describes VAS Generator (www.vasgenerator.net), a free Web service for creating a wide range of visual analogue scales that can be used as measurement devices in Web surveys and Web experimentation, as well as for local computerized assessment. A step-by-step example for creating and implementing a visual analogue scale with visual feedback is given. VAS Generator and the scales it generates work independently of platforms and use the underlying languages HTML and JavaScript. Results from a validation study with 355 participants are reported and show that the scales generated with VAS Generator approximate an interval-scale level. In light of previous research on visual analogue versus categorical (e.g., radio button) scales in Internet-based research, we conclude that categorical scales only reach ordinal-scale level, and thus visual analogue scales are to be preferred whenever possible.", "title": "" }, { "docid": "neg:1840109_5", "text": "While our knowledge about ancient civilizations comes mostly from studies in archaeology and history books, much can also be learned or confirmed from literary texts . Using natural language processing techniques, we present aspects of ancient China as revealed by statistical textual analysis on the Complete Tang Poems , a 2.6-million-character corpus of all surviving poems from the Tang Dynasty (AD 618 —907). Using an automatically created treebank of this corpus , we outline the semantic profiles of various poets, and discuss the role of s easons, geography, history, architecture, and colours , as observed through word selection and dependencies.", "title": "" }, { "docid": "neg:1840109_6", "text": "This paper proposed a 4-channel parallel 40 Gb/s front-end amplifier (FEA) in optical receiver for parallel optical transmission system. A novel enhancement type regulated cascade (ETRGC) configuration with an active inductor is originated in this paper for the transimpedance amplifier to significantly increase the bandwidth. The technique of three-order interleaving active feedback expands the bandwidth of the gain stage of transimpedance amplifier and limiting amplifier. 
Experimental results show that the output swing is 210 mV (Vpp) when the input voltage varies from 5 mV to 500 mV. The power consumption of the 4-channel parallel 40 Gb/s front-end amplifier (FEA) is 370 mW with 1.8 V power supply and the chip area is 650 μm×1300 μm.", "title": "" }, { "docid": "neg:1840109_7", "text": "In this paper, a novel chemical sensor system utilizing an Ion-Sensitive Field Effect Transistor (ISFET) for pH measurement is presented. Compared to other interface circuits, this system uses auto-zero amplifiers with a pingpong control scheme and array of Programmable-Gate Ion-Sensitive Field Effect Transistor (PG-ISFET). By feedback controlling the programable gates of ISFETs, the intrinsic sensor offset can be compensated for uniformly. Furthermore the chemical signal sensitivity can be enhanced due to the feedback system on the sensing node. A pingpong structure and operation protocol has been developed to realize the circuit, reducing the error and achieve continuous measurement. This system has been designed and fabricated in AMS 0.35µm, to compensate for a threshold voltage variation of ±5V and enhance the pH sensitivity to 100mV/pH.", "title": "" }, { "docid": "neg:1840109_8", "text": "A new wideband circularly polarized antenna using metasurface superstrate for C-band satellite communication application is proposed in this letter. The proposed antenna consists of a planar slot coupling antenna with an array of metallic rectangular patches that can be viewed as a polarization-dependent metasurface superstrate. The metasurface is utilized to adjust axial ratio (AR) for wideband circular polarization. Furthermore, the proposed antenna has a compact structure with a low profile of 0.07λ0 ( λ0 stands for the free-space wavelength at 5.25 GHz) and ground size of 34.5×28 mm2. Measured results show that the -10-dB impedance bandwidth for the proposed antenna is 33.7% from 4.2 to 5.9 GHz, and 3-dB AR bandwidth is 16.5% from 4.9 to 5.9 GHz with an average gain of 5.8 dBi. The simulated and measured results are in good agreement to verify the good performance of the proposed antenna.", "title": "" }, { "docid": "neg:1840109_9", "text": "This paper proposes a robust background model-based dense-visual-odometry (BaMVO) algorithm that uses an RGB-D sensor in a dynamic environment. The proposed algorithm estimates the background model represented by the nonparametric model from depth scenes and then estimates the ego-motion of the sensor using the energy-based dense-visual-odometry approach based on the estimated background model in order to consider moving objects. Experimental results demonstrate that the ego-motion is robustly obtained by BaMVO in a dynamic environment.", "title": "" }, { "docid": "neg:1840109_10", "text": "Natural language makes considerable use of recurrent formulaic patterns of words. This article triangulates the construct of formula from corpus linguistic, psycholinguistic, and educational perspectives. It describes the corpus linguistic extraction of pedagogically useful formulaic sequences for academic speech and writing. It determines English as a second language (ESL) and English for academic purposes (EAP) instructors’ evaluations of their pedagogical importance. It summarizes three experiments which show that different aspects of formulaicity affect the accuracy and fluency of processing of these formulas in native speakers and in advanced L2 learners of English. 
The language processing tasks were selected to sample an ecologically valid range of language processing skills: spoken and written, production and comprehension. Processing in all experiments was affected by various corpus-derived metrics: length, frequency, and mutual information (MI), but to different degrees in the different populations. For native speakers, it is predominantly the MI of the formula which determines processability; for nonnative learners of the language, it is predominantly the frequency of the formula. The implications of these findings are discussed for (a) the psycholinguistic validity of corpus-derived formulas, (b) a model of their acquisition, (c) ESL and EAP instruction and the prioritization of which formulas to teach.", "title": "" }, { "docid": "neg:1840109_11", "text": "Truck platooning for which multiple trucks follow at a short distance is considered a near-term truck automation opportunity, with the potential to reduce fuel consumption. Short following distances and increasing automation make it hard for a driver to be the backup if the system fails. The EcoTwin consortium successfully demonstrated a two truck platooning system with trucks following at 20 meters distance at the public road, in which the driver is the backup. The ambition of the consortium is to increase the truck automation and to reduce the following distance, which requires a new fail-operational truck platooning architecture. This paper presents a level 2+ platooning system architecture, which is fail-operational for a single failure, and the corresponding process to obtain it. First insights in the existing two truck platooning system are obtained by analyzing its key aspects, being utilization, latency, reliability, and safety. Using these insights, candidate level 2+ platooning system architectures are defined from which the most suitable truck platooning architecture is selected. Future work is the design and implementation of a prototype, based on the presented level 2+ platooning system architecture.", "title": "" }, { "docid": "neg:1840109_12", "text": "Automotive Safety Integrity Level (ASIL) decomposition is a technique presented in the ISO 26262: Road Vehicles Functional Safety standard. Its purpose is to satisfy safety-critical requirements by decomposing them into less critical ones. This procedure requires a system-level validation, and the elements of the architecture to which the decomposed requirements are allocated must be analyzed in terms of Common-Cause Faults (CCF). In this work, we present a generic method for a bottomup ASIL decomposition, which can be used during the development of a new product. The system architecture is described in a three-layer model, from which fault trees are generated, formed by the application, resource, and physical layers and their mappings. A CCF analysis is performed on the fault trees to verify the absence of possible common faults between the redundant elements and to validate the ASIL decomposition.", "title": "" }, { "docid": "neg:1840109_13", "text": "Recent increased regulatory scrutiny concerning subvisible particulates (SbVPs) in parenteral formulations of biologics has led to the publication of numerous articles about the sources, characteristics, implications, and approaches to monitoring and detecting SbVPs. Despite varying opinions on the level of associated risks and method of regulation, nearly all industry scientists and regulators agree on the need for monitoring and reporting visible and subvisible particles. 
As prefillable drug delivery systems have become a prominent packaging option, silicone oil, a common primary packaging lubricant, may play a role in the appearance of particles. The goal of this article is to complement the current SbVP knowledge base with new insights into the evolution of silicone-oil-related particulates and their interactions with components in prefillable systems. We propose a \"toolbox\" for improved silicone-oil-related particulate detection and enumeration, and discuss the benefits and limitations of approaches for lowering and controlling silicone oil release in parenterals. Finally, we present surface cross-linking of silicone as the recommended solution for achieving significant SbVP reduction without negatively affecting functional performance.", "title": "" }, { "docid": "neg:1840109_14", "text": "This paper presents a new paradigm of cryptography, quantum public-key cryptosystems. In quantum public-key cryptosystems, all parties including senders, receivers and adversaries are modeled as quantum (probabilistic) poly-time Turing (QPT) machines and only classical channels (i.e., no quantum channels) are employed. A quantum trapdoor one-way function, f , plays an essential role in our system, in which a QPT machine can compute f with high probability, any QPT machine can invert f with negligible probability, and a QPT machine with trapdoor data can invert f . This paper proposes a concrete scheme for quantum public-key cryptosystems: a quantum public-key encryption scheme or quantum trapdoor one-way function. The security of our schemes is based on the computational assumption (over QPT machines) that a class of subset-sum problems is intractable against any QPT machine. Our scheme is very efficient and practical if Shor’s discrete logarithm algorithm is efficiently realized on a quantum machine.", "title": "" }, { "docid": "neg:1840109_15", "text": "In this paper, we present results on the implementation of a hierarchical quaternion based attitude and trajectory controller for manual and autonomous flights of quadrotors. Unlike previous papers on using quaternion representation, we use the nonlinear complementary filter that estimates the attitude in quaternions and as such does not involve Euler angles or rotation matrices. We show that for precise trajectory tracking, the resulting attitude error dynamics of the system is non-autonomous and is almost globally asymptotically and locally exponentially stable under the proposed control law. We also show local exponential stability of the translational dynamics under the proposed trajectory tracking controller which sits at the highest level of the hierarchy. Thus by input-to-state stability, the entire system is locally exponentially stable. The quaternion based observer and controllers are available as open-source.", "title": "" }, { "docid": "neg:1840109_16", "text": "The levels of pregnenolone, dehydroepiandrosterone (DHA), androstenedione, testosterone, dihydrotestosterone (DHT), oestrone, oestradiol, cortisol and luteinizing hormone (LH) were measured in the peripheral plasma of a group of young, apparently healthy males before and after masturbation. The same steroids were also determined in a control study, in which the psychological antipation of masturbation was encouraged, but the physical act was not carried out. The plasma levels of all steroids were significantly increased after masturbation, whereas steroid levels remained unchanged in the control study. 
The most marked changes after masturbation were observed in pregnenolone and DHA levels. No alterations were observed in the plasma levels of LH. Both before and after masturbation plasma levels of testosterone were significantly correlated to those of DHT and oestradiol, but not to those of the other steroids studied. On the other hand, cortisol levels were significantly correlated to those of pregnenolone, DHA, androstenedione and oestrone. In the same subjects, the levels of pregnenolone, DHA, androstenedione, testosterone and DHT in seminal plasma were also estimated; they were all significantly correlated to the levels of the corresponding steroid in the systemic blood withdrawn both before and after masturbation. As a practical consequence, the results indicate that whenever both blood and semen are analysed, blood sampling must precede semen collection.", "title": "" }, { "docid": "neg:1840109_17", "text": "This inaugural article has a twofold purpose: (i) to present a simpler and more general justification of the fundamental scaling laws of quasibrittle fracture, bridging the asymptotic behaviors of plasticity, linear elastic fracture mechanics, and Weibull statistical theory of brittle failure, and (ii) to give a broad but succinct overview of various applications and ramifications covering many fields, many kinds of quasibrittle materials, and many scales (from 10(-8) to 10(6) m). The justification rests on developing a method to combine dimensional analysis of cohesive fracture with second-order accurate asymptotic matching. This method exploits the recently established general asymptotic properties of the cohesive crack model and nonlocal Weibull statistical model. The key idea is to select the dimensionless variables in such a way that, in each asymptotic case, all of them vanish except one. The minimal nature of the hypotheses made explains the surprisingly broad applicability of the scaling laws.", "title": "" }, { "docid": "neg:1840109_18", "text": "The success of IT project related to numerous factors. It had an important significance to find the critical factors for the success of project. Based on the general analysis of IT project management, this paper analyzed some factors of project management for successful IT project from the angle of modern project management. These factors include project participators, project communication, collaboration, and information sharing mechanism as well as project management process. In the end, it analyzed the function of each factor for a successful IT project. On behalf of the collective goal, by the use of the favorable project communication and collaboration, the project participants carry out successfully to the management of the process, which is significant to the project, and make project achieve success eventually.", "title": "" }, { "docid": "neg:1840109_19", "text": "The purpose of text clustering in information retrieval is to discover groups of semantically related documents. Accurate and comprehensible cluster descriptions (labels) let the user comprehend the collection’s content faster and are essential for various document browsing interfaces. The task of creating descriptive, sensible cluster labels is difficult—typical text clustering algorithms focus on optimizing proximity between documents inside a cluster and rely on keyword representation for describing discovered clusters. 
In the approach called Description Comes First (DCF) cluster labels are as important as document groups—DCF promotes machine discovery of comprehensible candidate cluster labels later used to discover related document groups. In this paper we describe an application of DCF to the k-Means algorithm, including results of experiments performed on the 20-newsgroups document collection. Experimental evaluation showed that DCF does not decrease the metrics used to assess the quality of document assignment and offers good cluster labels in return. The algorithm utilizes search engine’s data structures directly to scale to large document collections. Introduction Organizing unstructured collections of textual content into semantically related groups, from now on referred to as text clustering or clustering, provides unique ways of digesting large amounts of information. In the context of information retrieval and text mining, a general definition of clustering is the following: given a large set of documents, automatically discover diverse subsets of documents that share a similar topic. In typical applications input documents are first transformed into a mathematical model where each document is described by certain features. The most popular representation for text is the vector space model [Salton, 1989]. In the VSM, documents are expressed as rows in a matrix, where columns represent unique terms (features) and the intersection of a column and a row indicates the importance of a given word to the document. A model such as the VSM helps in calculation of similarity between documents (angle between document vectors) and thus facilitates application of various known (or modified) numerical clustering algorithms. While this is sufficient for many applications, problems arise when one needs to construct some representation of the discovered groups of documents—a label, a symbolic description for each cluster, something to represent the information that makes documents inside a cluster similar to each other and that would convey this information to the user. Cluster labeling problems are often present in modern text and Web mining applications with document browsing interfaces. The process of returning from the mathematical model of clusters to comprehensible, explanatory labels is difficult because text representation used for clustering rarely preserves the inflection and syntax of the original text. Clustering algorithms presented in literature usually fall back to the simplest form of cluster representation—a list of cluster’s keywords (most “central” terms in the cluster). Unfortunately, keywords are stripped from syntactical information and force the user to manually find the underlying concept which is often confusing. Motivation and Related Works The user of a retrieval system judges the clustering algorithm by what he sees in the output— clusters’ descriptions, not the final model which is usually incomprehensible for humans. The experiences with the text clustering framework Carrot (www.carrot2.org) resulted in posing a slightly different research problem (aligned with clustering but not exactly the same). We shifted the emphasis of a clustering method to providing comprehensible and accurate cluster labels in addition to discovery of document groups. We call this problem descriptive clustering: discovery of diverse groups of semantically related documents associated with a meaningful, comprehensible and compact text labels. 
This definition obviously leaves a great deal of freedom for interpretation because terms such as meaningful or accurate are very vague. We narrowed the set of requirements of descriptive clustering to the following ones: — comprehensibility understood as grammatical correctness (word order, inflection, agreement between words if applicable); — conciseness of labels. Phrases selected for a cluster label should minimize its total length (without sacrificing its comprehensibility); — transparency of the relationship between cluster label and cluster content, best explained by ability to answer questions as: “Why was this label selected for these documents?” and “Why is this document in a cluster labeled X?”. Little research has been done to address the requirements above. In the STC algorithm authors employed frequently recurring phrases as both document similarity feature and final cluster description [Zamir and Etzioni, 1999]. A follow-up work [Ferragina and Gulli, 2004] showed how to avoid certain STC limitations and use non-contiguous phrases (so-called approximate sentences). A different idea of ‘label-driven’ clustering appeared in clustering with committees algorithm [Pantel and Lin, 2002], where strongly associated terms related to unambiguous concepts were evaluated using semantic relationships from WordNet. We introduced the DCF approach in our previous work [Osiński and Weiss, 2005] and showed its feasibility using an algorithm called Lingo. Lingo used singular value decomposition of the term-document matrix to select good cluster labels among candidates extracted from the text (frequent phrases). The algorithm was designed to cluster results from Web search engines (short snippets and fragmented descriptions of original documents) and proved to provide diverse meaningful cluster labels. Lingo’s weak point is its limited scalability to full or even medium sized documents. In this", "title": "" } ]
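Each record in this dump pairs a query id and query string with two lists of passages; judging by the "pos:" and "neg:" prefixes in the docid fields, one list holds relevant passages and the other non-relevant ones, and every passage carries "docid", "text", and "title" fields. The sketch below is a minimal, hypothetical example of turning one such record into (query, positive, negative) training triples. The top-level key names, the JSONL filename, and the assumption that each record sits on its own line are illustrative assumptions, not details taken from this dump.

```python
import json

# Assumed record layout, inferred from the rows in this dump: a query id, a query
# string, and two passage lists whose docids start with "pos:" and "neg:".
# The key names ("query_id", "query", "positive_passages", "negative_passages")
# and the filename below are assumptions for illustration only.

def triples_from_record(record):
    """Yield (query, positive_text, negative_text) triples from one record."""
    query = record["query"]
    for pos in record.get("positive_passages", []):
        for neg in record.get("negative_passages", []):
            yield query, pos["text"], neg["text"]

if __name__ == "__main__":
    with open("relevance_pairs.jsonl", encoding="utf-8") as f:  # hypothetical export
        for line in f:
            record = json.loads(line)
            for query, pos_text, neg_text in triples_from_record(record):
                print(record["query_id"], query[:40], pos_text[:40], neg_text[:40])
```

Triples produced this way are the usual input for training or evaluating a retrieval model, with each positive passage contrasted against every negative for the same query.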
1840110
Automatic nonverbal behavior indicators of depression and PTSD: the effect of gender
[ { "docid": "pos:1840110_0", "text": "Depression is a typical mood disorder, and the persons who are often in this state face the risk in mental and even physical problems. In recent years, there has therefore been increasing attention in machine based depression analysis. In such a low mood, both the facial expression and voice of human beings appear different from the ones in normal states. This paper presents a novel method, which comprehensively models visual and vocal modalities, and automatically predicts the scale of depression. On one hand, Motion History Histogram (MHH) extracts the dynamics from corresponding video and audio data to represent characteristics of subtle changes in facial and vocal expression of depression. On the other hand, for each modality, the Partial Least Square (PLS) regression algorithm is applied to learn the relationship between the dynamic features and depression scales using training data, and then predict the depression scale for an unseen one. Predicted values of visual and vocal clues are further combined at decision level for final decision. The proposed approach is evaluated on the AVEC2013 dataset and experimental results clearly highlight its effectiveness and better performance than baseline results provided by the AVEC2013 challenge organiser.", "title": "" } ]
[ { "docid": "neg:1840110_0", "text": "Soft robot arms possess unique capabilities when it comes to adaptability, flexibility, and dexterity. In addition, soft systems that are pneumatically actuated can claim high power-to-weight ratio. One of the main drawbacks of pneumatically actuated soft arms is that their stiffness cannot be varied independently from their end-effector position in space. The novel robot arm physical design presented in this article successfully decouples its end-effector positioning from its stiffness. An experimental characterization of this ability is coupled with a mathematical analysis. The arm combines the light weight, high payload to weight ratio and robustness of pneumatic actuation with the adaptability and versatility of variable stiffness. Light weight is a vital component of the inherent safety approach to physical human-robot interaction. To characterize the arm, a neural network analysis of the curvature of the arm for different input pressures is performed. The curvature-pressure relationship is also characterized experimentally.", "title": "" }, { "docid": "neg:1840110_1", "text": "Our lives are heavily influenced by persuasive communication, and it is essential in almost any types of social interactions from business negotiation to conversation with our friends and family. With the rapid growth of social multimedia websites, it is becoming ever more important and useful to understand persuasiveness in the context of social multimedia content online. In this paper, we introduce our newly created multimedia corpus of 1,000 movie review videos obtained from a social multimedia website called ExpoTV.com, which will be made freely available to the research community. Our research results presented here revolve around the following 3 main research hypotheses. Firstly, we show that computational descriptors derived from verbal and nonverbal behavior can be predictive of persuasiveness. We further show that combining descriptors from multiple communication modalities (audio, text and visual) improve the prediction performance compared to using those from single modality alone. Secondly, we investigate if having prior knowledge of a speaker expressing a positive or negative opinion helps better predict the speaker's persuasiveness. Lastly, we show that it is possible to make comparable prediction of persuasiveness by only looking at thin slices (shorter time windows) of a speaker's behavior.", "title": "" }, { "docid": "neg:1840110_2", "text": "Networks are a fundamental tool for understanding and modeling complex systems in physics, biology, neuroscience, engineering, and social science. Many networks are known to exhibit rich, lower-order connectivity patterns that can be captured at the level of individual nodes and edges. However, higher-order organization of complex networks -- at the level of small network subgraphs -- remains largely unknown. Here, we develop a generalized framework for clustering networks on the basis of higher-order connectivity patterns. This framework provides mathematical guarantees on the optimality of obtained clusters and scales to networks with billions of edges. The framework reveals higher-order organization in a number of networks, including information propagation units in neuronal networks and hub structure in transportation networks. 
Results show that networks exhibit rich higher-order organizational structures that are exposed by clustering based on higher-order connectivity patterns.\n Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.", "title": "" }, { "docid": "neg:1840110_3", "text": "Automatically mapping natural language into programming language semantics has always been a major and interesting challenge. In this paper, we approach such problem by carrying out mapping at syntactic level and then applying machine learning algorithms to derive an automatic translator of natural language questions into their associated SQL queries. For this purpose, we design a dataset of relational pairs containing syntactic trees of questions and queries and we encode them in Support Vector Machines by means of kernel functions. Pair classification experiments suggest that our approach is promising in deriving shared semantics between the languages above.", "title": "" }, { "docid": "neg:1840110_4", "text": "Fast fashion is a business model that offers (the perception of) fashionable clothes at affordable prices. From an operations standpoint, fast fashion requires a highly responsive supply chain that can support a product assortment that is periodically changing. Though the underlying principles are simple, the successful execution of the fast-fashion business model poses numerous challenges. We present a careful examination of this business model and discuss its execution by analyzing the most prominent firms in the industry. We then survey the academic literature for research that is specifically relevant or directly related to fast fashion. Our goal is to expose the main components of fast fashion and to identify untapped research opportunities.", "title": "" }, { "docid": "neg:1840110_5", "text": "The use of virtual reality (VR) display systems has escalated over the last 5 yr and may have consequences for those working within vision research. This paper provides a brief review of the literature pertaining to the representation of depth in stereoscopic VR displays. 
Specific attention is paid to the response of the accommodation system with its cross-links to vergence eye movements, and to the spatial errors that arise when portraying three-dimensional space on a two-dimensional window. It is suggested that these factors prevent large depth intervals of three-dimensional visual space being rendered with integrity through dual two-dimensional arrays.", "title": "" }, { "docid": "neg:1840110_6", "text": "In this paper, we propose a novel semi-supervised approach for detecting profanity-related offensive content in Twitter. Our approach exploits linguistic regularities in profane language via statistical topic modeling on a huge Twitter corpus, and detects offensive tweets using automatically these generated features. Our approach performs competitively with a variety of machine learning (ML) algorithms. For instance, our approach achieves a true positive rate (TP) of 75.1% over 4029 testing tweets using Logistic Regression, significantly outperforming the popular keyword matching baseline, which has a TP of 69.7%, while keeping the false positive rate (FP) at the same level as the baseline at about 3.77%. Our approach provides an alternative to large scale hand annotation efforts required by fully supervised learning approaches.", "title": "" }, { "docid": "neg:1840110_7", "text": "Electronic textiles, or e-textiles, are an increasingly important part of wearable computing, helping to make pervasive devices truly wearable. These soft, fabric-based computers can function as lovely embodiments of Mark Weiser's vision of ubiquitous computing: providing useful functionality while disappearing discreetly into the fabric of our clothing. E-textiles also give new, expressive materials to fashion designers, textile designers, and artists, and garments stemming from these disciplines usually employ technology in visible and dramatic style. Integrating computer science, electrical engineering, textile design, and fashion design, e-textiles cross unusual boundaries, appeal to a broad spectrum of people, and provide novel opportunities for creative experimentation both in engineering and design. Moreover, e-textiles are cutting- edge technologies that capture people's imagination in unusual ways. (What other emerging pervasive technology has Vogue magazine featured?) Our work aims to capitalize on these unique features by providing a toolkit that empowers novices to design, engineer, and build their own e-textiles.", "title": "" }, { "docid": "neg:1840110_8", "text": "INTRODUCTION\nPriapism describes a persistent erection arising from dysfunction of mechanisms regulating penile tumescence, rigidity, and flaccidity. A correct diagnosis of priapism is a matter of urgency requiring identification of underlying hemodynamics.\n\n\nAIMS\nTo define the types of priapism, address its pathogenesis and epidemiology, and develop an evidence-based guideline for effective management.\n\n\nMETHODS\nSix experts from four countries developed a consensus document on priapism; this document was presented for peer review and debate in a public forum and revisions were made based on recommendations of chairpersons to the International Consultation on Sexual Medicine. This report focuses on guidelines written over the past decade and reviews the priapism literature from 2003 to 2009. 
Although the literature is predominantly case series, recent reports have more detailed methodology including duration of priapism, etiology of priapism, and erectile function outcomes.\n\n\nMAIN OUTCOME MEASURES\nConsensus recommendations were based on evidence-based literature, best medical practices, and bench research.\n\n\nRESULTS\nBasic science supporting current concepts in the pathophysiology of priapism, and clinical research supporting the most effective treatment strategies are summarized in this review.\n\n\nCONCLUSIONS\nPrompt diagnosis and appropriate management of priapism are necessary to spare patients ineffective interventions and maximize erectile function outcomes. Future research is needed to understand corporal smooth muscle pathology associated with genetic and acquired conditions resulting in ischemic priapism. Better understanding of molecular mechanisms involved in the pathogenesis of stuttering ischemic priapism will offer new avenues for medical intervention. Documenting erectile function outcomes based on duration of ischemic priapism, time to interventions, and types of interventions is needed to establish evidence-based guidance. In contrast, pathogenesis of nonischemic priapism is understood, and largely attributable to trauma. Better documentation of onset of high-flow priapism in relation to time of injury, and response to conservative management vs. angiogroaphic or surgical interventions is needed to establish evidence-based guidance.", "title": "" }, { "docid": "neg:1840110_9", "text": "Recent progress in semantic segmentation has been driven by improving the spatial resolution under Fully Convolutional Networks (FCNs). To address this problem, we propose a Stacked Deconvolutional Network (SDN) for semantic segmentation. In SDN, multiple shallow deconvolutional networks, which are called as SDN units, are stacked one by one to integrate contextual information and bring the fine recovery of localization information. Meanwhile, inter-unit and intra-unit connections are designed to assist network training and enhance feature fusion since the connections improve the flow of information and gradient propagation throughout the network. Besides, hierarchical supervision is applied during the upsampling process of each SDN unit, which enhances the discrimination of feature representations and benefits the network optimization. We carry out comprehensive experiments and achieve the new state-ofthe- art results on four datasets, including PASCAL VOC 2012, CamVid, GATECH, COCO Stuff. In particular, our best model without CRF post-processing achieves an intersection-over-union score of 86.6% in the test set.", "title": "" }, { "docid": "neg:1840110_10", "text": "The future smart grid is envisioned as a large scale cyberphysical system encompassing advanced power, communications, control, and computing technologies. To accommodate these technologies, it will have to build on solid mathematical tools that can ensure an efficient and robust operation of such heterogeneous and large-scale cyberphysical systems. In this context, this article is an overview on the potential of applying game theory for addressing relevant and timely open problems in three emerging areas that pertain to the smart grid: microgrid systems, demand-side management, and communications. In each area, the state-of-the-art contributions are gathered and a systematic treatment, using game theory, of some of the most relevant problems for future power systems is provided. 
Future opportunities for adopting game-theoretic methodologies in the transition from legacy systems toward smart and intelligent grids are also discussed. In a nutshell, this article provides a comprehensive account of the application of game theory in smart grid systems tailored to the interdisciplinary characteristics of these systems that integrate components from power systems, networking, communications, and control.", "title": "" }, { "docid": "neg:1840110_11", "text": "Business intelligence and analytics (BIA) is about the development of technologies, systems, practices, and applications to analyze critical business data so as to gain new insights about business and markets. The new insights can be used for improving products and services, achieving better operational efficiency, and fostering customer relationships. In this article, we will categorize BIA research activities into three broad research directions: (a) big data analytics, (b) text analytics, and (c) network analytics. The article aims to review the state-of-the-art techniques and models and to summarize their use in BIA applications. For each research direction, we will also determine a few important questions to be addressed in future research.", "title": "" }, { "docid": "neg:1840110_12", "text": "This article proposes a method for the automatic transcription of the melody, bass line, and chords in polyphonic pop music. The method uses a frame-wise pitch-salience estimator as a feature extraction front-end. For the melody and bass-line transcription, this is followed by acoustic modeling of note events and musicological modeling of note transitions. The acoustic models include a model for the target notes (i.e., melody or bass notes) and a background model. The musicological model involves key estimation and note bigrams that determine probabilities for transitions between target notes. A transcription of the melody or the bass line is obtained using Viterbi search via the target and the background note models. The performance of the melody and the bass-line transcription is evaluated using approximately 8.5 hours of realistic polyphonic music. The chord transcription maps the pitch salience estimates to a pitch-class representation and uses trained chord models and chord-transition probabilities to produce a transcription consisting of major and minor triads. For chords, the evaluation material consists of the first eight Beatles albums. The method is computationally efficient and allows causal implementation, so it can process streaming audio. Transcription of music refers to the analysis of an acoustic music signal for producing a parametric representation of the signal. The representation may be a music score with a meticulous arrangement for each instrument or an approximate description of melody and chords in the piece, for example. The latter type of transcription is commonly used in commercial songbooks of pop music and is usually sufficient for musicians or music hobbyists to play the piece. On the other hand, more detailed transcriptions are often employed in classical music to preserve the exact arrangement of the composer.", "title": "" }, { "docid": "neg:1840110_13", "text": "Sunni extremism poses a significant danger to society, yet it is relatively easy for these extremist organizations to spread jihadist propaganda and recruit new members via the Internet, Darknet, and social media. The sheer volume of these sites make them very difficult to police. 
This paper discusses an approach that can assist with this problem, by automatically identifying a subset of web pages and social media content (or any text) that contains extremist content. The approach utilizes machine learning, specifically neural networks and deep learning, to classify text as containing “extremist” or “benign” (i.e., not extremist) content. This method is robust and can effectively learn to classify extremist multilingual text of varying length. This study also involved the construction of a high quality dataset for training and testing, put together by a team of 40 people (some with fluency in Arabic) who expended 9,500 hours of combined effort. This dataset should facilitate future research on this topic.", "title": "" }, { "docid": "neg:1840110_14", "text": "Of numerous proposals to improve the accuracy of naive Bayes by weakening its attribute independence assumption, both LBR and Super-Parent TAN have demonstrated remarkable error performance. However, both techniques obtain this outcome at a considerable computational cost. We present a new approach to weakening the attribute independence assumption by averaging all of a constrained class of classifiers. In extensive experiments this technique delivers comparable prediction accuracy to LBR and Super-Parent TAN with substantially improved computational efficiency at test time relative to the former and at training time relative to the latter. The new algorithm is shown to have low variance and is suited to incremental learning.", "title": "" }, { "docid": "neg:1840110_15", "text": "This paper aims to evaluate the security and accuracy of Multi-Factor Biometric Authentication (MFBA) schemes that are based on applying UserBased Transformations (UBTs) on biometric features. Typically, UBTs employ transformation keys generated from passwords/PINs or retrieved from tokens. In this paper, we not only highlight the importance of simulating the scenario of compromised transformation keys rigorously, but also show that there has been misevaluation of this scenario as the results can be easily misinterpreted. In particular, we expose the falsehood of the widely reported claim in the literature that in the case of stolen keys, authentication accuracy drops but remains close to the authentication accuracy of biometric only system. We show that MFBA systems setup to operate at zero (%) Equal Error Rates (EER) can be undermined in the event of keys being compromised where the False Acceptance Rate reaches unacceptable levels. We demonstrate that for commonly used recognition schemes the FAR could be as high as 21%, 56%, and 66% for iris, fingerprint, and face biometrics respectively when using stolen transformation keys compared to near zero (%) EER when keys are assumed secure. We also discuss the trade off between improving accuracy of biometric systems using additional authentication factor(s) and compromising the security when the additional factor(s) are compromised. Finally, we propose mechanisms to enhance the security as well as the accuracy of MFBA schemes.", "title": "" }, { "docid": "neg:1840110_16", "text": "This work introduces the engineering design of a device capable to detect serum turbidity. We hypothesized that an electronic, portable, and low cost device that can provide objective, quantitative measurements of serum turbidity might have the potential to improve the early detection of neonatal sepsis. The design features, testing methodologies, and the obtained results are described. 
The final electronic device was evaluated in two experiments. The first one consisted in recording the turbidity value measured by the device for different solutions with known concentrations and different degrees of turbidity. The second analysis demonstrates a positive correlation between visual turbidity estimation and electronic turbidity measurement. Furthermore, our device demonstrated high turbidity in serum from two neonates with sepsis (one with a confirmed positive blood culture; the other one with a clinical diagnosis). We conclude that our electronic device may effectively measure serum turbidity at the bedside. Future studies will widen the possibility of additional clinical implications.", "title": "" }, { "docid": "neg:1840110_17", "text": "Nguyen and Shparlinski recently presented a polynomial-time algorithm that provably recovers the signer’s secret DSA key when a few bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting as usual the small prime of DSA), under a reasonable assumption on the hash function used in DSA. The number of required bits is about log q, and can be further decreased to 2 if one assumes access to ideal lattice basis reduction, namely an oracle for the lattice closest vector problem for the infinity norm. All previously known results were only heuristic, including those of Howgrave-Graham and Smart who introduced the topic. Here, we obtain similar results for the elliptic curve variant of DSA (ECDSA).", "title": "" }, { "docid": "neg:1840110_18", "text": "In recent years, Convolutional Neural Network (CNN) has been extensively applied in the field of computer vision, which has also made remarkable achievements. However, the CNN models are computation-intensive and memory-consuming, which hinders the deployment of CNN-based methods on resource-limited embedded platforms. Therefore, this paper gives insight into low numerical precision Convolutional Neural Networks. At first, an image classification CNN model is quantized into 8-bit dynamic fixed-point with no more than 1% accuracy drop and then the method of conducting inference on low-cost ARM processor has been proposed. Experimental results verified the effectiveness of this method. Besides, our proof-of-concept prototype implementation can obtain a frame rate of 4.54fps when running on single Cortex-A72 core under 1.8GHz working frequency and 6.48 watts of gross power consumption.", "title": "" }, { "docid": "neg:1840110_19", "text": "In order to increase accuracy of the linear array CCD edge detection system, a wavelet-based sub-pixel edge detection method is proposed, the basic process is like this: firstly, according to the step gradient features, automatically calculate the pixel-level border of the CCD image. Then use the wavelet transform algorithm to devide the image’s edge location in sub-pixel level, thus detecting the sub-pixel edge. In this way we prove that the method has no principle error and at the same time possesses a good anti-noise performance. Experiments show that under the circumstance of no special requirements, the accuracy of the method is greater than 0.02 pixel, thus verifying the correctness of the theory.", "title": "" } ]
1840111
A Robot-Partner for Preschool Children Learning English Using Socio-Cognitive Conflict
[ { "docid": "pos:1840111_0", "text": "By engaging in construction-based robotics activities, children as young as four can play to learn a range of concepts. The TangibleK Robotics Program paired developmentally appropriate computer programming and robotics tools with a constructionist curriculum designed to engage kindergarten children in learning computational thinking, robotics, programming, and problem-solving. This paper documents three kindergarten classrooms’ exposure to computer programming concepts and explores learning outcomes. Results point to strengths of the curriculum and areas where further redesign of the curriculum and technologies would be appropriate. Overall, the study demonstrates that kindergartners were both interested in and able to learn many aspects of robotics, programming, and computational thinking with the TangibleK curriculum design. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "pos:1840111_1", "text": "Along with the rapid development of information and communication technologies, educators are trying to keep up with the dramatic changes in our electronic environment. These days mobile technology, with popular devices such as iPhones, Android phones, and iPads, is steering our learning environment towards increasingly focusing on mobile learning or m-Learning. Currently, most interfaces employ keyboards, mouse or touch technology, but some emerging input-interfaces use voiceor marker-based gesture recognition. In the future, one of the cutting-edge technologies likely to be used is robotics. Robots are already being used in some classrooms and are receiving an increasing level of attention. Robots today are developed for special purposes, quite similar to personal computers in their early days. However, in the future, when mass production lowers prices, robots will bring about big changes in our society. In this column, the author focuses on educational service robots. Educational service robots for language learning and robot-assisted language learning (RALL) will be introduced, and the hardware and software platforms for RALL will be explored, as well as implications for future research.", "title": "" } ]
[ { "docid": "neg:1840111_0", "text": "Math anxiety is a negative affective reaction to situations involving math. Previous work demonstrates that math anxiety can negatively impact math problem solving by creating performance-related worries that disrupt the working memory needed for the task at hand. By leveraging knowledge about the mechanism underlying the math anxiety-performance relationship, we tested the effectiveness of a short expressive writing intervention that has been shown to reduce intrusive thoughts and improve working memory availability. Students (N = 80) varying in math anxiety were asked to sit quietly (control group) prior to completing difficulty-matched math and word problems or to write about their thoughts and feelings regarding the exam they were about to take (expressive writing group). For the control group, high math-anxious individuals (HMAs) performed significantly worse on the math problems than low math-anxious students (LMAs). In the expressive writing group, however, this difference in math performance across HMAs and LMAs was significantly reduced. Among HMAs, the use of words related to anxiety, cause, and insight in their writing was positively related to math performance. Expressive writing boosts the performance of anxious students in math-testing situations.", "title": "" }, { "docid": "neg:1840111_1", "text": "1 Multisensor Data Fusion for Next Generation Distributed Intrusion Detection Systems Tim Bass ERIM International & Silk Road Ann Arbor, MI 48113 Abstract| Next generation cyberspace intrusion detection systems will fuse data from heterogeneous distributed network sensors to create cyberspace situational awareness. This paper provides a few rst steps toward developing the engineering requirements using the art and science of multisensor data fusion as the underlying model. Current generation internet-based intrusion detection systems and basic multisensor data fusion constructs are summarized. The TCP/IP model is used to develop framework sensor and database models. The SNMP ASN.1 MIB construct is recommended for the representation of context-dependent threat & vulnerabilities databases.", "title": "" }, { "docid": "neg:1840111_2", "text": "on Walden Pond (Massachusetts, USA) using diatoms and stable isotopes Dörte Köster,1∗ Reinhard Pienitz,1∗ Brent B. Wolfe,2 Sylvia Barry,3 David R. Foster,3 and Sushil S. Dixit4 Paleolimnology-Paleoecology Laboratory, Centre d’études nordiques, Department of Geography, Université Laval, Québec, Québec, G1K 7P4, Canada Department of Geography and Environmentals Studies, Wilfrid Laurier University, Waterloo, Ontario, N2L 3C5, Canada Harvard University, Harvard Forest, Post Office Box 68, Petersham, Massachusetts, 01366-0068, USA Environment Canada, National Guidelines & Standards Office, 351 St. Joseph Blvd., 8th Floor, Ottawa, Ontario, K1A 0H3, Canada ∗Corresponding authors: E-mail: doerte.koster.1@ulaval.ca, reinhard.pienitz@cen.ulaval.ca", "title": "" }, { "docid": "neg:1840111_3", "text": "Prior research has established that peer tutors can benefit academically from their tutoring experiences. However, although tutor learning has been observed across diverse settings, the magnitude of these gains is often underwhelming. In this review, the authors consider how analyses of tutors’ actual behaviors may help to account for variation in learning outcomes and how typical tutoring behaviors may create or undermine opportunities for learning. 
The authors examine two tutoring activities that are commonly hypothesized to support tutor learning: explaining and questioning. These activities are hypothesized to support peer tutors’ learning via reflective knowledge-building, which includes self-monitoring of comprehension, integration of new and prior knowledge, and elaboration and construction of knowledge. The review supports these hypotheses but also finds that peer tutors tend to exhibit a pervasive knowledge-telling bias. Peer tutors, even when trained, focus more on delivering knowledge rather than developing it. As a result, the true potential for tutor learning may rarely be achieved. The review concludes by offering recommendations for how future research can utilize tutoring process data to understand how tutors learn and perhaps develop new training methods.", "title": "" }, { "docid": "neg:1840111_4", "text": "OBJECTIVE\nBrain-computer interfaces (BCIs) have the potential to be valuable clinical tools. However, the varied nature of BCIs, combined with the large number of laboratories participating in BCI research, makes uniform performance reporting difficult. To address this situation, we present a tutorial on performance measurement in BCI research.\n\n\nAPPROACH\nA workshop on this topic was held at the 2013 International BCI Meeting at Asilomar Conference Center in Pacific Grove, California. This paper contains the consensus opinion of the workshop members, refined through discussion in the following months and the input of authors who were unable to attend the workshop.\n\n\nMAIN RESULTS\nChecklists for methods reporting were developed for both discrete and continuous BCIs. Relevant metrics are reviewed for different types of BCI research, with notes on their use to encourage uniform application between laboratories.\n\n\nSIGNIFICANCE\nGraduate students and other researchers new to BCI research may find this tutorial a helpful introduction to performance measurement in the field.", "title": "" }, { "docid": "neg:1840111_5", "text": "Question Generation (QG) is the task of generating reasonable questions from a text. It is a relatively new research topic and has its potential usage in intelligent tutoring systems and closed-domain question answering systems. Current approaches include template or syntax based methods. This thesis proposes a novel approach based entirely on semantics. Minimal Recursion Semantics (MRS) is a meta-level semantic representation with emphasis on scope underspecification. With the English Resource Grammar and various tools from the DELPH-IN community, a natural language sentence can be interpreted as an MRS structure by parsing, and an MRS structure can be realized as a natural language sentence through generation. There are three issues emerging from semantics-based QG: (1) sentence simplification for complex sentences, (2) question transformation for declarative sentences, and (3) generation ranking. Three solutions are also proposed: (1) MRS decomposition through a Connected Dependency MRS Graph, (2) MRS transformation from declarative sentences to interrogative sentences, and (3) question ranking by simple language models atop a MaxEnt-based model. The evaluation is conducted in the context of the Question Generation Shared Task and Generation Challenge 2010. The performance of proposed method is compared against other syntax and rule based systems. 
The result also reveals the challenges of current research on question generation and indicates direction for future work.", "title": "" }, { "docid": "neg:1840111_6", "text": "Previous research has indicated that exposure to traditional media (i.e., television, film, and print) predicts the likelihood of internalization of a thin ideal; however, the relationship between exposure to internet-based social media on internalization of this ideal remains less understood. Social media differ from traditional forms of media by allowing users to create and upload their own content that is then subject to feedback from other users. This meta-analysis examined the association linking the use of social networking sites (SNSs) and the internalization of a thin ideal in females. Systematic searches were performed in the databases: PsychINFO, PubMed, Web of Science, Communication and Mass Media Complete, and ProQuest Dissertations and Theses Global. Six studies were included in the meta-analysis that yielded 10 independent effect sizes and a total of 1,829 female participants ranging in age from 10 to 46 years. We found a positive association between extent of use of SNSs and extent of internalization of a thin ideal with a small to moderate effect size (r = 0.18). The positive effect indicated that more use of SNSs was associated with significantly higher internalization of a thin ideal. A comparison was also made between study outcomes measuring broad use of SNSs and outcomes measuring SNS use solely as a function of specific appearance-related features (e.g., posting or viewing photographs). The use of appearance-related features had a stronger relationship with the internalization of a thin ideal than broad use of SNSs. The finding suggests that the ability to interact with appearance-related features online and be an active participant in media creation is associated with body image disturbance. Future research should aim to explore the way SNS users interact with the media posted online and the relationship linking the use of specific appearance features and body image disturbance.", "title": "" }, { "docid": "neg:1840111_7", "text": "With the fast development pace of deep submicron technology, the size and density of semiconductor memory grows rapidly. However, keeping a high level of yield and reliability for memory products is more and more difficult. Both the redundancy repair and ECC techniques have been widely used for enhancing the yield and reliability of memory chips. Specifically, the redundancy repair and ECC techniques are conventionally used to repair or correct the hard faults and soft errors, respectively. In this paper, we propose an integrated ECC and redundancy repair scheme for memory reliability enhancement. Our approach can identify the hard faults and soft errors during the memory normal operation mode, and repair the hard faults during the memory idle time as long as there are unused redundant elements. We also develop a method for evaluating the memory reliability. Experimental results show that the proposed approach is effective, e.g., the MTTF of a 32K /spl times/ 64 memory is improved by 1.412 hours (7.1%) with our integrated ECC and repair scheme.", "title": "" }, { "docid": "neg:1840111_8", "text": "Verbs play a critical role in the meaning of sentences, but these ubiquitous words have received little attention in recent distributional semantics research. 
We introduce SimVerb-3500, an evaluation resource that provides human ratings for the similarity of 3,500 verb pairs. SimVerb-3500 covers all normed verb types from the USF free-association database, providing at least three examples for every VerbNet class. This broad coverage facilitates detailed analyses of how syntactic and semantic phenomena together influence human understanding of verb meaning. Further, with significantly larger development and test sets than existing benchmarks, SimVerb-3500 enables more robust evaluation of representation learning architectures and promotes the development of methods tailored to verbs. We hope that SimVerb-3500 will enable a richer understanding of the diversity and complexity of verb semantics and guide the development of systems that can effectively represent and interpret this meaning.", "title": "" }, { "docid": "neg:1840111_9", "text": "Substrate Integrated Waveguide has been an emerging technology for the realization of microwave and millimeter wave regions. It is the planar form of the conventional rectangular waveguide. It has profound applications at higher frequencies, since prevalent platforms like microstrip and coplanar waveguide have loss related issues. This paper discusses basic concepts of SIW, design aspects and their applications to leaky wave antennas. A brief overview of recent works on Substrate integrated Waveguide based Leaky Wave Antennas has been provided.", "title": "" }, { "docid": "neg:1840111_10", "text": "I’ve taken to writing this series of posts on a statistical view of deep learning with two principal motivations in mind. The first was as a personal exercise to make concrete and to test the limits of the way that I think about and use deep learning in my every day work. The second, was to highlight important statistical connections and implications of deep learning that I have not seen made in the popular courses, reviews and books on deep learning, but which are extremely important to keep in mind. This document forms a collection of these essays originally posted at blog.shakirm.com.", "title": "" }, { "docid": "neg:1840111_11", "text": "Over the next few years the amount of biometric data being at the disposal of various agencies and authentication service providers is expected to grow significantly. Such quantities of data require not only enormous amounts of storage but unprecedented processing power as well. To be able to face this future challenges more and more people are looking towards cloud computing, which can address these challenges quite effectively with its seemingly unlimited storage capacity, rapid data distribution and parallel processing capabilities. Since the available literature on how to implement cloud-based biometric services is extremely scarce, this paper capitalizes on the most important challenges encountered during the development work on biometric services, presents the most important standards and recommendations pertaining to biometric services in the cloud and ultimately, elaborates on the potential value of cloud-based biometric solutions by presenting a few existing (commercial) examples. In the final part of the paper, a case study on fingerprint recognition in the cloud and its integration into the e-learning environment Moodle is presented.", "title": "" }, { "docid": "neg:1840111_12", "text": "With the success of image classification problems, deep learning is expanding its application areas. In this paper, we apply deep learning to decode a polar code. 
As an initial step for memoryless additive Gaussian noise channel, we consider a deep feed-forward neural network and investigate its decoding performances with respect to numerous configurations: the number of hidden layers, the number of nodes for each layer, and activation functions. Generally, the higher complex network yields a better performance. Comparing the performances of regular list decoding, we provide a guideline for the configuration parameters. Although the training of deep learning may require high computational complexity, it should be noted that the field application of trained networks can be accomplished at a low level complexity. Considering the level of performance and complexity, we believe that deep learning is a competitive decoding tool.", "title": "" }, { "docid": "neg:1840111_13", "text": "Action recognition and human pose estimation are closely related but both problems are generally handled as distinct tasks in the literature. In this work, we propose a multitask framework for jointly 2D and 3D pose estimation from still images and human action recognition from video sequences. We show that a single architecture can be used to solve the two problems in an efficient way and still achieves state-of-the-art results. Additionally, we demonstrate that optimization from end-to-end leads to significantly higher accuracy than separated learning. The proposed architecture can be trained with data from different categories simultaneously in a seamlessly way. The reported results on four datasets (MPII, Human3.6M, Penn Action and NTU) demonstrate the effectiveness of our method on the targeted tasks.", "title": "" }, { "docid": "neg:1840111_14", "text": "Information Retrieval deals with searching and retrieving information within the documents and it also searches the online databases and internet. Web crawler is defined as a program or software which traverses the Web and downloads web documents in a methodical, automated manner. Based on the type of knowledge, web crawler is usually divided in three types of crawling techniques: General Purpose Crawling, Focused crawling and Distributed Crawling. In this paper, the applicability of Web Crawler in the field of web search and a review on Web Crawler to different problem domains in web search is discussed.", "title": "" }, { "docid": "neg:1840111_15", "text": "This article presents a review of recent literature of intersection behavior analysis for three types of intersection participants; vehicles, drivers, and pedestrians. In this survey, behavior analysis of each participant group is discussed based on key features and elements used for intersection design, planning and safety analysis. Different methods used for data collection, behavior recognition and analysis are reviewed for each group and a discussion is provided on the state of the art along with challenges and future research directions in the field.", "title": "" }, { "docid": "neg:1840111_16", "text": "The real-time bidding (RTB), aka programmatic buying, has recently become the fastest growing area in online advertising. Instead of bulking buying and inventory-centric buying, RTB mimics stock exchanges and utilises computer algorithms to automatically buy and sell ads in real-time; It uses per impression context and targets the ads to specific people based on data about them, and hence dramatically increases the effectiveness of display advertising. In this paper, we provide an empirical analysis and measurement of a production ad exchange. 
Using the data sampled from both the demand and supply sides, we aim to provide first-hand insights into the emerging impression-selling infrastructure and its bidding behaviours, and to help identify research and design issues in such systems. From our study, we observed that periodic patterns occur in various statistics including impressions, clicks, bids, and conversion rates (both post-view and post-click), which suggests that time-dependent models would be appropriate for capturing the repeated patterns in RTB. We also found that despite the claimed second price auction, the first price payment in fact accounted for 55.4% of the total cost due to the arrangement of the soft floor price. As such, we argue that the setting of the soft floor price in current RTB systems puts advertisers in a less favourable position. Furthermore, our analysis of the conversion rates shows that the current bidding strategy is far from optimal, indicating a significant need for optimisation algorithms that incorporate factors such as temporal behaviours and the frequency and recency of ad displays, which have not been well considered in the past.", "title": "" }, { "docid": "neg:1840111_16", "text": "The rapid growth of social media, especially Twitter in Indonesia, has produced a large amount of user-generated text in the form of tweets. Since Twitter only provides the name and location of its users, we develop a classification system that predicts latent attributes of a Twitter user based on his tweets. A latent attribute is an attribute that is not stated directly. Our system predicts the age and job attributes of Twitter users who use the Indonesian language. The classification model is developed by employing lexical features and three learning algorithms (Naïve Bayes, SVM, and Random Forest). Based on the experimental results, it can be concluded that the SVM method produces the best accuracy for balanced data.", "title": "" }, { "docid": "neg:1840111_17", "text": "We employ the new geometric active contour models, previously formulated, for edge detection and segmentation of magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound medical imagery. Our method is based on defining feature-based metrics on a given image which in turn leads to a novel snake paradigm in which the feature of interest may be considered to lie at the bottom of a potential well. Thus, the snake is attracted very quickly and efficiently to the desired feature.", "title": "" } ]
1840112
Multilevel secure data stream processing: Architecture and implementation
[ { "docid": "pos:1840112_0", "text": "CQL, a continuous query language, is supported by the STREAM prototype data stream management system (DSMS) at Stanford. CQL is an expressive SQL-based declarative language for registering continuous queries against streams and stored relations. We begin by presenting an abstract semantics that relies only on “black-box” mappings among streams and relations. From these mappings we define a precise and general interpretation for continuous queries. CQL is an instantiation of our abstract semantics using SQL to map from relations to relations, window specifications derived from SQL-99 to map from streams to relations, and three new operators to map from relations to streams. Most of the CQL language is operational in the STREAM system. We present the structure of CQL's query execution plans as well as details of the most important components: operators, interoperator queues, synopses, and sharing of components among multiple operators and queries. Examples throughout the paper are drawn from the Linear Road benchmark recently proposed for DSMSs. We also curate a public repository of data stream applications that includes a wide variety of queries expressed in CQL. The relative ease of capturing these applications in CQL is one indicator that the language contains an appropriate set of constructs for data stream processing.", "title": "" } ]
[ { "docid": "neg:1840112_0", "text": "Self adaptive video games are important for rehabilitation at home. Recent works have explored different techniques with satisfactory results but these have a poor use of game design concepts like Challenge and Conservative Handling of Failure. Dynamic Difficult Adjustment with Help (DDA-Help) approach is presented as a new point of view for self adaptive video games for rehabilitation. Procedural Content Generation (PCG) and automatic helpers are used to a different work on Conservative Handling of Failure and Challenge. An experience with amblyopic children showed the proposal effectiveness, increasing the visual acuity 2-3 level following the Snellen Vision Test and improving the performance curve during the game time.", "title": "" }, { "docid": "neg:1840112_1", "text": "We introduce a new algorithm for reinforcement learning called Maximum aposteriori Policy Optimisation (MPO) based on coordinate ascent on a relativeentropy objective. We show that several existing methods can directly be related to our derivation. We develop two off-policy algorithms and demonstrate that they are competitive with the state-of-the-art in deep reinforcement learning. In particular, for continuous control, our method outperforms existing methods with respect to sample efficiency, premature convergence and robustness to hyperparameter settings.", "title": "" }, { "docid": "neg:1840112_2", "text": "With the explosion of multimedia data, semantic event detection from videos has become a demanding and challenging topic. In addition, when the data has a skewed data distribution, interesting event detection also needs to address the data imbalance problem. The recent proliferation of deep learning has made it an essential part of many Artificial Intelligence (AI) systems. Till now, various deep learning architectures have been proposed for numerous applications such as Natural Language Processing (NLP) and image processing. Nonetheless, it is still impracticable for a single model to work well for different applications. Hence, in this paper, a new ensemble deep learning framework is proposed which can be utilized in various scenarios and datasets. The proposed framework is able to handle the over-fitting issue as well as the information losses caused by single models. Moreover, it alleviates the imbalanced data problem in real-world multimedia data. The whole framework includes a suite of deep learning feature extractors integrated with an enhanced ensemble algorithm based on the performance metrics for the imbalanced data. The Support Vector Machine (SVM) classifier is utilized as the last layer of each deep learning component and also as the weak learners in the ensemble module. The framework is evaluated on two large-scale and imbalanced video datasets (namely, disaster and TRECVID). The extensive experimental results illustrate the advantage and effectiveness of the proposed framework. It also demonstrates that the proposed framework outperforms several well-known deep learning methods, as well as the conventional features integrated with different classifiers.", "title": "" }, { "docid": "neg:1840112_3", "text": "In this paper, we explored a learning approach which combines di erent learning methods in inductive logic programming (ILP) to allow a learner to produce more expressive hypotheses than that of each individual learner. 
Such a learning approach may be useful when the performance of the task depends on solving a large number of classification problems, each of which has its own characteristics that may or may not fit a particular learning method. The task of semantic parser acquisition in two different domains was attempted, and preliminary results demonstrated that such an approach is promising.", "title": "" }, { "docid": "neg:1840112_4", "text": "Neural network language models (NNLM) have become an increasingly popular choice for large vocabulary continuous speech recognition (LVCSR) tasks, due to their inherent generalisation and discriminative power. This paper presents two techniques to improve the performance of standard NNLMs. First, the form of the NNLM is modelled by introducing an additional output layer node to model the probability mass of out-of-shortlist (OOS) words. An associated probability normalisation scheme is explicitly derived. Second, a novel NNLM adaptation method using a cascaded network is proposed. Consistent WER reductions were obtained on a state-of-the-art Arabic LVCSR task over conventional NNLMs. Further performance gains were also observed after NNLM adaptation.", "title": "" }, { "docid": "neg:1840112_5", "text": "In this paper, a methodology is developed to use data acquisition derived from condition monitoring and standard diagnosis for rehabilitation purposes of transformers. The interpretation and understanding of the test data are obtained from international test standards to determine the current condition of transformers. In an attempt to ascertain monitoring priorities, the effective test methods are selected for transformer diagnosis. In particular, the standardization of diagnostic and analytical techniques is being improved, which will enable field personnel to more easily use the test results and will reduce the need for interpretation by experts. In addition, the advanced method has the potential to greatly reduce the time and increase the accuracy of diagnostics. The important aim of the standardization is to develop multiple diagnostic models that combine results from the different tests and give an overall assessment of reliability and maintenance for transformers.", "title": "" }, { "docid": "neg:1840112_6", "text": "Detection of drowsiness based on extracting IMFs from the EEG signal using the EMD process and characterizing the features with a trained Artificial Neural Network (ANN) is introduced in this paper. Our subjects are 8 volunteers who had not slept for the last 24 hours due to travelling. The EEG signal was recorded while the subject sat on a chair facing a video camera and was obliged to look only at the camera. The ANN is trained using a utility made in Matlab to mark the EEG data as drowsy or awake and then extract IMFs of the marked data using EMD to prepare feature inputs for the neural network. Once the neural network is trained, the IMFs of a new subject's EEG signal are given as input and the ANN outputs one of two states, i.e. ‘drowsy’ or ‘awake’. The system was tested on 8 different subjects and provided good results, with more than 84.8% correct detection of drowsy states.", "title": "" }, { "docid": "neg:1840112_7", "text": "We adopt and analyze a synchronous K-step averaging stochastic gradient descent algorithm which we call K-AVG for solving large scale machine learning problems. We establish the convergence results of K-AVG for nonconvex objectives. Our analysis of K-AVG applies to many existing variants of synchronous SGD.
We explain why the K-step delay is necessary and leads to better performance than traditional parallel stochastic gradient descent, which is equivalent to K-AVG with K = 1. We also show that K-AVG scales better with the number of learners than asynchronous stochastic gradient descent (ASGD). Another advantage of K-AVG over ASGD is that it allows larger stepsizes and facilitates faster convergence. On a cluster of 128 GPUs, K-AVG is faster than ASGD implementations and achieves better accuracies and faster convergence for training with the CIFAR-10 dataset.", "title": "" }, { "docid": "neg:1840112_8", "text": "Social networking sites, especially Facebook, are an integral part of the lifestyle of contemporary youth. The facilities are increasingly being used by older persons as well. Usage is mainly for social purposes, but the group and discussion facilities of Facebook hold potential for focused academic use. This paper describes and discusses a venture in which postgraduate distance-learning students joined an optional group for the purpose of discussions on academic, content-related topics, largely initiated by the students themselves. Learning and insight were enhanced by these discussions, and the students, in their distance-learning environment, are benefiting from contact with fellow students.", "title": "" }, { "docid": "neg:1840112_9", "text": "We use logical inference techniques for recognising textual entailment, with theorem proving operating on deep semantic interpretations as the backbone of our system. However, the performance of theorem proving on its own turns out to be highly dependent on a wide range of background knowledge, which is not necessarily included in publicly available knowledge sources. Therefore, we achieve robustness via two extensions. Firstly, we incorporate model building, a technique borrowed from automated reasoning, and show that it is a useful robust method to approximate entailment. Secondly, we use machine learning to combine these deep semantic analysis techniques with simple shallow word overlap. The resulting hybrid model achieves high accuracy on the RTE test set, given the state of the art. Our results also show that the various techniques that we employ perform very differently on some of the subsets of the RTE corpus and as a result, it is useful to use the nature of the dataset as a feature.", "title": "" }, { "docid": "neg:1840112_10", "text": "With the advent of high dimensionality, adequate identification of relevant features of the data has become indispensable in real-world scenarios. In this context, the importance of feature selection is beyond doubt and different methods have been developed. However, with such a vast body of algorithms available, choosing an adequate feature selection method is not an easy question to solve, and it is necessary to check their effectiveness in different situations. Nevertheless, the assessment of relevant features is difficult in real datasets and so an interesting option is to use artificial data. In this paper, several synthetic datasets are employed for this purpose, aiming at reviewing the performance of feature selection methods in the presence of an increasing number of irrelevant features, noise in the data, redundancy and interaction between attributes, as well as a small ratio between the number of samples and the number of features.
Seven filters, two embedded methods, and two wrappers are applied over eleven synthetic datasets, tested by four classifiers, so as to be able to choose a robust method, paving the way for its application to real datasets.", "title": "" }, { "docid": "neg:1840112_11", "text": "Unsupervised automatic topic discovery in micro-blogging social networks is a very challenging task, as it involves the analysis of very short, noisy, ungrammatical and uncontextual messages. Most of the current approaches to this problem are basically syntactic, as they focus either on the use of statistical techniques or on the analysis of the co-occurrences between the terms. This paper presents a novel topic discovery methodology, based on the mapping of hashtags to WordNet terms and their posterior clustering, in which semantics plays a centre role. The paper also presents a detailed case study in the field of Oncology, in which the discovered topics are thoroughly compared to a golden standard, showing promising results. 2015 Published by Elsevier Ltd.", "title": "" }, { "docid": "neg:1840112_12", "text": "Nearly 40 years ago, Dr. R.J. Gibbons made the first reports of the clinical relevance of what we now know as bacterial biofilms when he published his observations of the role of polysaccharide glycocalyx formation on teeth by Streptococcus mutans [Sci. Am. 238 (1978) 86]. As the clinical relevance of bacterial biofilm formation became increasingly apparent, interest in the phenomenon exploded. Studies are rapidly shedding light on the biomolecular pathways leading to this sessile mode of growth but many fundamental questions remain. The intent of this review is to consider the reasons why bacteria switch from a free-floating to a biofilm mode of growth. The currently available wealth of data pertaining to the molecular genetics of biofilm formation in commonly studied, clinically relevant, single-species biofilms will be discussed in an effort to decipher the motivation behind the transition from planktonic to sessile growth in the human body. Four potential incentives behind the formation of biofilms by bacteria during infection are considered: (1) protection from harmful conditions in the host (defense), (2) sequestration to a nutrient-rich area (colonization), (3) utilization of cooperative benefits (community), (4) biofilms normally grow as biofilms and planktonic cultures are an in vitro artifact (biofilms as the default mode of growth).", "title": "" }, { "docid": "neg:1840112_13", "text": "In contrast to the increasing popularity of REpresentational State Transfer (REST), systematic testing of RESTful Application Programming Interfaces (API) has not attracted much attention so far. This paper describes different aspects of automated testing of RESTful APIs. Later, we focus on functional and security tests, for which we apply a technique called model-based software development. Based on an abstract model of the RESTful API that comprises resources, states and transitions a software generator not only creates the source code of the RESTful API but also creates a large number of test cases that can be immediately used to test the implementation. 
This paper describes the process of developing a software generator for test cases using state-of-the-art tools and provides an example to show the feasibility of our approach.", "title": "" }, { "docid": "neg:1840112_14", "text": "BACKGROUND\nProblematic Internet addiction or excessive Internet use is characterized by excessive or poorly controlled preoccupations, urges, or behaviors regarding computer use and Internet access that lead to impairment or distress. Currently, there is no recognition of internet addiction within the spectrum of addictive disorders and, therefore, no corresponding diagnosis. It has, however, been proposed for inclusion in the next version of the Diagnostic and Statistical Manual of Mental Disorder (DSM).\n\n\nOBJECTIVE\nTo review the literature on Internet addiction over the topics of diagnosis, phenomenology, epidemiology, and treatment.\n\n\nMETHODS\nReview of published literature between 2000-2009 in Medline and PubMed using the term \"internet addiction.\n\n\nRESULTS\nSurveys in the United States and Europe have indicated prevalence rate between 1.5% and 8.2%, although the diagnostic criteria and assessment questionnaires used for diagnosis vary between countries. Cross-sectional studies on samples of patients report high comorbidity of Internet addiction with psychiatric disorders, especially affective disorders (including depression), anxiety disorders (generalized anxiety disorder, social anxiety disorder), and attention deficit hyperactivity disorder (ADHD). Several factors are predictive of problematic Internet use, including personality traits, parenting and familial factors, alcohol use, and social anxiety.\n\n\nCONCLUSIONS AND SCIENTIFIC SIGNIFICANCE\nAlthough Internet-addicted individuals have difficulty suppressing their excessive online behaviors in real life, little is known about the patho-physiological and cognitive mechanisms responsible for Internet addiction. Due to the lack of methodologically adequate research, it is currently impossible to recommend any evidence-based treatment of Internet addiction.", "title": "" }, { "docid": "neg:1840112_15", "text": "Image cropping aims at improving the aesthetic quality of images by adjusting their composition. Most weakly supervised cropping methods (without bounding box supervision) rely on the sliding window mechanism. The sliding window mechanism requires fixed aspect ratios and limits the cropping region with arbitrary size. Moreover, the sliding window method usually produces tens of thousands of windows on the input image which is very time-consuming. Motivated by these challenges, we firstly formulate the aesthetic image cropping as a sequential decision-making process and propose a weakly supervised Aesthetics Aware Reinforcement Learning (A2-RL) framework to address this problem. Particularly, the proposed method develops an aesthetics aware reward function which especially benefits image cropping. Similar to human's decision making, we use a comprehensive state representation including both the current observation and the historical experience. We train the agent using the actor-critic architecture in an end-to-end manner. The agent is evaluated on several popular unseen cropping datasets. 
Experimental results show that our method achieves state-of-the-art performance with far fewer candidate windows and much less time compared with previous weakly supervised methods.", "title": "" }, { "docid": "neg:1840112_16", "text": "This paper presents for the first time the analysis and experimental validation of a six-slot four-pole synchronous reluctance motor with nonoverlapping fractional slot-concentrated windings. The machine exhibits high torque density and efficiency due to its high fill factor coils with very short end windings, facilitated by a segmented stator and bobbin winding of the coils. These advantages are coupled with its inherent robustness and low cost. The topology is presented as a logical step forward in advancing synchronous reluctance machines that have been universally wound with a sinusoidally distributed winding. The paper presents the motor design, performance evaluation through finite element studies and validation of the electromagnetic model, and thermal specification through empirical testing. It is shown that high performance synchronous reluctance motors can be constructed with single tooth wound coils, but consideration must be given to torque quality and the d-q axis inductances.", "title": "" }, { "docid": "neg:1840112_17", "text": "Most regional anesthesia in breast surgeries is performed as postoperative pain management under general anesthesia, and not as the primary anesthesia. Regional anesthesia has very few cardiovascular or pulmonary side-effects, as compared with general anesthesia. Pectoral nerve block is a relatively new technique, with fewer complications than other regional anesthesia techniques. We performed Pecs I and Pecs II blocks simultaneously as the primary anesthesia under moderate sedation with dexmedetomidine for breast-conserving surgery in a 49-year-old female patient with invasive ductal carcinoma. The block was uneventful and showed no complications. Thus, Pecs block with sedation could be an alternative to general anesthesia for breast surgeries.", "title": "" }, { "docid": "neg:1840112_18", "text": "The probabilistic method comes up in various fields in mathematics. In these notes, we will give a brief introduction to graph theory and applications of the probabilistic method in proving bounds for Ramsey numbers and a theorem in graph cuts. This method is based on the following idea: in order to prove the existence of an object with some desired property, one defines a probability space on some larger class of objects, and then shows that an element of this space has the desired property with positive probability. The elements contained in this probability space may be of any kind. We will illustrate the probabilistic method by giving applications in graph theory.", "title": "" }, { "docid": "neg:1840112_19", "text": "Let G=(V,E) be a complete undirected graph, with node set $V=\\{v_1, \\ldots, v_n\\}$ and edge set $E$. The edges $(v_i, v_j) \\in E$ have nonnegative weights that satisfy the triangle inequality. Given a set of integers $K = \\{k_i\\}_{i=1}^{p}$ with $\\sum_{i=1}^{p} k_i \\leq |V|$, the minimum K-cut problem is to compute disjoint subsets with sizes $\\{k_i\\}_{i=1}^{p}$, minimizing the total weight of edges whose two ends are in different subsets. We demonstrate that for any fixed p it is possible to obtain in polynomial time an approximation of at most three times the optimal value. We also prove bounds on the ratio between the weights of maximum and minimum cuts.", "title": "" } ]