query_id (string, length 1-6) | query (string, length 2-185) | positive_passages (list, length 1-121) | negative_passages (list, length 15-100)
---|---|---|---|
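The rows below pair each query with its labeled passages, where every passage carries a `docid`, a `text`, and a (here empty) `title`. As a rough illustration of how such records can be consumed, the sketch below flattens them into (query, passage text, label) training pairs. It assumes the records are stored as JSON Lines in a file named `train.jsonl` and that the field names match the columns above; the file name and the helper function are hypothetical, not part of this preview.

```python
# Minimal sketch: flatten records with the schema above into training pairs.
# Assumption: one JSON record per line in "train.jsonl" (file name hypothetical).
import json
from typing import Iterator, Tuple


def iter_pairs(path: str) -> Iterator[Tuple[str, str, int]]:
    """Yield (query, passage_text, label): 1 for positive passages, 0 for negatives."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            query = record["query"]
            for passage in record.get("positive_passages", []):
                yield query, passage["text"], 1
            for passage in record.get("negative_passages", []):
                yield query, passage["text"], 0


if __name__ == "__main__":
    # Print a few pairs to confirm the fields are read as expected.
    for i, (query, text, label) in enumerate(iter_pairs("train.jsonl")):
        print(f"[{label}] {query[:60]} -> {text[:80]}")
        if i >= 4:
            break
```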
1840313 | Gamification and Mobile Marketing Effectiveness | [
{
"docid": "pos:1840313_0",
"text": "Social Mediator is a forum exploring the ways that HCI research and principles interact---or might interact---with practices in the social media world.<br /><b><i>Joe McCarthy, Editor</i></b>",
"title": ""
}
] | [
{
"docid": "neg:1840313_0",
"text": "BACKGROUND\nAttention-deficit/hyperactivity disorder (ADHD) is one of the most common developmental disorders experienced in childhood and can persist into adulthood. The disorder has early onset and is characterized by a combination of overactive, poorly modulated behavior with marked inattention. In the long term it can impair academic performance, vocational success and social-emotional development. Meditation is increasingly used for psychological conditions and could be used as a tool for attentional training in the ADHD population.\n\n\nOBJECTIVES\nTo assess the effectiveness of meditation therapies as a treatment for ADHD.\n\n\nSEARCH STRATEGY\nOur extensive search included: CENTRAL, MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, C2-SPECTR, dissertation abstracts, LILACS, Virtual Health Library (VHL) in BIREME, Complementary and Alternative Medicine specific databases, HSTAT, Informit, JST, Thai Psychiatric databases and ISI Proceedings, plus grey literature and trial registries from inception to January 2010.\n\n\nSELECTION CRITERIA\nRandomized controlled trials that investigated the efficacy of meditation therapy in children or adults diagnosed with ADHD.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo authors extracted data independently using a pre-designed data extraction form. We contacted study authors for additional information required. We analyzed data using mean difference (MD) to calculate the treatment effect. The results are presented in tables, figures and narrative form.\n\n\nMAIN RESULTS\nFour studies, including 83 participants, are included in this review. Two studies used mantra meditation while the other two used yoga compared with drugs, relaxation training, non-specific exercises and standard treatment control. Design limitations caused high risk of bias across the studies. Only one out of four studies provided data appropriate for analysis. For this study there was no statistically significant difference between the meditation therapy group and the drug therapy group on the teacher rating ADHD scale (MD -2.72, 95% CI -8.49 to 3.05, 15 patients). Likewise, there was no statistically significant difference between the meditation therapy group and the standard therapy group on the teacher rating ADHD scale (MD -0.52, 95% CI -5.88 to 4.84, 17 patients). There was also no statistically significant difference between the meditation therapy group and the standard therapy group in the distraction test (MD -8.34, 95% CI -107.05 to 90.37, 17 patients).\n\n\nAUTHORS' CONCLUSIONS\nAs a result of the limited number of included studies, the small sample sizes and the high risk of bias, we are unable to draw any conclusions regarding the effectiveness of meditation therapy for ADHD. The adverse effects of meditation have not been reported. More trials are needed.",
"title": ""
},
{
"docid": "neg:1840313_1",
"text": "Modulation recognition algorithms have recently received a great deal of attention in academia and industry. In addition to their application in the military field, these algorithms found civilian use in reconfigurable systems, such as cognitive radios. Most previously existing algorithms are focused on recognition of a single modulation. However, a multiple-input multiple-output two-way relaying channel (MIMO TWRC) with physical-layer network coding (PLNC) requires the recognition of the pair of sources modulations from the superposed constellation at the relay. In this paper, we propose an algorithm for recognition of sources modulations for MIMO TWRC with PLNC. The proposed algorithm is divided in two steps. The first step uses the higher order statistics based features in conjunction with genetic algorithm as a features selection method, while the second step employs AdaBoost as a classifier. Simulation results show the ability of the proposed algorithm to provide a good recognition performance at acceptable signal-to-noise values.",
"title": ""
},
{
"docid": "neg:1840313_2",
"text": "Fake news pose serious threat to our society nowadays, particularly due to its wide spread through social networks. While human fact checkers cannot handle such tremendous information online in real time, AI technology can be leveraged to automate fake news detection. The first step leading to a sophisticated fake news detection system is the stance detection between statement and body text. In this work, we analyze the dataset from Fake News Challenge (FNC1) and explore several neural stance detection models based on the ideas of natural language inference and machine comprehension. Experiment results show that all neural network models can outperform the hand-crafted feature based system. By improving Attentive Reader with a full attention mechanism between body text and headline and implementing bilateral multi-perspective mathcing models, we are able to further bring up the performance and reach metric score close to 87%.",
"title": ""
},
{
"docid": "neg:1840313_3",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Asteroids. From Observations to Models. D. Hestroffer, Paolo Tanga",
"title": ""
},
{
"docid": "neg:1840313_4",
"text": "UNLABELLED\nPrevious studies have shown that resistance training with restricted venous blood flow (Kaatsu) results in significant strength gains and muscle hypertrophy. However, few studies have examined the concurrent vascular responses following restrictive venous blood flow training protocols.\n\n\nPURPOSE\nThe purpose of this study was to examine the effects of 4 wk of handgrip exercise training, with and without venous restriction, on handgrip strength and brachial artery flow-mediated dilation (BAFMD).\n\n\nMETHODS\nTwelve participants (mean +/- SD: age = 22 +/- 1 yr, men = 5, women = 7) completed 4 wk of bilateral handgrip exercise training (duration = 20 min, intensity = 60% of the maximum voluntary contraction, cadence = 15 grips per minute, frequency = three sessions per week). During each session, venous blood flow was restricted in one arm (experimental (EXP) arm) using a pneumatic cuff placed 4 cm proximal to the antecubital fossa and inflated to 80 mm Hg for the duration of each exercise session. The EXP and the control (CON) arms were randomly selected. Handgrip strength was measured using a hydraulic hand dynamometer. Brachial diameters and blood velocity profiles were assessed, using Doppler ultrasonography, before and after 5 min of forearm occlusion (200 mm Hg) before and at the end of the 4-wk exercise.\n\n\nRESULTS\nAfter exercise training, handgrip strength increased 8.32% (P = 0.05) in the CON arm and 16.17% (P = 0.05) in the EXP arm. BAFMD increased 24.19% (P = 0.0001) in the CON arm and decreased 30.36% (P = 0.0001) in the EXP arm.\n\n\nCONCLUSIONS\nThe data indicate handgrip training combined with venous restriction results in superior strength gains but reduced BAFMD compared with the nonrestricted arm.",
"title": ""
},
{
"docid": "neg:1840313_5",
"text": "Name ambiguity stems from the fact that many people or objects share identical names in the real world. Such name ambiguity decreases the performance of document retrieval, Web search, information integration, and may cause confusion in other applications. Due to the same name spellings and lack of information, it is a nontrivial task to distinguish them accurately. In this article, we focus on investigating the problem in digital libraries to distinguish publications written by authors with identical names. We present an effective framework named GHOST (abbreviation for GrapHical framewOrk for name diSambiguaTion), to solve the problem systematically. We devise a novel similarity metric, and utilize only one type of attribute (i.e., coauthorship) in GHOST. Given the similarity matrix, intermediate results are grouped into clusters with a recently introduced powerful clustering algorithm called Affinity Propagation. In addition, as a complementary technique, user feedback can be used to enhance the performance. We evaluated the framework on the real DBLP and PubMed datasets, and the experimental results show that GHOST can achieve both high precision and recall.",
"title": ""
},
{
"docid": "neg:1840313_6",
"text": "The freshwater angelfishes (Pterophyllum) are South American cichlids that have become very popular among aquarists, yet scarce information on their culture and aquarium husbandry exists. We studied Pterophyllum scalare to analyze dietary effects on fecundity, growth, and survival of eggs and larvae during 135 days. Three diets were used: A) decapsulated cysts of Artemia, B) commercial dry fish food, and C) a mix diet of the rotifer Brachionus plicatilis and the cladoceran Daphnia magna. The initial larval density was 100 organisms in each 40 L aquarium. With diet A, larvae reached a maximum weight of 3.80 g, a total length of 6.3 cm, and a height of 5.8 cm; with diet B: 2.80 g, 4.81 cm, and 4.79 cm, and with diet C: 3.00 g, 5.15 cm, and 5.10 cm, respectively. Significant differences were observed between diet A, and diet B and C, but no significantly differences were observed between diets B and C. Fecundity varied from 234 to 1,082 eggs in 20 and 50 g females, respectively. Egg survival ranged from 87.4% up to 100%, and larvae survival (80 larvae/40 L aquarium) from 50% to 66.3% using diet B and A, respectively. Live food was better for growing fish than the commercial balanced food diet. Fecundity and survival are important factors in planning a good production of angelfish.",
"title": ""
},
{
"docid": "neg:1840313_7",
"text": "Over the last years, many papers have been published about how to use machine learning for classifying postings on microblogging platforms like Twitter, e.g., in order to assist users to reach tweets that interest them. Typically, the automatic classification results are then evaluated against a gold standard classification which consists of either (i) the hashtags of the tweets' authors, or (ii) manual annotations of independent human annotators. In this paper, we show that there are fundamental differences between these two kinds of gold standard classifications, i.e., human annotators are more likely to classify tweets like other human annotators than like the tweets' authors. Furthermore, we discuss how these differences may influence the evaluation of automatic classifications, like they may be achieved by Latent Dirichlet Allocation (LDA). We argue that researchers who conduct machine learning experiments for tweet classification should pay particular attention to the kind of gold standard they use. One may even argue that hashtags are not appropriate as a gold standard for tweet classification.",
"title": ""
},
{
"docid": "neg:1840313_8",
"text": "Creating short summaries of documents with respect to a query has applications in for example search engines, where it may help inform users of the most relevant results. Constructing such a summary automatically, with the potential expressiveness of a human-written summary, is a difficult problem yet to be fully solved. In this thesis, a neural network model for this task is presented. We adapt an existing dataset of news article summaries for the task and train a pointer-generator model using this dataset to summarize such articles. The generated summaries are then evaluated by measuring similarity to reference summaries. We observe that the generated summaries exhibit abstractive properties, but also that they have issues, such as rarely being truthful. However, we show that a neural network summarization model, similar to existing neural network models for abstractive summarization, can be constructed to make use of queries for more targeted summaries.",
"title": ""
},
{
"docid": "neg:1840313_9",
"text": "The definition and phenomenological features of 'burnout' and its eventual relationship with depression and other clinical conditions are reviewed. Work is an indispensable way to make a decent and meaningful way of living, but can also be a source of stress for a variety of reasons. Feelings of inadequate control over one's work, frustrated hopes and expectations and the feeling of losing of life's meaning, seem to be independent causes of burnout, a term that describes a condition of professional exhaustion. It is not synonymous with 'job stress', 'fatigue', 'alienation' or 'depression'. Burnout is more common than generally believed and may affect every aspect of the individual's functioning, have a deleterious effect on interpersonal and family relationships and lead to a negative attitude towards life in general. Empirical research suggests that burnout and depression are separate entities, although they may share several 'qualitative' characteristics, especially in the more severe forms of burnout, and in vulnerable individuals, low levels of satisfaction derived from their everyday work. These final issues need further clarification and should be the focus of future clinical research.",
"title": ""
},
{
"docid": "neg:1840313_10",
"text": "Obfuscation-based private web search (OB-PWS) solutions allow users to search for information in the Internet while concealing their interests. The basic privacy mechanism in OB-PWS is the automatic generation of dummy queries that are sent to the search engine along with users' real requests. These dummy queries prevent the accurate inference of search profiles and provide query deniability. In this paper we propose an abstract model and an associated analysis framework to systematically evaluate the privacy protection offered by OB-PWS systems. We analyze six existing OB-PWS solutions using our framework and uncover vulnerabilities in their designs. Based on these results, we elicit a set of features that must be taken into account when analyzing the security of OB-PWS designs to avoid falling into the same pitfalls as previous proposals.",
"title": ""
},
{
"docid": "neg:1840313_11",
"text": "This thesis explores the design and application of artificial immune systems (AISs), problem-solving systems inspired by the human and other immune systems. AISs to date have largely been modelled on the biological adaptive immune system and have taken little inspiration from the innate immune system. The first part of this thesis examines the biological innate immune system, which controls the adaptive immune system. The importance of the innate immune system suggests that AISs should also incorporate models of the innate immune system as well as the adaptive immune system. This thesis presents and discusses a number of design principles for AISs which are modelled on both innate and adaptive immunity. These novel design principles provided a structured framework for developing AISs which incorporate innate and adaptive immune systems in general. These design principles are used to build a software system which allows such AISs to be implemented and explored. AISs, as well as being inspired by the biological immune system, are also built to solve problems. In this thesis, using the software system and design principles we have developed, we implement several novel AISs and apply them to the problem of detecting attacks on computer systems. These AISs monitor programs running on a computer and detect whether the program is behaving abnormally or being attacked. The development of these AISs shows in more detail how AISs built on the design principles can be instantiated. In particular, we show how the use of AISs which incorporate both innate and adaptive immune system mechanisms can be used to reduce the number of false alerts and improve the performance of current approaches.",
"title": ""
},
{
"docid": "neg:1840313_12",
"text": "In psychology the Rubber Hand Illusion (RHI) is an experiment where participants get the feeling that a fake hand is becoming their own. Recently, new testing methods using an action based paradigm have induced stronger RHI. However, these experiments are facing limitations because they are difficult to implement and lack of rigorous experimental conditions. This paper proposes a low-cost open source robotic hand which is easy to manufacture and removes these limitations. This device reproduces fingers movement of the participants in real time. A glove containing sensors is worn by the participant and records fingers flexion. Then a microcontroller drives hobby servo-motors on the robotic hand to reproduce the corresponding fingers position. A connection between the robotic device and a computer can be established, enabling the experimenters to tune precisely the desired parameters using Matlab. Since this is the first time a robotic hand is developed for the RHI, a validation study has been conducted. This study confirms previous results found in the literature. This study also illustrates the fact that the robotic hand can be used to conduct innovative experiments in the RHI field. Understanding such RHI is important because it can provide guidelines for prosthetic design.",
"title": ""
},
{
"docid": "neg:1840313_13",
"text": "In 1988 Kennedy and Chua introduced the dynamical canonical nonlinear programming circuit (NPC) to solve in real time nonlinear programming problems where the objective function and the constraints are smooth (twice continuously differentiable) functions. In this paper, a generalized circuit is introduced (G-NPC), which is aimed at solving in real time a much wider class of nonsmooth nonlinear programming problems where the objective function and the constraints are assumed to satisfy only the weak condition of being regular functions. G-NPC, which derives from a natural extension of NPC, has a neural-like architecture and also features the presence of constraint neurons modeled by ideal diodes with infinite slope in the conducting region. By using the Clarke's generalized gradient of the involved functions, G-NPC is shown to obey a gradient system of differential inclusions, and its dynamical behavior and optimization capabilities, both for convex and nonconvex problems, are rigorously analyzed in the framework of nonsmooth analysis and the theory of differential inclusions. In the special important case of linear and quadratic programming problems, salient dynamical features of G-NPC, namely the presence of sliding modes , trajectory convergence in finite time, and the ability to compute the exact optimal solution of the problem being modeled, are uncovered and explained in the developed analytical framework.",
"title": ""
},
{
"docid": "neg:1840313_14",
"text": "Document clustering has been recognized as a central problem in text data management. Such a problem becomes particularly challenging when document contents are characterized by subtopical discussions that are not necessarily relevant to each other. Existing methods for document clustering have traditionally assumed that a document is an indivisible unit for text representation and similarity computation, which may not be appropriate to handle documents with multiple topics. In this paper, we address the problem of multi-topic document clustering by leveraging the natural composition of documents in text segments that are coherent with respect to the underlying subtopics. We propose a novel document clustering framework that is designed to induce a document organization from the identification of cohesive groups of segment-based portions of the original documents. We empirically give evidence of the significance of our segment-based approach on large collections of multi-topic documents, and we compare it to conventional methods for document clustering.",
"title": ""
},
{
"docid": "neg:1840313_15",
"text": "Acoustic structures of sound in Gunnison's prairie dog alarm calls are described, showing how these acoustic structures may encode information about three different predator species (red-tailed hawk-Buteo jamaicensis; domestic dog-Canis familaris; and coyote-Canis latrans). By dividing each alarm call into 25 equal-sized partitions and using resonant frequencies within each partition, commonly occurring acoustic structures were identified as components of alarm calls for the three predators. Although most of the acoustic structures appeared in alarm calls elicited by all three predator species, the frequency of occurrence of these acoustic structures varied among the alarm calls for the different predators, suggesting that these structures encode identifying information for each of the predators. A classification analysis of alarm calls elicited by each of the three predators showed that acoustic structures could correctly classify 67% of the calls elicited by domestic dogs, 73% of the calls elicited by coyotes, and 99% of the calls elicited by red-tailed hawks. The different distributions of acoustic structures associated with alarm calls for the three predator species suggest a duality of function, one of the design elements of language listed by Hockett [in Animal Sounds and Communication, edited by W. E. Lanyon and W. N. Tavolga (American Institute of Biological Sciences, Washington, DC, 1960), pp. 392-430].",
"title": ""
},
{
"docid": "neg:1840313_16",
"text": "BACKGROUND\nAlthough fine-needle aspiration (FNA) is a safe and accurate diagnostic procedure for assessing thyroid nodules, it has limitations in diagnosing follicular neoplasms due to its relatively high false-positive rate. The purpose of the present study was to evaluate the diagnostic role of core-needle biopsy (CNB) for thyroid nodules with follicular neoplasm (FN) in comparison with FNA.\n\n\nMETHODS\nA series of 107 patients (24 men, 83 women; mean age, 47.4 years) from 231 FNAs and 107 patients (29 men, 78 women; mean age, 46.3 years) from 186 CNBs with FN readings, all of whom underwent surgery, from October 2008 to December 2013 were retrospectively analyzed. The false-positive rate, unnecessary surgery rate, and malignancy rate for the FNA and CNB patients according to the final diagnosis following surgery were evaluated.\n\n\nRESULTS\nThe CNB showed a significantly lower false-positive and unnecessary surgery rate than the FNA (4.7% versus 30.8%, 3.7% versus 26.2%, p < 0.001, respectively). In the FNA group, 33 patients (30.8%) had non-neoplasms, including nodular hyperplasia (n = 32) and chronic lymphocytic thyroiditis (n = 1). In the CNB group, 5 patients (4.7%) had non-neoplasms, all of which were nodular hyperplasia. Moreover, the CNB group showed a significantly higher malignancy rate than FNA (57.9% versus 28%, p < 0.001).\n\n\nCONCLUSIONS\nCNB showed a significantly lower false-positive rate and a higher malignancy rate than FNA in diagnosing FN. Therefore, CNB could minimize unnecessary surgery and provide diagnostic confidence when managing patients with FN to perform surgery.",
"title": ""
},
{
"docid": "neg:1840313_17",
"text": "The te rm \"reactive system\" was introduced by David Harel and Amir Pnueli [HP85], and is now commonly accepted to designate permanent ly operating systems, and to distinguish them from \"trans]ormational systems\" i.e, usual programs whose role is to terminate with a result, computed from an initial da ta (e.g., a compiler). In synchronous programming, we understand it in a more restrictive way, distinguishing between \"interactive\" and \"reactive\" systems: Interactive systems permanent ly communicate with their environment, but at their own speed. They are able to synchronize with their environment, i.e., making it wait. Concurrent processes considered in operat ing systems or in data-base management , are generally interactive. Reactive systems, in our meaning, have to react to an environment which cannot wait. Typical examples appear when the environment is a physical process. The specific features of reactive systems have been pointed out many times [Ha193,BCG88,Ber89]:",
"title": ""
},
{
"docid": "neg:1840313_18",
"text": "Photobacterium damselae subsp. piscicida is the causative agent of pasteurellosis in wild and farmed marine fish worldwide. Although serologically homogeneous, recent molecular advances have led to the discovery of distinct genetic clades, depending on geographical origin. Further details of the strategies for host colonisation have arisen including information on the role of capsule, susceptibility to oxidative stress, confirmation of intracellular survival in host epithelial cells, and induced apoptosis of host macrophages. This improved understanding has given rise to new ideas and advances in vaccine technologies, which are reviewed in this paper.",
"title": ""
}
] |
1840314 | Deep Learning Strong Parts for Pedestrian Detection | [
{
"docid": "pos:1840314_0",
"text": "We propose a simple yet effective detector for pedestrian detection. The basic idea is to incorporate common sense and everyday knowledge into the design of simple and computationally efficient features. As pedestrians usually appear up-right in image or video data, the problem of pedestrian detection is considerably simpler than general purpose people detection. We therefore employ a statistical model of the up-right human body where the head, the upper body, and the lower body are treated as three distinct components. Our main contribution is to systematically design a pool of rectangular templates that are tailored to this shape model. As we incorporate different kinds of low-level measurements, the resulting multi-modal & multi-channel Haar-like features represent characteristic differences between parts of the human body yet are robust against variations in clothing or environmental settings. Our approach avoids exhaustive searches over all possible configurations of rectangle features and neither relies on random sampling. It thus marks a middle ground among recently published techniques and yields efficient low-dimensional yet highly discriminative features. Experimental results on the INRIA and Caltech pedestrian datasets show that our detector reaches state-of-the-art performance at low computational costs and that our features are robust against occlusions.",
"title": ""
},
{
"docid": "pos:1840314_1",
"text": "The idea behind the experiments in section 4.1 of the main paper is to demonstrate that, within a single framework, varying the features can replicate the jump in detection performance over a ten-year span (2004 2014), i.e. the jump in performance between VJ and the current state-of-the-art. See figure 1 for results on INRIA and Caltech-USA of the following methods (all based on SquaresChnFtrs, described in section 4 of the paper):",
"title": ""
}
] | [
{
"docid": "neg:1840314_0",
"text": "Nostalgia fulfills pivotal functions for individuals, but lacks an empirically derived and comprehensive definition. We examined lay conceptions of nostalgia using a prototype approach. In Study 1, participants generated open-ended features of nostalgia, which were coded into categories. In Study 2, participants rated the centrality of these categories, which were subsequently classified as central (e.g., memories, relationships, happiness) or peripheral (e.g., daydreaming, regret, loneliness). Central (as compared with peripheral) features were more often recalled and falsely recognized (Study 3), were classified more quickly (Study 4), were judged to reflect more nostalgia in a vignette (Study 5), better characterized participants' own nostalgic (vs. ordinary) experiences (Study 6), and prompted higher levels of actual nostalgia and its intrapersonal benefits when used to trigger a personal memory, regardless of age (Study 7). These findings highlight that lay people view nostalgia as a self-relevant and social blended emotional and cognitive state, featuring a mixture of happiness and loss. The findings also aid understanding of nostalgia's functions and identify new methods for future research.",
"title": ""
},
{
"docid": "neg:1840314_1",
"text": "Product reviews are now widely used by individuals and organizations for decision making (Litvin et al., 2008; Jansen, 2010). And because of the profits at stake, people have been known to try to game the system by writing fake reviews to promote target products. As a result, the task of deceptive review detection has been gaining increasing attention. In this paper, we propose a generative LDA-based topic modeling approach for fake review detection. Our model can aptly detect the subtle differences between deceptive reviews and truthful ones and achieves about 95% accuracy on review spam datasets, outperforming existing baselines by a large margin.",
"title": ""
},
{
"docid": "neg:1840314_2",
"text": "We introduce a light-weight, power efficient, and general purpose convolutional neural network, ESPNetv2 , for modeling visual and sequential data. Our network uses group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters. The performance of our network is evaluated on three different tasks: (1) object classification, (2) semantic segmentation, and (3) language modeling. Experiments on these tasks, including image classification on the ImageNet and language modeling on the PenTree bank dataset, demonstrate the superior performance of our method over the state-ofthe-art methods. Our network has better generalization properties than ShuffleNetv2 when tested on the MSCOCO multi-object classification task and the Cityscapes urban scene semantic segmentation task. Our experiments show that ESPNetv2 is much more power efficient than existing state-of-the-art efficient methods including ShuffleNets and MobileNets. Our code is open-source and available at https://github.com/sacmehta/ESPNetv2.",
"title": ""
},
{
"docid": "neg:1840314_3",
"text": "Computer generated battleeeld agents need to be able to explain the rationales for their actions. Such explanations make it easier to validate agent behavior, and can enhance the eeectiveness of the agents as training devices. This paper describes an explanation capability called Debrief that enables agents implemented in Soar to describe and justify their decisions. Debrief determines the motivation for decisions by recalling the context in which decisions were made, and determining what factors were critical to those decisions. In the process Debrief learns to recognize similar situations where the same decision would be made for the same reasons. Debrief currently being used by the TacAir-Soar tactical air agent to explain its actions , and is being evaluated for incorporation into other reactive planning agents.",
"title": ""
},
{
"docid": "neg:1840314_4",
"text": "In his famous thought experiments on synthetic vehicles, Valentino Braitenberg stipulated that simple stimulus-response reactions in an organism could evoke the appearance of complex behavior, which, to the unsuspecting human observer, may even appear to be driven by emotions such as fear, aggression, and even love (Braitenberg, Vehikel. Experimente mit künstlichen Wesen, Lit Verlag, 2004). In fact, humans appear to have a strong propensity to anthropomorphize, driven by our inherent desire for predictability that will quickly lead us to discern patterns, cause-and-effect relationships, and yes, emotions, in animated entities, be they natural or artificial. But might there be reasons, that we should intentionally “implement” emotions into artificial entities, such as robots? How would we proceed in creating robot emotions? And what, if any, are the ethical implications of creating “emotional” robots? The following article aims to shed some light on these questions with a multi-disciplinary review of recent empirical investigations into the various facets of emotions in robot psychology.",
"title": ""
},
{
"docid": "neg:1840314_5",
"text": "Healthcare scientific applications, such as body area network, require of deploying hundreds of interconnected sensors to monitor the health status of a host. One of the biggest challenges is the streaming data collected by all those sensors, which needs to be processed in real time. Follow-up data analysis would normally involve moving the collected big data to a cloud data center for status reporting and record tracking purpose. Therefore, an efficient cloud platform with very elastic scaling capacity is needed to support such kind of real time streaming data applications. The current cloud platform either lacks of such a module to process streaming data, or scales in regard to coarse-grained compute nodes. In this paper, we propose a task-level adaptive MapReduce framework. This framework extends the generic MapReduce architecture by designing each Map and Reduce task as a consistent running loop daemon. The beauty of this new framework is the scaling capability being designed at the Map and Task level, rather than being scaled from the compute-node level. This strategy is capable of not only scaling up and down in real time, but also leading to effective use of compute resources in cloud data center. As a first step towards implementing this framework in real cloud, we developed a simulator that captures workload strength, and provisions the amount of Map and Reduce tasks just in need and in real time. To further enhance the framework, we applied two streaming data workload prediction methods, smoothing and Kalman filter, to estimate the unknown workload characteristics. We see 63.1% performance improvement by using the Kalman filter method to predict the workload. We also use real streaming data workload trace to test the framework. Experimental results show that this framework schedules the Map and Reduce tasks very efficiently, as the streaming data changes its arrival rate. © 2014 Elsevier B.V. All rights reserved. ∗ Corresponding author at: Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. Tel.: +1",
"title": ""
},
{
"docid": "neg:1840314_6",
"text": "Big Data is a new term used to identify datasets that we cannot manage with current methodologies or data mining software tools due to their large size and complexity. Big Data mining is the capability of extracting useful information from these large datasets or streams of data. New mining techniques are necessary due to the volume, variability, and velocity, of such data. MOA is a software framework with classification, regression, and frequent pattern methods, and the new APACHE SAMOA is a distributed streaming software for mining data streams.",
"title": ""
},
{
"docid": "neg:1840314_7",
"text": "Software defect prediction is one of the most active research areas in software engineering. We can build a prediction model with defect data collected from a software project and predict defects in the same project, i.e. within-project defect prediction (WPDP). Researchers also proposed cross-project defect prediction (CPDP) to predict defects for new projects lacking in defect data by using prediction models built by other projects. In recent studies, CPDP is proved to be feasible. However, CPDP requires projects that have the same metric set, meaning the metric sets should be identical between projects. As a result, current techniques for CPDP are difficult to apply across projects with heterogeneous metric sets. To address the limitation, we propose heterogeneous defect prediction (HDP) to predict defects across projects with heterogeneous metric sets. Our HDP approach conducts metric selection and metric matching to build a prediction model between projects with heterogeneous metric sets. Our empirical study on 28 subjects shows that about 68% of predictions using our approach outperform or are comparable to WPDP with statistical significance.",
"title": ""
},
{
"docid": "neg:1840314_8",
"text": "We develop a computational model for binocular stereopsis, attempting to explain the process by which the information detailing the 3-D geometry of object surfaces is encoded in a pair of stereo images. We design our model within a Bayesian framework, making explicit all of our assumptions about the nature of image coding and the structure of the world. We start by deriving our model for image formation, introducing a definition of half-occluded regions and deriving simple equations relating these regions to the disparity function. We show that the disparity function alone contains enough information to determine the half-occluded regions. We use these relations to derive a model for image formation in which the half-occluded regions are explicitly represented and computed. Next, we present our prior model in a series of three stages, or “worlds,” where each world considers an additional complication to the prior. We eventually argue that the prior model must be constructed from all of the local quantities in the scene geometry-i.e., depth, surface orientation, object boundaries, and surface creases. In addition, we present a new dynamic programming strategy for estimating these quantities. Throughout the article, we provide motivation for the development of our model by psychophysical examinations of the human visual system.",
"title": ""
},
{
"docid": "neg:1840314_9",
"text": "We propose a novel approach for content based color image classification using Support Vector Machine (SVM). Traditional classification approaches deal poorly on content based image classification tasks being one of the reasons of high dimensionality of the feature space. In this paper, color image classification is done on features extracted from histograms of color components. The benefit of using color image histograms are better efficiency, and insensitivity to small changes in camera view-point i.e. translation and rotation. As a case study for validation purpose, experimental trials were done on a database of about 500 images divided into four different classes has been reported and compared on histogram features for RGB, CMYK, Lab, YUV, YCBCR, HSV, HVC and YIQ color spaces. Results based on the proposed approach are found encouraging in terms of color image classification accuracy.",
"title": ""
},
{
"docid": "neg:1840314_10",
"text": "A method is developed that processes Global Navigation Satellite System (GNSS) beat carrier phase measurements from a single moving antenna in order to determine whether the GNSS signals are being spoofed. This technique allows a specially equipped GNSS receiver to detect sophisticated spoofing that cannot be detected using receiver autonomous integrity monitoring techniques. It works for both encrypted military signals and for unencrypted civilian signals. It does not require changes to the signal structure of unencrypted civilian GNSS signals. The method uses a short segment of beat carrier-phase time histories that are collected while the receiver's single antenna is undergoing a known, highfrequency motion profile, typically one pre-programmed into an antenna articulation system. The antenna also can be moving in an unknown way at lower frequencies, as might be the case if it were mounted on a ground vehicle, a ship, an airplane, or a spacecraft. The spoofing detection algorithm correlates high-pass-filtered versions of the known motion component with high-pass-filtered versions of the carrier phase variations. True signals produce a specific correlation pattern, and spoofed signals produce a recognizably different correlation pattern if the spoofer transmits its false signals from a single antenna. The most pronounced difference is that non-spoofed signals display variations between the beat carrier phase responses of multiple signals, but all signals' responses are identical in the spoofed case. These differing correlation characteristics are used to develop a hypothesis test in order to detect a spoofing attack or the lack thereof. For moving-base receivers, there is no need for prior knowledge of the vehicle's attitude. Instead, the detection calculations also provide a rough attitude measurement. Several versions of this spoofing detection system have been designed and tested. Some have been tested only with truth-model data, but one has been tested with actual live-signal data from the Global Positioning System (GPS) C/A code on the L1 frequency. The livedata tests correctly identified spoofing attacks in the 4 cases out of 8 trials that had actual attacks. These detections used worst-case false-alarm probabilities of 10 , and their worst-case probabilities of missed detection were no greater than 1.6x10. The ranges of antenna motion used to detect spoofing in these trials were between 4 and 6 cm, i.e., on the order of a quarter-cycle of the GPS L1 carrier wavelength.",
"title": ""
},
{
"docid": "neg:1840314_11",
"text": "This paper describes a new prototype system for detecting the demeanor of patients in emergency situations using the Intel RealSense camera system [1]. It describes how machine learning, a support vector machine (SVM) and the RealSense facial detection system can be used to track patient demeanour for pain monitoring. In a lab setting, the application has been trained to detect four different intensities of pain and provide demeanour information about the patient's eyes, mouth, and agitation state. Its utility as a basis for evaluating the condition of patients in situations using video, machine learning and 5G technology is discussed.",
"title": ""
},
{
"docid": "neg:1840314_12",
"text": "We investigated the normal and parallel ground reaction forces during downhill and uphill running. Our rationale was that these force data would aid in the understanding of hill running injuries and energetics. Based on a simple spring-mass model, we hypothesized that the normal force peaks, both impact and active, would increase during downhill running and decrease during uphill running. We anticipated that the parallel braking force peaks would increase during downhill running and the parallel propulsive force peaks would increase during uphill running. But, we could not predict the magnitude of these changes. Five male and five female subjects ran at 3m/s on a force treadmill mounted on the level and on 3 degrees, 6 degrees, and 9 degrees wedges. During downhill running, normal impact force peaks and parallel braking force peaks were larger compared to the level. At -9 degrees, the normal impact force peaks increased by 54%, and the parallel braking force peaks increased by 73%. During uphill running, normal impact force peaks were smaller and parallel propulsive force peaks were larger compared to the level. At +9 degrees, normal impact force peaks were absent, and parallel propulsive peaks increased by 75%. Neither downhill nor uphill running affected normal active force peaks. Combined with previous biomechanics studies, our normal impact force data suggest that downhill running substantially increases the probability of overuse running injury. Our parallel force data provide insight into past energetic studies, which show that the metabolic cost increases during downhill running at steep angles.",
"title": ""
},
{
"docid": "neg:1840314_13",
"text": "Deep Neural Networks (DNNs) have been demonstrated to perform exceptionally well on most recognition tasks such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. This phenomenon has recently attracted a lot of attention but it has not been extensively studied on multiple, large-scale datasets and complex tasks such as semantic segmentation which often require more specialised networks with additional components such as CRFs, dilated convolutions, skip-connections and multiscale processing. In this paper, we present what to our knowledge is the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets. We analyse the effect of different network architectures, model capacity and multiscale processing, and show that many observations made on the task of classification do not always transfer to this more complex task. Furthermore, we show how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses. Our observations will aid future efforts in understanding and defending against adversarial examples. Moreover, in the shorter term, we show which segmentation models should currently be preferred in safety-critical applications due to their inherent robustness.",
"title": ""
},
{
"docid": "neg:1840314_14",
"text": "This paper presents a static race detection analysis for multithreaded Java programs. Our analysis is based on a formal type system that is capable of capturing many common synchronization patterns. These patterns include classes with internal synchronization, classes thatrequire client-side synchronization, and thread-local classes. Experience checking over 40,000 lines of Java code with the type system demonstrates that it is an effective approach for eliminating races conditions. On large examples, fewer than 20 additional type annotations per 1000 lines of code were required by the type checker, and we found a number of races in the standard Java libraries and other test programs.",
"title": ""
},
{
"docid": "neg:1840314_15",
"text": "Event-related desynchronization/synchronization patterns during right/left motor imagery (MI) are effective features for an electroencephalogram-based brain-computer interface (BCI). As MI tasks are subject-specific, selection of subject-specific discriminative frequency components play a vital role in distinguishing these patterns. This paper proposes a new discriminative filter bank (FB) common spatial pattern algorithm to extract subject-specific FB for MI classification. The proposed method enhances the classification accuracy in BCI competition III dataset IVa and competition IV dataset IIb. Compared to the performance offered by the existing FB-based method, the proposed algorithm offers error rate reductions of 17.42% and 8.9% for BCI competition datasets III and IV, respectively.",
"title": ""
},
{
"docid": "neg:1840314_16",
"text": "The ability to identify authors of computer programs based on their coding style is a direct threat to the privacy and anonymity of programmers. While recent work found that source code can be attributed to authors with high accuracy, attribution of executable binaries appears to be much more difficult. Many distinguishing features present in source code, e.g. variable names, are removed in the compilation process, and compiler optimization may alter the structure of a program, further obscuring features that are known to be useful in determining authorship. We examine programmer de-anonymization from the standpoint of machine learning, using a novel set of features that include ones obtained by decompiling the executable binary to source code. We adapt a powerful set of techniques from the domain of source code authorship attribution along with stylistic representations embedded in assembly, resulting in successful deanonymization of a large set of programmers. We evaluate our approach on data from the Google Code Jam, obtaining attribution accuracy of up to 96% with 100 and 83% with 600 candidate programmers. We present an executable binary authorship attribution approach, for the first time, that is robust to basic obfuscations, a range of compiler optimization settings, and binaries that have been stripped of their symbol tables. We perform programmer de-anonymization using both obfuscated binaries, and real-world code found “in the wild” in single-author GitHub repositories and the recently leaked Nulled.IO hacker forum. We show that programmers who would like to remain anonymous need to take extreme countermeasures to protect their privacy.",
"title": ""
},
{
"docid": "neg:1840314_17",
"text": "A human stress monitoring patch integrates three sensors of skin temperature, skin conductance, and pulsewave in the size of stamp (25 mm × 15 mm × 72 μm) in order to enhance wearing comfort with small skin contact area and high flexibility. The skin contact area is minimized through the invention of an integrated multi-layer structure and the associated microfabrication process; thus being reduced to 1/125 of that of the conventional single-layer multiple sensors. The patch flexibility is increased mainly by the development of flexible pulsewave sensor, made of a flexible piezoelectric membrane supported by a perforated polyimide membrane. In the human physiological range, the fabricated stress patch measures skin temperature with the sensitivity of 0.31 Ω/°C, skin conductance with the sensitivity of 0.28 μV/0.02 μS, and pulse wave with the response time of 70 msec. The skin-attachable stress patch, capable to detect multimodal bio-signals, shows potential for application to wearable emotion monitoring.",
"title": ""
},
{
"docid": "neg:1840314_18",
"text": "PURPOSE\nTo review studies of healing touch and its implications for practice and research.\n\n\nDESIGN\nA review of the literature from published works, abstracts from conference proceedings, theses, and dissertations was conducted to synthesize information on healing touch. Works available until June 2003 were referenced.\n\n\nMETHODS\nThe studies were categorized by target of interventions and outcomes were evaluated.\n\n\nFINDINGS AND CONCLUSIONS\nOver 30 studies have been conducted with healing touch as the independent variable. Although no generalizable results were found, a foundation exists for further research to test its benefits.",
"title": ""
}
] |
1840315 | Query Rewriting for Horn-SHIQ Plus Rules | [
{
"docid": "pos:1840315_0",
"text": "We present the OWL API, a high level Application Programming Interface (API) for working with OWL ontologies. The OWL API is closely aligned with the OWL 2 structural specification. It supports parsing and rendering in the syntaxes defined in the W3C specification (Functional Syntax, RDF/XML, OWL/XML and the Manchester OWL Syntax); manipulation of ontological structures; and the use of reasoning engines. The reference implementation of the OWL API, written in Java, includes validators for the various OWL 2 profiles OWL 2 QL, OWL 2 EL and OWL 2 RL. The OWL API has widespread usage in a variety of tools and applications.",
"title": ""
},
{
"docid": "pos:1840315_1",
"text": "Towards the integration of rules and ontologies in the Semantic Web, we propose a combination of logic programming under the answer set semantics with the description logics SHIF(D) and SHOIN (D), which underly the Web ontology languages OWL Lite and OWL DL, respectively. This combination allows for building rules on top of ontologies but also, to a limited extent, building ontologies on top of rules. We introduce description logic programs (dl-programs), which consist of a description logic knowledge base L and a finite set of description logic rules (dl-rules) P . Such rules are similar to usual rules in logic programs with negation as failure, but may also contain queries to L, possibly default-negated, in their bodies. We define Herbrand models for dl-programs, and show that satisfiable positive dl-programs have a unique least Herbrand model. More generally, consistent stratified dl-programs can be associated with a unique minimal Herbrand model that is characterized through iterative least Herbrand models. We then generalize the (unique) minimal Herbrand model semantics for positive and stratified dl-programs to a strong answer set semantics for all dl-programs, which is based on a reduction to the least model semantics of positive dl-programs. We also define a weak answer set semantics based on a reduction to the answer sets of ordinary logic programs. Strong answer sets are weak answer sets, and both properly generalize answer sets of ordinary normal logic programs. We then give fixpoint characterizations for the (unique) minimal Herbrand model semantics of positive and stratified dl-programs, and show how to compute these models by finite fixpoint iterations. Furthermore, we give a precise picture of the complexity of deciding strong and weak answer set existence for a dl-program. 1Institut für Informationssysteme, Technische Universität Wien, Favoritenstraße 9-11, A-1040 Vienna, Austria; e-mail: {eiter, lukasiewicz, roman, tompits}@kr.tuwien.ac.at. 2Dipartimento di Informatica e Sistemistica, Università di Roma “La Sapienza”, Via Salaria 113, I-00198 Rome, Italy; e-mail: lukasiewicz@dis.uniroma1.it. Acknowledgements: This work has been partially supported by the Austrian Science Fund project Z29N04 and a Marie Curie Individual Fellowship of the European Community programme “Human Potential” under contract number HPMF-CT-2001-001286 (disclaimer: The authors are solely responsible for information communicated and the European Commission is not responsible for any views or results expressed). We would like to thank Ian Horrocks and Ulrike Sattler for providing valuable information on complexityrelated issues during the preparation of this paper. Copyright c © 2004 by the authors INFSYS RR 1843-03-13 I",
"title": ""
}
] | [
{
"docid": "neg:1840315_0",
"text": "With the increasing user demand for elastic provisioning of resources coupled with ubiquitous and on-demand access to data, cloud computing has been recognized as an emerging technology to meet such dynamic user demands. In addition, with the introduction and rising use of mobile devices, the Internet of Things (IoT) has recently received considerable attention since the IoT has brought physical devices and connected them to the Internet, enabling each device to share data with surrounding devices and virtualized technologies in real-time. Consequently, the exploding data usage requires a new, innovative computing platform that can provide robust real-time data analytics and resource provisioning to clients. As a result, fog computing has recently been introduced to provide computation, storage and networking services between the end-users and traditional cloud computing data centers. This paper proposes a policy-based management of resources in fog computing, expanding the current fog computing platform to support secure collaboration and interoperability between different user-requested resources in fog computing.",
"title": ""
},
{
"docid": "neg:1840315_1",
"text": "An enhanced automated material handling system (AMHS) that uses a local FOUP buffer at each tool is presented as a method of enabling lot size reduction and parallel metrology sampling in the photolithography (litho) bay. The local FOUP buffers can be integrated with current OHT AMHS systems in existing fabs with little or no change to the AMHS or process equipment. The local buffers enhance the effectiveness of the OHT by eliminating intermediate moves to stockers, increasing the move rate capacity by 15-20%, and decreasing the loadport exchange time to 30 seconds. These enhancements can enable the AMHS to achieve the high move rates compatible with lot size reduction down to 12-15 wafers per FOUP. The implementation of such a system in a photolithography bay could result in a 60-74% reduction in metrology delay time, which is the time between wafer exposure at a litho tool and collection of metrology and inspection data.",
"title": ""
},
{
"docid": "neg:1840315_2",
"text": "Work on the semantics of questions has argued that the relation between a question and its answer(s) can be cast in terms of logical entailment. In this paper, we demonstrate how computational systems designed to recognize textual entailment can be used to enhance the accuracy of current open-domain automatic question answering (Q/A) systems. In our experiments, we show that when textual entailment information is used to either filter or rank answers returned by a Q/A system, accuracy can be increased by as much as 20% overall.",
"title": ""
},
{
"docid": "neg:1840315_3",
"text": "Coreless substrates have been used in more and more advanced package designs for their benefits in electrical performance and reduction in thickness. However, coreless substrate causes severe package warpage due to the lack of a rigid and low CTE core. In this paper, both experimental measured warpage data and model simulation data are presented and illustrate that asymmetric designs in substrate thickness direction are capable of improving package warpage when compared to the traditional symmetric design. A few asymmetric design options are proposed, including Cu layer thickness asymmetric design, dielectric layer thickness asymmetric design and dielectric material property asymmetric design. These design options are then studied in depth by simulation to understand their mechanism and quantify their effectiveness for warpage improvement. From the results, it is found that the dielectric material property asymmetric design is the most effective option to improve package warpage, especially when using a lower CTE dielectric in the bottom layers of the substrate and a high CTE dielectric in top layers. Cu layer thickness asymmetric design is another effective way for warpage reduction. The bottom Cu layers should be thinner than the top Cu layers. It is also found that the dielectric layer thickness asymmetric design is only effective for high layer count substrate. It is not effective for low layer count substrate. In this approach, the bottom dielectric layers should be thicker than the top dielectric layers. Furthermore, the results show the asymmetric substrate designs are usually more effective for warpage improvement at high temperature than at room temperature. They are also more effective for a high layer count substrate than a low layer count substrate.",
"title": ""
},
{
"docid": "neg:1840315_4",
"text": "Being a corner stone of the New testament and Christian religion, the evangelical narration about Jesus Christ crucifixion had been drawing attention of many millions people, both Christians and representatives of other religions and convictions, almost for two thousand years.If in the last centuries the crucifixion was considered mainly from theological and historical positions, the XX century was marked by surge of medical and biological researches devoted to investigation of thanatogenesis of the crucifixion. However the careful analysis of the suggested concepts of death at the crucifixion shows that not all of them are well-founded. Moreover, some authors sometimes do not consider available historic facts.Not only the analysis of the original Greek text of the Gospel is absent in the published works but authors ignore the Gospel itself at times.",
"title": ""
},
{
"docid": "neg:1840315_5",
"text": "The problem of auto-focusing has been studied for long, but most techniques found in literature do not always work well for low-contrast images. In this paper, a robust focus measure based on the energy of the image is proposed. It performs equally well on ordinary and low-contrast images. In addition, it is computationally efficient.",
"title": ""
},
{
"docid": "neg:1840315_6",
"text": "We provide data on the extent to which computer-related audit procedures are used and whether two factors, control risk assessment and audit firm size, influence computer-related audit procedures use. We used a field-based questionnaire to collect data from 181 auditors representing Big 4, national, regional, and local firms. Results indicate that computer-related audit procedures are generally used when obtaining an understanding of the client system and business processes and testing computer controls. Furthermore, 42.9 percent of participants indicate that they relied on internal controls; however, this percentage increases significantly for auditors at Big 4 firms. Finally, our results raise questions for future research regarding computer-related audit procedure use.",
"title": ""
},
{
"docid": "neg:1840315_7",
"text": "In this paper, we review recent emerging theoretical and technological advances of artificial intelligence (AI) in the big data settings. We conclude that integrating data-driven machine learning with human knowledge (common priors or implicit intuitions) can effectively lead to explainable, robust, and general AI, as follows: from shallow computation to deep neural reasoning; from merely data-driven model to data-driven with structured logic rules models; from task-oriented (domain-specific) intelligence (adherence to explicit instructions) to artificial general intelligence in a general context (the capability to learn from experience). Motivated by such endeavors, the next generation of AI, namely AI 2.0, is positioned to reinvent computing itself, to transform big data into structured knowledge, and to enable better decision-making for our society.",
"title": ""
},
{
"docid": "neg:1840315_8",
"text": "Sequential models achieve state-of-the-art results in audio, visual and textual domains with respect to both estimating the data distribution and generating high-quality samples. Efficient sampling for this class of models has however remained an elusive problem. With a focus on text-to-speech synthesis, we describe a set of general techniques for reducing sampling time while maintaining high output quality. We first describe a single-layer recurrent neural network, the WaveRNN, with a dual softmax layer that matches the quality of the state-of-the-art WaveNet model. The compact form of the network makes it possible to generate 24 kHz 16-bit audio 4× faster than real time on a GPU. Second, we apply a weight pruning technique to reduce the number of weights in the WaveRNN. We find that, for a constant number of parameters, large sparse networks perform better than small dense networks and this relationship holds for sparsity levels beyond 96%. The small number of weights in a Sparse WaveRNN makes it possible to sample high-fidelity audio on a mobile CPU in real time. Finally, we propose a new generation scheme based on subscaling that folds a long sequence into a batch of shorter sequences and allows one to generate multiple samples at once. The Subscale WaveRNN produces 16 samples per step without loss of quality and offers an orthogonal method for increasing sampling efficiency.",
"title": ""
},
{
"docid": "neg:1840315_9",
"text": "Educational spaces play an important role in enhancing learning productivity levels of society people as the most important places to human train. Considering the cost, time and energy spending on these spaces, trying to design efficient and optimized environment is a necessity. Achieving efficient environments requires changing environmental criteria so that they can have a positive impact on the activities and learning in users. Therefore, creating suitable conditions for promoting learning in users requires full utilization of the comprehensive knowledge of architecture and the design of the physical environment with respect to the environmental, social and aesthetic dimensions; Which will naturally increase the usefulness of people in space and make optimal use of the expenses spent on building schools and the time spent on education and training.The main aim of this study was to find physical variables affecting on increasing productivity in learning environments. This study is quantitative-qualitative and was done in two research methods: a) survey research methods (survey) b) correlation method. The samples were teachers and students in secondary schools’ in Zahedan city, the sample size was 310 people. Variables were extracted using the literature review and deep interviews with professors and experts. The questionnaire was obtained using variables and it is used to collect the views of teachers and students. Cronbach’s alpha coefficient was 0.89 which indicates that the information gathering tool is acceptable. The findings shows that there are four main physical factor as: 1. Physical comfort, 2. Space layouts, 3. Psychological factors and 4. Visual factors thet they are affecting positively on space productivity. Each of the environmental factors play an important role in improving the learning quality and increasing interest in attending learning environments; therefore, the desired environment improves the productivity of the educational spaces by improving the components of productivity.",
"title": ""
},
{
"docid": "neg:1840315_10",
"text": "Plants are a tremendous source for the discovery of new products of medicinal value for drug development. Today several distinct chemicals derived from plants are important drugs currently used in one or more countries in the world. Many of the drugs sold today are simple synthetic modifications or copies of the naturally obtained substances. The evolving commercial importance of secondary metabolites has in recent years resulted in a great interest in secondary metabolism, particularly in the possibility of altering the production of bioactive plant metabolites by means of tissue culture technology. Plant cell culture technologies were introduced at the end of the 1960’s as a possible tool for both studying and producing plant secondary metabolites. Different strategies, using an in vitro system, have been extensively studied to improve the production of plant chemicals. The focus of the present review is the application of tissue culture technology for the production of some important plant pharmaceuticals. Also, we describe the results of in vitro cultures and production of some important secondary metabolites obtained in our laboratory.",
"title": ""
},
{
"docid": "neg:1840315_11",
"text": "This paper proposes a new usability evaluation checklist, UseLearn, and a related method for eLearning systems. UseLearn is a comprehensive checklist which incorporates both quality and usability evaluation perspectives in eLearning systems. Structural equation modeling is deployed to validate the UseLearn checklist quantitatively. The experimental results show that the UseLearn method supports the determination of usability problems by criticality metric analysis and the definition of relevant improvement strategies. The main advantage of the UseLearn method is the adaptive selection of the most influential usability problems, and thus significant reduction of the time and effort for usability evaluation can be achieved. At the sketching and/or design stage of eLearning systems, it will provide an effective guidance to usability analysts as to what problems should be focused on in order to improve the usability perception of the end-users. Relevance to industry: During the sketching or design stage of eLearning platforms, usability problems should be revealed and eradicated to create more usable and quality eLearning systems to satisfy the end-users. The UseLearn checklist along with its quantitative methodology proposed in this study would be helpful for usability experts to achieve this goal. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840315_12",
"text": "Satisfaction prediction is one of the prime concerns in search performance evaluation. It is a non-trivial task for two major reasons: (1) The definition of satisfaction is rather subjective and different users may have different opinions in satisfaction judgement. (2) Most existing studies on satisfaction prediction mainly rely on users' click-through or query reformulation behaviors but there are many sessions without such kind of interactions. To shed light on these research questions, we construct an experimental search engine that could collect users' satisfaction feedback as well as mouse click-through/movement data. Different from existing studies, we compare for the first time search users' and external assessors' opinions on satisfaction. We find that search users pay more attention to the utility of results while external assessors emphasize on the efforts spent in search sessions. Inspired by recent studies in predicting result relevance based on mouse movement patterns (namely motifs), we propose to estimate the utilities of search results and the efforts in search sessions with motifs extracted from mouse movement data on search result pages (SERPs). Besides the existing frequency-based motif selection method, two novel selection strategies (distance-based and distribution-based) are also adopted to extract high quality motifs for satisfaction prediction. Experimental results on over 1,000 user sessions show that the proposed strategies outperform existing methods and also have promising generalization capability for different users and queries.",
"title": ""
},
{
"docid": "neg:1840315_13",
"text": "The decentralized cryptocurrency Bitcoin has experienced great success but also encountered many challenges. One of the challenges has been the long confirmation time and low transaction throughput. Another challenge is the lack of incentives at certain steps of the protocol, raising concerns for transaction withholding, selfish mining, etc. To address these challenges, we propose Solidus, a decentralized cryptocurrency based on permissionless Byzantine consensus. A core technique in Solidus is to use proof of work for leader election to adapt the Practical Byzantine Fault Tolerance (PBFT) protocol to a permissionless setting. We also design Solidus to be incentive compatible and to mitigate selfish mining. Solidus improves on Bitcoin in confirmation time, and provides safety and liveness assuming Byzantine players and the largest coalition of rational players collectively control less than one-third of the computation power.",
"title": ""
},
{
"docid": "neg:1840315_14",
"text": "It is difficult to fully assess the quality of software inhouse, outside the actual time and context in which it will execute after deployment. As a result, it is common for software to manifest field failures, failures that occur on user machines due to untested behavior. Field failures are typically difficult to recreate and investigate on developer platforms, and existing techniques based on crash reporting provide only limited support for this task. In this paper, we present a technique for recording, reproducing, and minimizing failing executions that enables and supports inhouse debugging of field failures. We also present a tool that implements our technique and an empirical study that evaluates the technique on a widely used e-mail client.",
"title": ""
},
{
"docid": "neg:1840315_15",
"text": "ZHENG, Traditional Chinese Medicine syndrome, is an integral and essential part of Traditional Chinese Medicine theory. It defines the theoretical abstraction of the symptom profiles of individual patients and thus, used as a guideline in disease classification in Chinese medicine. For example, patients suffering from gastritis may be classified as Cold or Hot ZHENG, whereas patients with different diseases may be classified under the same ZHENG. Tongue appearance is a valuable diagnostic tool for determining ZHENG in patients. In this paper, we explore new modalities for the clinical characterization of ZHENG using various supervised machine learning algorithms. We propose a novel-color-space-based feature set, which can be extracted from tongue images of clinical patients to build an automated ZHENG classification system. Given that Chinese medical practitioners usually observe the tongue color and coating to determine a ZHENG type and to diagnose different stomach disorders including gastritis, we propose using machine-learning techniques to establish the relationship between the tongue image features and ZHENG by learning through examples. The experimental results obtained over a set of 263 gastritis patients, most of whom suffering Cold Zheng or Hot ZHENG, and a control group of 48 healthy volunteers demonstrate an excellent performance of our proposed system.",
"title": ""
},
{
"docid": "neg:1840315_16",
"text": "In this paper, one new machine family, i.e. named as flux-modulation machines which produce steady torque based on flux-modulation effect is proposed. The typical model including three components-one flux modulator, one armature and one excitation field exciters of flux-modulation machines is built. The torque relationships among the three components are developed based on the principle of electromechanical energy conversion. Then, some structure and performance features of flux-modulation machines are summarized, through which the flux-modulation topology distinguish criterion is proposed for the first time. Flux-modulation topologies can be further classified into stationary flux modulator, stationary excitation field, stationary armature field and dual-mechanical port flux-modulation machines. Many existed topologies, such as vernier, switched flux, flux reversal and transverse machines, are demonstrated that they can be classified into the flux-modulation family based on the criterion, and the processes how to convert typical models of flux-modulation machines to these machines are also given in this paper. Furthermore, in this new machine family, developed and developing theories on the vernier, switched flux, flux reversal and transverse machines can be shared with each other as well as some novel topologies in such a machine category. Based on the flux modulation principle, the nature and general theory, such as torque, power factor expressions and so on, of the flux-modulation machines are investigated. In additions, flux-modulation induction and electromagnetic transmission topologies are predicted and analyzed to enrich the flux-modulation electromagnetic topology family and the prospective applications are highlighted. Finally, one vernier permanent magnet prototype has been built and tested to verify the analysis results.",
"title": ""
},
{
"docid": "neg:1840315_17",
"text": "Recent improvements in both the performance and scalability of shared-nothing, transactional, in-memory NewSQL databases have reopened the research question of whether distributed metadata for hierarchical file systems can be managed using commodity databases. In this paper, we introduce HopsFS, a next generation distribution of the Hadoop Distributed File System (HDFS) that replaces HDFS’ single node in-memory metadata service, with a distributed metadata service built on a NewSQL database. By removing the metadata bottleneck, HopsFS enables an order of magnitude larger and higher throughput clusters compared to HDFS. Metadata capacity has been increased to at least 37 times HDFS’ capacity, and in experiments based on a workload trace from Spotify, we show that HopsFS supports 16 to 37 times the throughput of Apache HDFS. HopsFS also has lower latency for many concurrent clients, and no downtime during failover. Finally, as metadata is now stored in a commodity database, it can be safely extended and easily exported to external systems for online analysis and free-text search.",
"title": ""
},
{
"docid": "neg:1840315_18",
"text": "There has been extensive research focusing on developing smart environments by integrating data mining techniques into environments that are equipped with sensors and actuators. The ultimate goal is to reduce the energy consumption in buildings while maintaining a maximum comfort level for occupants. However, there are few studies successfully demonstrating energy savings from occupancy behavioural patterns that have been learned in a smart environment because of a lack of a formal connection to building energy management systems. In this study, the objective is to develop and implement algorithms for sensor-based modelling and prediction of user behaviour in intelligent buildings and connect the behavioural patterns to building energy and comfort management systems through simulation tools. The results are tested on data from a room equipped with a distributed set of sensors, and building simulations through EnergyPlus suggest potential energy savings of 30% while maintaining an indoor comfort level when compared with other basic energy savings HVAC control strategies.",
"title": ""
},
{
"docid": "neg:1840315_19",
"text": "Table: Coherence evaluation results on Discrimination and Insertion tasks. † indicates a neural model is significantly superior to its non-neural counterpart with p-value < 0.01. Discr. Ins. Acc F1 Random 50.00 50.00 12.60 Graph-based (G&S) 64.23 65.01 11.93 Dist. sentence (L&H) 77.54 77.54 19.32 Grid-all nouns (E&C) 81.58 81.60 22.13 Extended Grid (E&C) 84.95 84.95 23.28 Grid-CNN 85.57† 85.57† 23.12 Extended Grid-CNN 88.69† 88.69† 25.95†",
"title": ""
}
] |
1840316 | A Continuously Growing Dataset of Sentential Paraphrases | [
{
"docid": "pos:1840316_0",
"text": "We address the problem of sentence alignment for monolingual corpora, a phenomenon distinct from alignment in parallel corpora. Aligning large comparable corpora automatically would provide a valuable resource for learning of text-totext rewriting rules. We incorporate context into the search for an optimal alignment in two complementary ways: learning rules for matching paragraphs using topic structure and further refining the matching through local alignment to find good sentence pairs. Evaluation shows that our alignment method outperforms state-of-the-art systems developed for the same task.",
"title": ""
}
] | [
{
"docid": "neg:1840316_0",
"text": "Video shot boundary detection (SBD) is the first and essential step for content-based video management and structural analysis. Great efforts have been paid to develop SBD algorithms for years. However, the high computational cost in the SBD becomes a block for further applications such as video indexing, browsing, retrieval, and representation. Motivated by the requirement of the real-time interactive applications, a unified fast SBD scheme is proposed in this paper. We adopted a candidate segment selection and singular value decomposition (SVD) to speed up the SBD. Initially, the positions of the shot boundaries and lengths of gradual transitions are predicted using adaptive thresholds and most non-boundary frames are discarded at the same time. Only the candidate segments that may contain the shot boundaries are preserved for further detection. Then, for all frames in each candidate segment, their color histograms in the hue-saturation-value) space are extracted, forming a frame-feature matrix. The SVD is then performed on the frame-feature matrices of all candidate segments to reduce the feature dimension. The refined feature vector of each frame in the candidate segments is obtained as a new metric for boundary detection. Finally, cut and gradual transitions are identified using our pattern matching method based on a new similarity measurement. Experiments on TRECVID 2001 test data and other video materials show that the proposed scheme can achieve a high detection speed and excellent accuracy compared with recent SBD schemes.",
"title": ""
},
{
"docid": "neg:1840316_1",
"text": "This paper provides a comprehensive study of interleave-division multiple-access (IDMA) systems. The IDMA receiver principles for different modulation and channel conditions are outlined. A semi-analytical technique is developed based on the density evolution technique to estimate the bit-error-rate (BER) of the system. It provides a fast and relatively accurate method to predict the performance of the IDMA scheme. With simple convolutional/repetition codes, overall throughputs of 3 bits/chip with one receive antenna and 6 bits/chip with two receive antennas are observed for IDMA systems involving as many as about 100 users.",
"title": ""
},
{
"docid": "neg:1840316_2",
"text": "Theories of insight problems are often tested by formulating hypotheses about the particular difficulties of individual insight problems. Such evaluations often implicitly assume that there is a single difficulty. We argue that the quantitatively small effects of many studies arise because the difficulty of many insight problems is determined by multiple factors, so the removal of 1 factor has limited effect on the solution rate. Difficulties can reside either in problem perception, in prior knowledge, or in the processing of the problem information. We support this multiple factors perspective through 3 experiments on the 9-dot problem (N.R.F. Maier, 1930). Our results lead to a significant reformulation of the classical hypothesis as to why this problem is difficult. The results have general implications for our understanding of insight problem solving and for the interpretation of data from studies that aim to evaluate hypotheses about the sources of difficulty of particular insight problems.",
"title": ""
},
{
"docid": "neg:1840316_3",
"text": "Flow visualization has been a very attractive component of scientific visualization research for a long time. Usually very large multivariate datasets require processing. These datasets often consist of a large number of sample locations and several time steps. The steadily increasing performance of computers has recently become a driving factor for a reemergence in flow visualization research, especially in texture-based techniques. In this paper, dense, texture-based flow visualization techniques are discussed. This class of techniques attempts to provide a complete, dense representation of the flow field with high spatio-temporal coherency. An attempt of categorizing closely related solutions is incorporated and presented. Fundamentals are shortly addressed as well as advantages and disadvantages of the methods.",
"title": ""
},
{
"docid": "neg:1840316_4",
"text": "Requirements engineering is concerned with the identification of high-level goals to be achieved by the system envisioned, the refinement of such goals, the operationalization of goals into services and constraints, and the assignment of responsibilities for the resulting requirements to agents such as humans, devices and programs. Goal refinement and operationalization is a complex process which is not well supported by current requirements engineering technology. Ideally some form of formal support should be provided, but formal methods are difficult and costly to apply at this stage.This paper presents an approach to goal refinement and operationalization which is aimed at providing constructive formal support while hiding the underlying mathematics. The principle is to reuse generic refinement patterns from a library structured according to strengthening/weakening relationships among patterns. The patterns are once for all proved correct and complete. They can be used for guiding the refinement process or for pointing out missing elements in a refinement. The cost inherent to the use of a formal method is thus reduced significantly. Tactics are proposed to the requirements engineer for grounding pattern selection on semantic criteria.The approach is discussed in the context of the multi-paradigm language used in the KAOS method; this language has an external semantic net layer for capturing goals, constraints, agents, objects and actions together with their links, and an inner formal assertion layer that includes a real-time temporal logic for the specification of goals and constraints. Some frequent refinement patterns are high-lighted and illustrated through a variety of examples.The general principle is somewhat similar in spirit to the increasingly popular idea of design patterns, although it is grounded on a formal framework here.",
"title": ""
},
{
"docid": "neg:1840316_5",
"text": "Support vector machine (SVM) is a supervised machine learning approach that was recognized as a statistical learning apotheosis for the small-sample database. SVM has shown its excellent learning and generalization ability and has been extensively employed in many areas. This paper presents a performance analysis of six types of SVMs for the diagnosis of the classical Wisconsin breast cancer problem from a statistical point of view. The classification performance of standard SVM (St-SVM) is analyzed and compared with those of the other modified classifiers such as proximal support vector machine (PSVM) classifiers, Lagrangian support vector machines (LSVM), finite Newton method for Lagrangian support vector machine (NSVM), Linear programming support vector machines (LPSVM), and smooth support vector machine (SSVM). The experimental results reveal that these SVM classifiers achieve very fast, simple, and efficient breast cancer diagnosis. The training results indicated that LSVM has the lowest accuracy of 95.6107 %, while St-SVM performed better than other methods for all performance indices (accuracy = 97.71 %) and is closely followed by LPSVM (accuracy = 97.3282). However, in the validation phase, the overall accuracies of LPSVM achieved 97.1429 %, which was superior to LSVM (95.4286 %), SSVM (96.5714 %), PSVM (96 %), NSVM (96.5714 %), and St-SVM (94.86 %). Value of ROC and MCC for LPSVM achieved 0.9938 and 0.9369, respectively, which outperformed other classifiers. The results strongly suggest that LPSVM can aid in the diagnosis of breast cancer.",
"title": ""
},
{
"docid": "neg:1840316_6",
"text": "Many natural language generation tasks, such as abstractive summarization and text simplification, are paraphrase-orientated. In these tasks, copying and rewriting are two main writing modes. Most previous sequence-to-sequence (Seq2Seq) models use a single decoder and neglect this fact. In this paper, we develop a novel Seq2Seq model to fuse a copying decoder and a restricted generative decoder. The copying decoder finds the position to be copied based on a typical attention model. The generative decoder produces words limited in the source-specific vocabulary. To combine the two decoders and determine the final output, we develop a predictor to predict the mode of copying or rewriting. This predictor can be guided by the actual writing mode in the training data. We conduct extensive experiments on two different paraphrase datasets. The result shows that our model outperforms the stateof-the-art approaches in terms of both informativeness and language quality.",
"title": ""
},
{
"docid": "neg:1840316_7",
"text": "The Extended Kalman Filter (EKF) has become a standard technique used in a number of nonlinear estimation and machine learning applications. These include estimating the state of a nonlinear dynamic system, estimating parameters for nonlinear system identification (e.g., learning the weights of a neural network), and dual estimation (e.g., the ExpectationMaximization (EM) algorithm)where both states and parameters are estimated simultaneously. This paper points out the flaws in using the EKF, and introduces an improvement, the Unscented Kalman Filter (UKF), proposed by Julier and Uhlman [5]. A central and vital operation performed in the Kalman Filter is the propagation of a Gaussian random variable (GRV) through the system dynamics. In the EKF, the state distribution is approximated by a GRV, which is then propagated analytically through the first-order linearization of the nonlinear system. This can introduce large errors in the true posterior mean and covariance of the transformed GRV, which may lead to sub-optimal performance and sometimes divergence of the filter. The UKF addresses this problem by using a deterministic sampling approach. The state distribution is again approximated by a GRV, but is now represented using a minimal set of carefully chosen sample points. These sample points completely capture the true mean and covariance of the GRV, and when propagated through the true nonlinear system, captures the posterior mean and covariance accurately to the 3rd order (Taylor series expansion) for any nonlinearity. The EKF, in contrast, only achieves first-order accuracy. Remarkably, the computational complexity of the UKF is the same order as that of the EKF. Julier and Uhlman demonstrated the substantial performance gains of the UKF in the context of state-estimation for nonlinear control. Machine learning problems were not considered. We extend the use of the UKF to a broader class of nonlinear estimation problems, including nonlinear system identification, training of neural networks, and dual estimation problems. Our preliminary results were presented in [13]. In this paper, the algorithms are further developed and illustrated with a number of additional examples. This work was sponsored by the NSF under grant grant IRI-9712346",
"title": ""
},
{
"docid": "neg:1840316_8",
"text": "Background estimation and foreground segmentation are important steps in many high-level vision tasks. Many existing methods estimate background as a low-rank component and foreground as a sparse matrix without incorporating the structural information. Therefore, these algorithms exhibit degraded performance in the presence of dynamic backgrounds, photometric variations, jitter, shadows, and large occlusions. We observe that these backgrounds often span multiple manifolds. Therefore, constraints that ensure continuity on those manifolds will result in better background estimation. Hence, we propose to incorporate the spatial and temporal sparse subspace clustering into the robust principal component analysis (RPCA) framework. To that end, we compute a spatial and temporal graph for a given sequence using motion-aware correlation coefficient. The information captured by both graphs is utilized by estimating the proximity matrices using both the normalized Euclidean and geodesic distances. The low-rank component must be able to efficiently partition the spatiotemporal graphs using these Laplacian matrices. Embedded with the RPCA objective function, these Laplacian matrices constrain the background model to be spatially and temporally consistent, both on linear and nonlinear manifolds. The solution of the proposed objective function is computed by using the linearized alternating direction method with adaptive penalty optimization scheme. Experiments are performed on challenging sequences from five publicly available datasets and are compared with the 23 existing state-of-the-art methods. The results demonstrate excellent performance of the proposed algorithm for both the background estimation and foreground segmentation.",
"title": ""
},
{
"docid": "neg:1840316_9",
"text": "Violent video game playing is correlated with aggression, but its relation to antisocial behavior in correctional and juvenile justice samples is largely unknown. Based on a data from a sample of institutionalized juvenile delinquents, behavioral and attitudinal measures relating to violent video game playing were associated with a composite measure of delinquency and a more specific measure of violent delinquency after controlling for the effects of screen time, years playing video games, age, sex, race, delinquency history, and psychopathic personality traits. Violent video games are associated with antisociality even in a clinical sample, and these effects withstand the robust influences of multiple correlates of juvenile delinquency and youth violence most notably psychopathy.",
"title": ""
},
{
"docid": "neg:1840316_10",
"text": "We propose a mathematical formulation for the notion of optimal projective cluster, starting from natural requirements on the density of points in subspaces. This allows us to develop a Monte Carlo algorithm for iteratively computing projective clusters. We prove that the computed clusters are good with high probability. We implemented a modified version of the algorithm, using heuristics to speed up computation. Our extensive experiments show that our method is significantly more accurate than previous approaches. In particular, we use our techniques to build a classifier for detecting rotated human faces in cluttered images.",
"title": ""
},
{
"docid": "neg:1840316_11",
"text": "Humans prefer to interact with each other using speech. Since this is the most natural mode of communication, the humans also want to interact with machines using speech only. So, automatic speech recognition has gained a lot of popularity. Different approaches for speech recognition exists like Hidden Markov Model (HMM), Dynamic Time Warping (DTW), Vector Quantization (VQ), etc. This paper uses Neural Network (NN) along with Mel Frequency Cepstrum Coefficients (MFCC) for speech recognition. Mel Frequency Cepstrum Coefiicients (MFCC) has been used for the feature extraction of speech. This gives the feature of the waveform. For pattern matching FeedForward Neural Network with Back propagation algorithm has been applied. The paper analyzes the various training algorithms present for training the Neural Network and uses train scg for the experiment. The work has been done on MATLAB and experimental results show that system is able to recognize words at sufficiently high accuracy.",
"title": ""
},
{
"docid": "neg:1840316_12",
"text": "We investigate the use of syntactically related pairs of words for the task of text classification. The set of all pairs of syntactically related words should intuitively provide a better description of what a document is about, than the set of proximity-based N-grams or selective syntactic phrases. We generate syntactically related word pairs using a dependency parser. We experimented with Support Vector Machines and Decision Tree learners on the 10 most frequent classes from the Reuters-21578 corpus. Results show that syntactically related pairs of words produce better results in terms of accuracy and precision when used alone or combined with unigrams, compared to unigrams alone.",
"title": ""
},
{
"docid": "neg:1840316_13",
"text": "INTRODUCTION\nPhysical training for United States military personnel requires a combination of injury prevention and performance optimization to counter unintentional musculoskeletal injuries and maximize warrior capabilities. Determining the most effective activities and tasks to meet these goals requires a systematic, research-based approach that is population specific based on the tasks and demands of the warrior.\n\n\nOBJECTIVE\nWe have modified the traditional approach to injury prevention to implement a comprehensive injury prevention and performance optimization research program with the 101st Airborne Division (Air Assault) at Ft. Campbell, KY. This is Part I of two papers that presents the research conducted during the first three steps of the program and includes Injury Surveillance, Task and Demand Analysis, and Predictors of Injury and Optimal Performance.\n\n\nMETHODS\nInjury surveillance based on a self-report of injuries was collected on all Soldiers participating in the study. Field-based analyses of the tasks and demands of Soldiers performing typical tasks of 101st Soldiers were performed to develop 101st-specific laboratory testing and to assist with the design of the intervention (Eagle Tactical Athlete Program (ETAP)). Laboratory testing of musculoskeletal, biomechanical, physiological, and nutritional characteristics was performed on Soldiers and benchmarked to triathletes to determine predictors of injury and optimal performance and to assist with the design of ETAP.\n\n\nRESULTS\nInjury surveillance demonstrated that Soldiers of the 101st are at risk for a wide range of preventable unintentional musculoskeletal injuries during physical training, tactical training, and recreational/sports activities. The field-based analyses provided quantitative data and qualitative information essential to guiding 101st specific laboratory testing and intervention design. Overall the laboratory testing revealed that Soldiers of the 101st would benefit from targeted physical training to meet the specific demands of their job and that sub-groups of Soldiers would benefit from targeted injury prevention activities.\n\n\nCONCLUSIONS\nThe first three steps of the injury prevention and performance research program revealed that Soldiers of the 101st suffer preventable musculoskeletal injuries, have unique physical demands, and would benefit from targeted training to improve performance and prevent injury.",
"title": ""
},
{
"docid": "neg:1840316_14",
"text": "In this paper, we explore how privacy settings and privacy policy consumption (reading the privacy policy) affect the relationship between privacy attitudes and disclosure behaviors. We present results from a survey completed by 122 users of Facebook regarding their information disclosure practices and their attitudes about privacy. Based on our data, we develop and evaluate a model for understanding factors that affect how privacy attitudes influence disclosure and discuss implications for social network sites. Our analysis shows that the relationship between privacy attitudes and certain types of disclosures (those furthering contact) are controlled by privacy policy consumption and privacy behaviors. This provides evidence that social network sites could help mitigate concerns about disclosure by providing transparent privacy policies and privacy controls. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840316_15",
"text": "Inter-network interference is a significant source of difficulty for wireless body area networks. Movement, proximity and the lack of central coordination all contribute to this problem. We compare the interference power of multiple Body Area Network (BAN) devices when a group of people move randomly within an office area. We find that the path loss trend is dominated by local variations in the signal, and not free-space path loss exponent.",
"title": ""
},
{
"docid": "neg:1840316_16",
"text": "We show that the Winternitz one-time signature scheme is existentially unforgeable under adaptive chosen message attacks when instantiated with a family of pseudorandom functions. Our result halves the signature size at the same security level, compared to previous results, which require a collision resistant hash function. We also consider security in the strong sense and show that the Winternitz one-time signature scheme is strongly unforgeable assuming additional properties of the pseudorandom function family. In this context we formally define several key-based security notions for function families and investigate their relation to pseudorandomness. All our reductions are exact and in the standard model and can directly be used to estimate the output length of the hash function required to meet a certain security level.",
"title": ""
},
{
"docid": "neg:1840316_17",
"text": "In order to plan a safe maneuver, self-driving vehicles need to understand the intent of other traffic participants. We define intent as a combination of discrete high level behaviors as well as continuous trajectories describing future motion. In this paper we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor as well as dynamic maps of the environment. Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reduce reaction time in self-driving applications.",
"title": ""
},
{
"docid": "neg:1840316_18",
"text": "In Mexico, local empirical knowledge about medicinal properties of plants is the basis for their use as home remedies. It is generally accepted by many people in Mexico and elsewhere in the world that beneficial medicinal effects can be obtained by ingesting plant products. In this review, we focus on the potential pharmacologic bases for herbal plant efficacy, but we also raise concerns about the safety of these agents, which have not been fully assessed. Although numerous randomized clinical trials of herbal medicines have been published and systematic reviews and meta-analyses of these studies are available, generalizations about the efficacy and safety of herbal medicines are clearly not possible. Recent publications have also highlighted the unintended consequences of herbal product use, including morbidity and mortality. It has been found that many phytochemicals have pharmacokinetic or pharmacodynamic interactions with drugs. The present review is limited to some herbal medicines that are native or cultivated in Mexico and that have significant use. We discuss the cultural uses, phytochemistry, pharmacological, and toxicological properties of the following plant species: nopal (Opuntia ficus), peppermint (Mentha piperita), chaparral (Larrea divaricata), dandlion (Taraxacum officinale), mullein (Verbascum densiflorum), chamomile (Matricaria recutita), nettle or stinging nettle (Urtica dioica), passionflower (Passiflora incarnata), linden flower (Tilia europea), and aloe (Aloe vera). We conclude that our knowledge of the therapeutic benefits and risks of some herbal medicines used in Mexico is still limited and efforts to elucidate them should be intensified.",
"title": ""
}
] |
1840317 | Adversarial Distillation of Bayesian Neural Network Posteriors | [
{
"docid": "pos:1840317_0",
"text": "Deep Learning models are vulnerable to adversarial examples, i.e. images obtained via deliberate imperceptible perturbations, such that the model misclassifies them with high confidence. However, class confidence by itself is an incomplete picture of uncertainty. We therefore use principled Bayesian methods to capture model uncertainty in prediction for observing adversarial misclassification. We provide an extensive study with different Bayesian neural networks attacked in both white-box and black-box setups. The behaviour of the networks for noise, attacks and clean test data is compared. We observe that Bayesian neural networks are uncertain in their predictions for adversarial perturbations, a behaviour similar to the one observed for random Gaussian perturbations. Thus, we conclude that Bayesian neural networks can be considered for detecting adversarial examples.",
"title": ""
},
{
"docid": "pos:1840317_1",
"text": "We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.",
"title": ""
},
{
"docid": "pos:1840317_2",
"text": "Deep neural networks (DNNs) are powerful nonlinear architectures that are known to be robust to random perturbations of the input. However, these models are vulnerable to adversarial perturbations—small input changes crafted explicitly to fool the model. In this paper, we ask whether a DNN can distinguish adversarial samples from their normal and noisy counterparts. We investigate model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model. The result is a method for implicit adversarial detection that is oblivious to the attack algorithm. We evaluate this method on a variety of standard datasets including MNIST and CIFAR-10 and show that it generalizes well across different architectures and attacks. Our findings report that 85-93% ROC-AUC can be achieved on a number of standard classification tasks with a negative class that consists of both normal and noisy samples.",
"title": ""
},
{
"docid": "pos:1840317_3",
"text": "Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.",
"title": ""
}
] | [
{
"docid": "neg:1840317_0",
"text": "IP-based solutions to accommodate mobile hosts within existing internetworks do not address the distinctive features of wireless mobile computing. IP-based transport protocols thus suffer from poor performance when a mobile host communicates with a host on the fixed network. This is caused by frequent disruptions in network layer connectivity due to — i) mobility and ii) unreliable nature of the wireless link. We describe the design and implementation of I-TCP, which is an indirect transport layer protocol for mobile hosts. I-TCP utilizes the resources of Mobility Support Routers (MSRs) to provide transport layer communication between mobile hosts and hosts on the fixed network. With I-TCP, the problems related to mobility and the unreliability of wireless link are handled entirely within the wireless link; the TCP/IP software on the fixed hosts is not modified. Using I-TCP on our testbed, the throughput between a fixed host and a mobile host improved substantially in comparison to regular TCP.",
"title": ""
},
{
"docid": "neg:1840317_1",
"text": "Innovation is defined as the development and implementation of new ^^eas by people who over time engage in transactions with others within an institutional order. Thxs defmUion focuses on four basic factors (new ideas, people, transactions, and ms itut.onal context)^An understanding of how these factors are related leads to four basic problems confronting most general managers: (1) a human problem of managing attention, (2) a process probleni in manlgng new ideas into good currency, (3) a structural problem of managing part-whole TelatLnships, and (4) a strategic problem of institutional leadership. This paper discusses thes four basic problems and concludes by suggesting how they fit together into an overall framework to guide longitudinal study of the management of innovation. (ORGANIZATIONAL EFFECTIVENESS; INNOVATION)",
"title": ""
},
{
"docid": "neg:1840317_2",
"text": "The goal of process design is the construction of a process model that is a priori optimal w.r.t. the goal(s) of the business owning the process. Process design is therefore a major factor in determining the process performance and ultimately the success of a business. Despite this importance, the designed process is often less than optimal. This is due to two major challenges: First, since the design is an a priori ability, no actual execution data is available to provide the foundations for design decisions. Second, since modeling decision support is typically basic at best, the quality of the design largely depends on the ability of business analysts to make the ”right” design choices. To address these challenges, we present in this paper our deep Business Optimization Platform that enables (semi-) automated process optimization during process design based on actual execution data. Our platform achieves this task by matching new processes to existing processes stored in a repository based on similarity metrics and by using a set of formalized best-practice process optimization patterns.",
"title": ""
},
{
"docid": "neg:1840317_3",
"text": "Recently, deep learning has gained prominence due to the potential it portends for machine learning. For this reason, deep learning techniques have been applied in many fields, such as recognizing some kinds of patterns or classification. Intrusion detection analyses got data from monitoring security events to get situation assessment of network. Lots of traditional machine learning method has been put forward to intrusion detection, but it is necessary to improvement the detection performance and accuracy. This paper discusses different methods which were used to classify network traffic. We decided to use different methods on open data set and did experiment with these methods to find out a best way to intrusion detection.",
"title": ""
},
{
"docid": "neg:1840317_4",
"text": "Considering the difficult technical and sociological issues affecting the regulation of artificial intelligence research and applications.",
"title": ""
},
{
"docid": "neg:1840317_5",
"text": "Background. Imperforate hymen is usually treated with hymenotomy, and the management after its spontaneous rupture is not very well known. Case. In this paper, we present spontaneous rupture of the imperforate hymen in a 13-year-old adolescent girl with hematocolpometra just before a planned hymenotomy operation. The patient was managed conservatively with a satisfactory outcome. Conclusion. Hymenotomy may not be needed in cases with spontaneous rupture of the imperforate hymen if adequate opening for menstrual discharge is warranted.",
"title": ""
},
{
"docid": "neg:1840317_6",
"text": "D-galactose injection has been shown to induce many changes in mice that represent accelerated aging. This mouse model has been widely used for pharmacological studies of anti-aging agents. The underlying mechanism of D-galactose induced aging remains unclear, however, it appears to relate to glucose and 1ipid metabolic disorders. Currently, there has yet to be a study that focuses on investigating gene expression changes in D-galactose aging mice. In this study, integrated analysis of gas chromatography/mass spectrometry-based metabonomics and gene expression profiles was used to investigate the changes in transcriptional and metabolic profiles in mimetic aging mice injected with D-galactose. Our findings demonstrated that 48 mRNAs were differentially expressed between control and D-galactose mice, and 51 potential biomarkers were identified at the metabolic level. The effects of D-galactose on aging could be attributed to glucose and 1ipid metabolic disorders, oxidative damage, accumulation of advanced glycation end products (AGEs), reduction in abnormal substance elimination, cell apoptosis, and insulin resistance.",
"title": ""
},
{
"docid": "neg:1840317_7",
"text": "Background: Feedback of the weak areas of knowledge in RPD using continuous competency or other test forms is very essential to develop the student knowledge and the syllabus as well. This act should be a regular practice. Aim: To use the outcome of competency test and the objectives structured clinical examination of removable partial denture as a reliable measure to provide a continuous feedback to the teaching system. Method: This sectional study was performed on sixty eight, fifth year students for the period from 2009 to 2010. The experiment was divided into two parts: continuous assessment and the final examination. In the first essay; some basic removable partial denture knowledge, surveying technique, and designing of the metal framework were used to estimate the learning outcome. While in the second essay, some components of the objectives structured clinical examination were compared to the competency test to see the difference in learning outcome. Results: The students’ performance was improved in the final assessment just in some aspects of removable partial denture. However, for the surveying, the students faced some problems. Conclusion: the continuous and final tests can provide a simple tool to advice the teachers for more effective teaching of the RPD. So that the weakness in specific aspects of the RPD syllabus can be detected and corrected continuously from the beginning, during and at the end of the course.",
"title": ""
},
{
"docid": "neg:1840317_8",
"text": "Alzheimer's disease (AD) is a neurodegenerative disorder associated with loss of memory and cognitive abilities. Previous evidence suggested that exercise ameliorates learning and memory deficits by increasing brain derived neurotrophic factor (BDNF) and activating downstream pathways in AD animal models. However, upstream pathways related to increase BDNF induced by exercise in AD animal models are not well known. We investigated the effects of moderate treadmill exercise on Aβ-induced learning and memory impairment as well as the upstream pathway responsible for increasing hippocampal BDNF in an animal model of AD. Animals were divided into five groups: Intact, Sham, Aβ1-42, Sham-exercise (Sham-exe) and Aβ1-42-exercise (Aβ-exe). Aβ was microinjected into the CA1 area of the hippocampus and then animals in the exercise groups were subjected to moderate treadmill exercise (for 4 weeks with 5 sessions per week) 7 days after microinjection. In the present study the Morris water maze (MWM) test was used to assess spatial learning and memory. Hippocampal mRNA levels of BDNF, peroxisome proliferator-activated receptor gamma co-activator 1 alpha (PGC-1α), fibronectin type III domain-containing 5 (FNDC5) as well as protein levels of AMPK-activated protein kinase (AMPK), PGC-1α, BDNF, phosphorylation of AMPK were measured. Our results showed that intra-hippocampal injection of Aβ1-42 impaired spatial learning and memory which was accompanied by reduced AMPK activity (p-AMPK/total-AMPK ratio) and suppression of the PGC-1α/FNDC5/BDNF pathway in the hippocampus of rats. In contrast, moderate treadmill exercise ameliorated the Aβ1-42-induced spatial learning and memory deficit, which was accompanied by restored AMPK activity and PGC-1α/FNDC5/BDNF levels. Our results suggest that the increased AMPK activity and up-regulation of the PGC-1α/FNDC5/BDNF pathway by exercise are likely involved in mediating the beneficial effects of exercise on Aβ-induced learning and memory impairment.",
"title": ""
},
{
"docid": "neg:1840317_9",
"text": "The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax a shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into deep network to explicitly learn the residual function with reference to the target classifier. We fuse features of multiple layers with tensor product and embed them into reproducing kernel Hilbert spaces to match distributions for feature adaptation. The adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently via back-propagation. Empirical evidence shows that the new approach outperforms state of the art methods on standard domain adaptation benchmarks.",
"title": ""
},
{
"docid": "neg:1840317_10",
"text": "This paper presents an adaptive fuzzy sliding-mode dynamic controller (AFSMDC) of the car-like mobile robot (CLMR) for the trajectory tracking issue. First, a kinematics model of the nonholonomic CLMR is introduced. Then, according to the Lagrange formula, a dynamic model of the CLMR is created. For a real time trajectory tracking problem, an optimal controller capable of effectively driving the CLMR to track the desired trajectory is necessary. Therefore, an AFSMDC is proposed to accomplish the tracking task and to reduce the effect of the external disturbances and system uncertainties of the CLMR. The proposed controller could reduce the tracking errors between the output of the velocity controller and the real velocity of the CLMR. Therefore, the CLMR could track the desired trajectory without posture and orientation errors. Additionally, the stability of the proposed controller is proven by utilizing the Lyapunov stability theory. Finally, the simulation results validate the effectiveness of the proposed AFSMDC.",
"title": ""
},
{
"docid": "neg:1840317_11",
"text": "Text preprocessing is an essential stage in text categorization (TC) particularly and text mining generally. Morphological tools can be used in text preprocessing to reduce multiple forms of the word to one form. There has been a debate among researchers about the benefits of using morphological tools in TC. Studies in the English language illustrated that performing stemming during the preprocessing stage degrades the performance slightly. However, they have a great impact on reducing the memory requirement and storage resources needed. The effect of the preprocessing tools on Arabic text categorization is an area of research. This work provides an evaluation study of several morphological tools for Arabic Text Categorization. The study includes using the raw text, the stemmed text, and the root text. The stemmed and root text are obtained using two different preprocessing tools. The results illustrated that using light stemmer combined with a good performing feature selection method enhances the performance of Arabic Text Categorization especially for small threshold values.",
"title": ""
},
{
"docid": "neg:1840317_12",
"text": "OBJECTIVES\nIn this paper we present a contemporary understanding of \"nursing informatics\" and relate it to applications in three specific contexts, hospitals, community health, and home dwelling, to illustrate achievements that contribute to the overall schema of health informatics.\n\n\nMETHODS\nWe identified literature through database searches in MEDLINE, EMBASE, CINAHL, and the Cochrane Library. Database searching was complemented by one author search and hand searches in six relevant journals. The literature review helped in conceptual clarification and elaborate on use that are supported by applications in different settings.\n\n\nRESULTS\nConceptual clarification of nursing data, information and knowledge has been expanded to include wisdom. Information systems and support for nursing practice benefits from conceptual clarification of nursing data, information, knowledge, and wisdom. We introduce three examples of information systems and point out core issues for information integration and practice development.\n\n\nCONCLUSIONS\nExploring interplays of data, information, knowledge, and wisdom, nursing informatics takes a practice turn, accommodating to processes of application design and deployment for purposeful use by nurses in different settings. Collaborative efforts will be key to further achievements that support task shifting, mobility, and ubiquitous health care.",
"title": ""
},
{
"docid": "neg:1840317_13",
"text": "Tele-operated hydraulic underwater manipulators are commonly used to perform remote underwater intervention tasks such as weld inspection or mating of connectors. Automation of these tasks to use tele-assistance requires a suitable hybrid position/force control scheme, to specify simultaneously the robot motion and contact forces. Classical linear control does not allow for the highly non-linear and time varying robot dynamics in this situation. Adequate control performance requires more advanced controllers. This paper presents and compares two different advanced hybrid control algorithms. The first is based on a modified Variable Structure Control (VSC-HF) with a virtual environment, and the second uses a multivariable self-tuning adaptive controller. A direct comparison of the two proposed control schemes is performed in simulation, using a model of the dynamics of a hydraulic underwater manipulator (a Slingsby TA9) in contact with a surface. These comparisons look at the performance of the controllers under a wide variety of operating conditions, including different environment stiffnesses, positions of the robot and",
"title": ""
},
{
"docid": "neg:1840317_14",
"text": "Innovative ship design projects often require an extensive concept design phase to allow a wide range of potential solutions to be investigated, identifying which best suits the requirements. In these situations, the majority of ship design tools do not provide the best solution, limiting quick reconfiguration by focusing on detailed definition only. Parametric design, including generation of the hull surface, can model topology as well as geometry offering advantages often not exploited. Paramarine is an integrated ship design environment that is based on an objectorientated framework which allows the parametric connection of all aspects of both the product model and analysis together. Design configuration is managed to ensure that relationships within the model are topologically correct and kept up to date. While this offers great flexibility, concept investigation is streamlined by the Early Stage Design module, based on the (University College London) Functional Building Block methodology, collating design requirements, product model definition and analysis together to establish the form, function and layout of the design. By bringing this information together, the complete design requirements for the hull surface itself are established and provide the opportunity for parametric hull form generation techniques to have a fully integrated role in the concept design process. This paper explores several different hull form generation techniques which have been combined with the Early Stage Design module to demonstrate the capability of this design partnership.",
"title": ""
},
{
"docid": "neg:1840317_15",
"text": "This paper introduced a detail ElGamal digital signature scheme, and mainly analyzed the existing problems of the ElGamal digital signature scheme. Then improved the scheme according to the existing problems of ElGamal digital signature scheme, and proposed an implicit ElGamal type digital signature scheme with the function of message recovery. As for the problem that message recovery not being allowed by ElGamal signature scheme, this article approached a method to recover message. This method will make ElGamal signature scheme have the function of message recovery. On this basis, against that part of signature was used on most attacks for ElGamal signature scheme, a new implicit signature scheme with the function of message recovery was formed, after having tried to hid part of signature message and refining forthcoming implicit type signature scheme. The safety of the refined scheme was anlyzed, and its results indicated that the new scheme was better than the old one.",
"title": ""
},
{
"docid": "neg:1840317_16",
"text": "We show that generating English Wikipedia articles can be approached as a multidocument summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoderdecoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations.",
"title": ""
},
{
"docid": "neg:1840317_17",
"text": "Volumetric lesion segmentation via medical imaging is a powerful means to precisely assess multiple time-point lesion/tumor changes. Because manual 3D segmentation is prohibitively time consuming and requires radiological experience, current practices rely on an imprecise surrogate called response evaluation criteria in solid tumors (RECIST). Despite their coarseness, RECIST marks are commonly found in current hospital picture and archiving systems (PACS), meaning they can provide a potentially powerful, yet extraordinarily challenging, source of weak supervision for full 3D segmentation. Toward this end, we introduce a convolutional neural network based weakly supervised self-paced segmentation (WSSS) method to 1) generate the initial lesion segmentation on the axial RECISTslice; 2) learn the data distribution on RECIST-slices; 3) adapt to segment the whole volume slice by slice to finally obtain a volumetric segmentation. In addition, we explore how super-resolution images (2 ∼ 5 times beyond the physical CT imaging), generated from a proposed stacked generative adversarial network, can aid the WSSS performance. We employ the DeepLesion dataset, a comprehensive CTimage lesion dataset of 32, 735 PACS-bookmarked findings, which include lesions, tumors, and lymph nodes of varying sizes, categories, body regions and surrounding contexts. These are drawn from 10, 594 studies of 4, 459 patients. We also validate on a lymph-node dataset, where 3D ground truth masks are available for all images. For the DeepLesion dataset, we report mean Dice coefficients of 93% on RECIST-slices and 76% in 3D lesion volumes. We further validate using a subjective user study, where an experienced ∗Indicates equal contribution. †This work is done during Jinzheng Cai’s internship at National Institutes of Health. Le Lu is now with Nvidia Corp (lel@nvidia.com). CN N Initial 2D Segmentation Self-Paced 3D Segmentation CN N CN N CN N Image Image",
"title": ""
}
] |
1840318 | Privacy Preserving Social Network Data Publication | [
{
"docid": "pos:1840318_0",
"text": "Lipschitz extensions were recently proposed as a tool for designing node differentially private algorithms. However, efficiently computable Lipschitz extensions were known only for 1-dimensional functions (that is, functions that output a single real value). In this paper, we study efficiently computable Lipschitz extensions for multi-dimensional (that is, vector-valued) functions on graphs. We show that, unlike for 1-dimensional functions, Lipschitz extensions of higher-dimensional functions on graphs do not always exist, even with a non-unit stretch. We design Lipschitz extensions with small stretch for the sorted degree list and for the degree distribution of a graph. Crucially, our extensions are efficiently computable. We also develop new tools for employing Lipschitz extensions in the design of differentially private algorithms. Specifically, we generalize the exponential mechanism, a widely used tool in data privacy. The exponential mechanism is given a collection of score functions that map datasets to real values. It attempts to return the name of the function with nearly minimum value on the data set. Our generalized exponential mechanism provides better accuracy when the sensitivity of an optimal score function is much smaller than the maximum sensitivity of score functions. We use our Lipschitz extension and the generalized exponential mechanism to design a nodedifferentially private algorithm for releasing an approximation to the degree distribution of a graph. Our algorithm is much more accurate than algorithms from previous work. ∗Computer Science and Engineering Department, Pennsylvania State University. {asmith,sofya}@cse.psu.edu. Supported by NSF awards CDI-0941553 and IIS-1447700 and a Google Faculty Award. Part of this work was done while visiting Boston University’s Hariri Institute for Computation. 1 ar X iv :1 50 4. 07 91 2v 1 [ cs .C R ] 2 9 A pr 2 01 5",
"title": ""
}
] | [
{
"docid": "neg:1840318_0",
"text": "We present a new autoencoder-type architecture that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the prior distribution in the latent space. We show that direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a comparable quality to some recently-proposed more complex",
"title": ""
},
{
"docid": "neg:1840318_1",
"text": "Motivation for the investigation of position and waypoint controllers is the demand for Unattended Aerial Systems (UAS) capable of fulfilling e.g. surveillance tasks in contaminated or in inaccessible areas. Hence, this paper deals with the development of a 2D GPS-based position control system for 4 Rotor Helicopters able to keep positions above given destinations as well as to navigate between waypoints while minimizing trajectory errors. Additionally, the novel control system enables permanent full speed flight with reliable altitude keeping considering that the resulting lift is decreasing while changing pitch or roll angles for position control. In the following chapters the control procedure for position control and waypoint navigation is described. The dynamic behavior was simulated by means of Matlab/Simulink and results are shown. Further, the control strategies were implemented on a flight demonstrator for validation, experimental results are provided and a comparison is discussed.",
"title": ""
},
{
"docid": "neg:1840318_2",
"text": "Models in science may be used for various purposes: organizing data, synthesizing information, and making predictions. However, the value of model predictions is undermined by their uncertainty, which arises primarily from the fact that our models of complex natural systems are always open. Models can never fully specify the systems that they describe, and therefore their predictions are always subject to uncertainties that we cannot fully specify. Moreover, the attempt to make models capture the complexities of natural systems leads to a paradox: the more we strive for realism by incorporating as many as possible of the different processes and parameters that we believe to be operating in the system, the more difficult it is for us to know if our tests of the model are meaningful. A complex model may be more realistic, yet it is ironic that as we add more factors to a model, the certainty of its predictions may decrease even as our intuitive faith in the model increases. For this and other reasons, model output should not be viewed as an accurate prediction of the future state of the system. Short timeframe model output can and should be used to evaluate models and suggest avenues for future study. Model output can also generate “what if” scenarios that can help to evaluate alternative courses of action (or inaction), including worst-case and best-case outcomes. But scientists should eschew long-range deterministic predictions, which are likely to be erroneous and may damage the credibility of the communities that generate them.",
"title": ""
},
{
"docid": "neg:1840318_3",
"text": "s R esum es Canadian Undergraduate Mathematics Conference 1998 | Part 3 The Brachistochrone Problem Nils Johnson The University of British Columbia The brachistochrone problem is to nd the curve between two points down which a bead will slide in the shortest amount of time, neglecting friction and assuming conservation of energy. To solve the problem, an integral is derived that computes the amount of time it would take a bead to slide down a given curve y(x). This integral is minimized over all possible curves and yields the di erential equation y(1 + (y)) = k as a constraint for the minimizing function y(x). Solving this di erential equation shows that a cycloid (the path traced out by a point on the rim of a rolling wheel) is the solution to the brachistochrone problem. First proposed in 1696 by Johann Bernoulli, this problem is credited with having led to the development of the calculus of variations. The solution presented assumes knowledge of one-dimensional calculus and elementary di erential equations. The Theory of Error-Correcting Codes Dennis Hill University of Ottawa Coding theory is concerned with the transfer of data. There are two issues of fundamental importance. First, the data must be transferred accurately. But equally important is that the transfer be done in an e cient manner. It is the interplay of these two issues which is the core of the theory of error-correcting codes. Typically, the data is represented as a string of zeros and ones. Then a code consists of a set of such strings, each of the same length. The most fruitful approach to the subject is to consider the set f0; 1g as a two-element eld. We will then only",
"title": ""
},
{
"docid": "neg:1840318_4",
"text": "The present study explored the relationship between risky cybersecurity behaviours, attitudes towards cybersecurity in a business environment, Internet addiction, and impulsivity. 538 participants in part-time or full-time employment in the UK completed an online questionnaire, with responses from 515 being used in the data analysis. The survey included an attitude towards cybercrime and cybersecurity in business scale, a measure of impulsivity, Internet addiction and a 'risky' cybersecurity behaviours scale. The results demonstrated that Internet addiction was a significant predictor for risky cybersecurity behaviours. A positive attitude towards cybersecurity in business was negatively related to risky cybersecurity behaviours. Finally, the measure of impulsivity revealed that both attentional and motor impulsivity were both significant positive predictors of risky cybersecurity behaviours, with non-planning being a significant negative predictor. The results present a further step in understanding the individual differences that may govern good cybersecurity practices, highlighting the need to focus directly on more effective training and awareness mechanisms.",
"title": ""
},
{
"docid": "neg:1840318_5",
"text": "I show that a functional representation of self-similarity (as the one occurring in fractals) is provided by squeezed coherent states. In this way, the dissipative model of brain is shown to account for the self-similarity in brain background activity suggested by power-law distributions of power spectral densities of electrocorticograms. I also briefly discuss the action-perception cycle in the dissipative model with reference to intentionality in terms of trajectories in the memory state space.",
"title": ""
},
{
"docid": "neg:1840318_6",
"text": "BACKGROUND\nSystematic reviews are most helpful if they are up-to-date. We did a systematic review of strategies and methods describing when and how to update systematic reviews.\n\n\nOBJECTIVES\nTo identify, describe and assess strategies and methods addressing: 1) when to update systematic reviews and 2) how to update systematic reviews.\n\n\nSEARCH STRATEGY\nWe searched MEDLINE (1966 to December 2005), PsycINFO, the Cochrane Methodology Register (Issue 1, 2006), and hand searched the 2005 Cochrane Colloquium proceedings.\n\n\nSELECTION CRITERIA\nWe included methodology reports, updated systematic reviews, commentaries, editorials, or other short reports describing the development, use, or comparison of strategies and methods for determining the need for updating or updating systematic reviews in healthcare.\n\n\nDATA COLLECTION AND ANALYSIS\nWe abstracted information from each included report using a 15-item questionnaire. The strategies and methods for updating systematic reviews were assessed and compared descriptively with respect to their usefulness, comprehensiveness, advantages, and disadvantages.\n\n\nMAIN RESULTS\nFour updating strategies, one technique, and two statistical methods were identified. Three strategies addressed steps for updating and one strategy presented a model for assessing the need to update. One technique discussed the use of the \"entry date\" field in bibliographic searching. Statistical methods were cumulative meta-analysis and predicting when meta-analyses are outdated.\n\n\nAUTHORS' CONCLUSIONS\nLittle research has been conducted on when and how to update systematic reviews and the feasibility and efficiency of the identified approaches is uncertain. These shortcomings should be addressed in future research.",
"title": ""
},
{
"docid": "neg:1840318_7",
"text": "Layer-by-layer deposition of materials to manufacture parts—better known as three-dimensional (3D) printing or additive manufacturing—has been flourishing as a fabrication process in the past several years and now can create complex geometries for use as models, assembly fixtures, and production molds. Increasing interest has focused on the use of this technology for direct manufacturing of production parts; however, it remains generally limited to single-material fabrication, which can limit the end-use functionality of the fabricated structures. The next generation of 3D printing will entail not only the integration of dissimilar materials but the embedding of active components in order to deliver functionality that was not possible previously. Examples could include arbitrarily shaped electronics with integrated microfluidic thermal management and intelligent prostheses custom-fit to the anatomy of a specific patient. We review the state of the art in multiprocess (or hybrid) 3D printing, in which complementary processes, both novel and traditional, are combined to advance the future of manufacturing.",
"title": ""
},
{
"docid": "neg:1840318_8",
"text": "The development of brain metastases in patients with advanced stage melanoma is common, but the molecular mechanisms responsible for their development are poorly understood. Melanoma brain metastases cause significant morbidity and mortality and confer a poor prognosis; traditional therapies including whole brain radiation, stereotactic radiotherapy, or chemotherapy yield only modest increases in overall survival (OS) for these patients. While recently approved therapies have significantly improved OS in melanoma patients, only a small number of studies have investigated their efficacy in patients with brain metastases. Preliminary data suggest that some responses have been observed in intracranial lesions, which has sparked new clinical trials designed to evaluate the efficacy in melanoma patients with brain metastases. Simultaneously, recent advances in our understanding of the mechanisms of melanoma cell dissemination to the brain have revealed novel and potentially therapeutic targets. In this review, we provide an overview of newly discovered mechanisms of melanoma spread to the brain, discuss preclinical models that are being used to further our understanding of this deadly disease and provide an update of the current clinical trials for melanoma patients with brain metastases.",
"title": ""
},
{
"docid": "neg:1840318_9",
"text": "This paper presents a hybrid tele-manipulation system, comprising of a sensorized 3-D-printed soft robotic gripper and a soft fabric-based haptic glove that aim at improving grasping manipulation and providing sensing feedback to the operators. The flexible 3-D-printed soft robotic gripper broadens what a robotic gripper can do, especially for grasping tasks where delicate objects, such as glassware, are involved. It consists of four pneumatic finger actuators, casings with through hole for housing the actuators, and adjustable base. The grasping length and width can be configured easily to suit a variety of objects. The soft haptic glove is equipped with flex sensors and soft pneumatic haptic actuator, which enables the users to control the grasping, to determine whether the grasp is successful, and to identify the grasped object shape. The fabric-based soft pneumatic haptic actuator can simulate haptic perception by producing force feedback to the users. Both the soft pneumatic finger actuator and haptic actuator involve simple fabrication technique, namely 3-D-printed approach and fabric-based approach, respectively, which reduce fabrication complexity as compared to the steps involved in a traditional silicone-based approach. The sensorized soft robotic gripper is capable of picking up and holding a wide variety of objects in this study, ranging from lightweight delicate object weighing less than 50 g to objects weighing 1100 g. The soft haptic actuator can produce forces of up to 2.1 N, which is more than the minimum force of 1.5 N needed to stimulate haptic perception. The subjects are able to differentiate the two objects with significant shape differences in the pilot test. Compared to the existing soft grippers, this is the first soft sensorized 3-D-printed gripper, coupled with a soft fabric-based haptic glove that has the potential to improve the robotic grasping manipulation by introducing haptic feedback to the users.",
"title": ""
},
{
"docid": "neg:1840318_10",
"text": "In this paper, we describe an approach for the automatic medical annotation task of the 2008 CLEF cross-language image retrieval campaign (ImageCLEF). The data comprise 12076 fully annotated images according to the IRMA code. This work is focused on the process of feature extraction from images and hierarchical multi-label classification. To extract features from the images we used a technique called: local distribution of edges. With this techniques each image was described with 80 variables. The goal of the classification task was to classify an image according to the IRMA code. The IRMA code is organized hierarchically. Hence, as classifer we selected an extension of the predictive clustering trees (PCTs) that is able to handle this type of data. Further more, we constructed ensembles (Bagging and Random Forests) that use PCTs as base classifiers.",
"title": ""
},
{
"docid": "neg:1840318_11",
"text": "PURPOSE\nTo explore the association of angiographic nonperfusion in focal and diffuse recalcitrant diabetic macular edema (DME) in diabetic retinopathy (DR).\n\n\nDESIGN\nA retrospective, observational case series of patients with the diagnosis of recalcitrant DME for at least 2 years placed into 1 of 4 cohorts based on the degree of DR.\n\n\nMETHODS\nA total of 148 eyes of 76 patients met the inclusion criteria at 1 academic institution. Ultra-widefield fluorescein angiography (FA) images and spectral-domain optical coherence tomography (SD OCT) images were obtained on all patients. Ultra-widefield FA images were graded for quantity of nonperfusion, which was used to calculate ischemic index. Main outcome measures were mean ischemic index, mean change in central macular thickness (CMT), and mean number of macular photocoagulation treatments over the 2-year study period.\n\n\nRESULTS\nThe mean ischemic index was 47% (SD 25%; range 0%-99%). The mean ischemic index of eyes within Cohorts 1, 2, 3, and 4 was 0%, 34% (range 16%-51%), 53% (range 32%-89%), and 65% (range 47%-99%), respectively. The mean percentage decrease in CMT in Cohorts 1, 2, 3, and 4 were 25.2%, 19.1%, 11.6%, and 7.2%, respectively. The mean number of macular photocoagulation treatments in Cohorts 1, 2, 3, and 4 was 2.3, 4.8, 5.3, and 5.7, respectively.\n\n\nCONCLUSIONS\nEyes with larger areas of retinal nonperfusion and greater severity of DR were found to have the most recalcitrant DME, as evidenced by a greater number of macular photocoagulation treatments and less reduction in SD OCT CMT compared with eyes without retinal nonperfusion. Areas of untreated retinal nonperfusion may generate biochemical mediators that promote ischemia and recalcitrant DME.",
"title": ""
},
{
"docid": "neg:1840318_12",
"text": "This study presents an experimental evaluation of neural networks for nonlinear time-series forecasting. The e!ects of three main factors * input nodes, hidden nodes and sample size, are examined through a simulated computer experiment. Results show that neural networks are valuable tools for modeling and forecasting nonlinear time series while traditional linear methods are not as competent for this task. The number of input nodes is much more important than the number of hidden nodes in neural network model building for forecasting. Moreover, large sample is helpful to ease the over\"tting problem.",
"title": ""
},
{
"docid": "neg:1840318_13",
"text": "Various aspects of the theory of random walks on graphs are surveyed. In particular, estimates on the important parameters of access time, commute time, cover time and mixing time are discussed. Connections with the eigenvalues of graphs and with electrical networks, and the use of these connections in the study of random walks is described. We also sketch recent algorithmic applications of random walks, in particular to the problem of sampling.",
"title": ""
},
{
"docid": "neg:1840318_14",
"text": "Previous empirical studies examining the relationship between IT capability and accountingbased measures of firm performance report mixed results. We argue that extant research (1) has relied on aggregate overall measures of the firm’s IT capability, ignoring the specific type and nature of IT capability; and (2) has not fully considered important contextual (environmental) conditions that influence the IT capability-firm performance relationship. Drawing on the resource-based view (RBV), we advance a contingency perspective and propose that IT capabilities’ impact on firm resources is contingent on the “fit” between the type of IT capability/resource a firm possesses and the demands of the environment (industry) in which it competes. Specifically, using publicly available rankings as proxies for two types of IT capabilities (internally-focused and externally-focused capabilities), we empirically examines the degree to which three industry characteristics (dynamism, munificence, and complexity) influence the impact of each type of IT capability on measures of financial performance. After controlling for prior performance, the findings provide general support for the posited contingency model of IT impact. The implications of these findings on practice and research are discussed.",
"title": ""
},
{
"docid": "neg:1840318_15",
"text": "Oral Squamous Cell Carcinoma (OSCC) is a common type of cancer of the oral epithelium. Despite their high impact on mortality, sufficient screening methods for early diagnosis of OSCC often lack accuracy and thus OSCCs are mostly diagnosed at a late stage. Early detection and accurate outline estimation of OSCCs would lead to a better curative outcome and a reduction in recurrence rates after surgical treatment. Confocal Laser Endomicroscopy (CLE) records sub-surface micro-anatomical images for in vivo cell structure analysis. Recent CLE studies showed great prospects for a reliable, real-time ultrastructural imaging of OSCC in situ. We present and evaluate a novel automatic approach for OSCC diagnosis using deep learning technologies on CLE images. The method is compared against textural feature-based machine learning approaches that represent the current state of the art. For this work, CLE image sequences (7894 images) from patients diagnosed with OSCC were obtained from 4 specific locations in the oral cavity, including the OSCC lesion. The present approach is found to outperform the state of the art in CLE image recognition with an area under the curve (AUC) of 0.96 and a mean accuracy of 88.3% (sensitivity 86.6%, specificity 90%).",
"title": ""
},
{
"docid": "neg:1840318_16",
"text": "T profitability of remanufacturing systems for different cost, technology, and logistics structures has been extensively investigated in the literature. We provide an alternative and somewhat complementary approach that considers demand-related issues, such as the existence of green segments, original equipment manufacturer competition, and product life-cycle effects. The profitability of a remanufacturing system strongly depends on these issues as well as on their interactions. For a monopolist, we show that there exist thresholds on the remanufacturing cost savings, the green segment size, market growth rate, and consumer valuations for the remanufactured products, above which remanufacturing is profitable. More important, we show that under competition remanufacturing can become an effective marketing strategy, which allows the manufacturer to defend its market share via price discrimination.",
"title": ""
},
{
"docid": "neg:1840318_17",
"text": "Predicting the stock market is considered to be a very difficult task due to its non-linear and dynamic nature. Our proposed system is designed in such a way that even a layman can use it. It reduces the burden on the user. The user’s job is to give only the recent closing prices of a stock as input and the proposed Recommender system will instruct him when to buy and when to sell if it is profitable or not to buy share in case if it is not profitable to do trading. Using soft computing based techniques is considered to be more suitable for predicting trends in stock market where the data is chaotic and large in number. The soft computing based systems are capable of extracting relevant information from large sets of data by discovering hidden patterns in the data. Here regression trees are used for dimensionality reduction and clustering is done with the help of Self Organizing Maps (SOM). The proposed system is designed to assist stock market investors identify possible profit-making opportunities and also help in developing a better understanding on how to extract the relevant information from stock price data. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840318_18",
"text": "The lack of realistic and open benchmarking datasets for pedestrian visual-inertial odometry has made it hard to pinpoint differences in published methods. Existing datasets either lack a full six degree-of-freedom ground-truth or are limited to small spaces with optical tracking systems. We take advantage of advances in pure inertial navigation, and develop a set of versatile and challenging real-world computer vision benchmark sets for visual-inertial odometry. For this purpose, we have built a test rig equipped with an iPhone, a Google Pixel Android phone, and a Google Tango device. We provide a wide range of raw sensor data that is accessible on almost any modern-day smartphone together with a high-quality ground-truth track. We also compare resulting visual-inertial tracks from Google Tango, ARCore, and Apple ARKit with two recent methods published in academic forums. The data sets cover both indoor and outdoor cases, with stairs, escalators, elevators, office environments, a shopping mall, and metro station.",
"title": ""
},
{
"docid": "neg:1840318_19",
"text": "This article provides minimum requirements for having confidence in the accuracy of EC50/IC50 estimates. Two definitions of EC50/IC50s are considered: relative and absolute. The relative EC50/IC50 is the parameter c in the 4-parameter logistic model and is the concentration corresponding to a response midway between the estimates of the lower and upper plateaus. The absolute EC50/IC50 is the response corresponding to the 50% control (the mean of the 0% and 100% assay controls). The guidelines first describe how to decide whether to use the relative EC50/IC50 or the absolute EC50/IC50. Assays for which there is no stable 100% control must use the relative EC50/IC50. Assays having a stable 100% control but for which there may be more than 5% error in the estimate of the 50% control mean should use the relative EC50/IC50. Assays that can be demonstrated to produce an accurate and stable 100% control and less than 5% error in the estimate of the 50% control mean may gain efficiency as well as accuracy by using the absolute EC50/IC50. Next, the guidelines provide rules for deciding when the EC50/IC50 estimates are reportable. The relative EC50/IC50 should only be used if there are at least two assay concentrations beyond the lower and upper bend points. The absolute EC50/IC50 should only be used if there are at least two assay concentrations whose predicted response is less than 50% and two whose predicted response is greater than 50%. A wide range of typical assay conditions are considered in the development of the guidelines.",
"title": ""
}
] |
1840319 | Upsampling range data in dynamic environments | [
{
"docid": "pos:1840319_0",
"text": "Neighborhood filters are nonlocal image and movie filters which reduce the noise by averaging similar pixels. The first object of the paper is to present a unified theory of these filters and reliable criteria to compare them to other filter classes. A CCD noise model will be presented justifying the involvement of neighborhood filters. A classification of neighborhood filters will be proposed, including classical image and movie denoising methods and discussing further a recently introduced neighborhood filter, NL-means. In order to compare denoising methods three principles will be discussed. The first principle, “method noise”, specifies that only noise must be removed from an image. A second principle will be introduced, “noise to noise”, according to which a denoising method must transform a white noise into a white noise. Contrarily to “method noise”, this principle, which characterizes artifact-free methods, eliminates any subjectivity and can be checked by mathematical arguments and Fourier analysis. “Noise to noise” will be proven to rule out most denoising methods, with the exception of neighborhood filters. This is why a third and new comparison principle, the “statistical optimality”, is needed and will be introduced to compare the performance of all neighborhood filters. The three principles will be applied to compare ten different image and movie denoising methods. It will be first shown that only wavelet thresholding methods and NL-means give an acceptable method noise. Second, that neighborhood filters are the only ones to satisfy the “noise to noise” principle. Third, that among them NL-means is closest to statistical optimality. A particular attention will be paid to the application of the statistical optimality criterion for movie denoising methods. It will be pointed out that current movie denoising methods are motion compensated neighborhood filters. This amounts to say that they are neighborhood filters and that the ideal neighborhood of a pixel is its trajectory. Unfortunately the aperture problem makes it impossible to estimate ground true trajectories. It will be demonstrated that computing trajectories and restricting the neighborhood to them is harmful for denoising purposes and that space-time NL-means preserves more movie details.",
"title": ""
}
] | [
{
"docid": "neg:1840319_0",
"text": "Hatebusters is a web application for actively reporting YouTube hate speech, aiming to establish an online community of volunteer citizens. Hatebusters searches YouTube for videos with potentially hateful comments, scores their comments with a classifier trained on human-annotated data and presents users those comments with the highest probability of being hate speech. It also employs gamification elements, such as achievements and leaderboards, to drive user engagement.",
"title": ""
},
{
"docid": "neg:1840319_1",
"text": "Wireless local area networks (WLANs) based on the IEEE 802.11 standards are one of today’s fastest growing technologies in businesses, schools, and homes, for good reasons. As WLAN deployments increase, so does the challenge to provide these networks with security. Security risks can originate either due to technical lapse in the security mechanisms or due to defects in software implementations. Standard Bodies and researchers have mainly used UML state machines to address the implementation issues. In this paper we propose the use of GSE methodology to analyse the incompleteness and uncertainties in specifications. The IEEE 802.11i security protocol is used as an example to compare the effectiveness of the GSE and UML models. The GSE methodology was found to be more effective in identifying ambiguities in specifications and inconsistencies between the specification and the state machines. Resolving all issues, we represent the robust security network (RSN) proposed in the IEEE 802.11i standard using different GSE models.",
"title": ""
},
{
"docid": "neg:1840319_2",
"text": "This paper presents a new approach to power system automation, based on distributed intelligence rather than traditional centralized control. The paper investigates the interplay between two international standards, IEC 61850 and IEC 61499, and proposes a way of combining of the application functions of IEC 61850-compliant devices with IEC 61499-compliant “glue logic,” using the communication services of IEC 61850-7-2. The resulting ability to customize control and automation logic will greatly enhance the flexibility and adaptability of automation systems, speeding progress toward the realization of the smart grid concept.",
"title": ""
},
{
"docid": "neg:1840319_3",
"text": "In this paper we introduce the task of fact checking, i.e. the assessment of the truthfulness of a claim. The task is commonly performed manually by journalists verifying the claims made by public figures. Furthermore, ordinary citizens need to assess the truthfulness of the increasing volume of statements they consume. Thus, developing fact checking systems is likely to be of use to various members of society. We first define the task and detail the construction of a publicly available dataset using statements fact-checked by journalists available online. Then, we discuss baseline approaches for the task and the challenges that need to be addressed. Finally, we discuss how fact checking relates to mainstream natural language processing tasks and can stimulate further research.",
"title": ""
},
{
"docid": "neg:1840319_4",
"text": "The rising popularity of intelligent mobile devices and the daunting computational cost of deep learning-based models call for efficient and accurate on-device inference schemes. We propose a quantization scheme that allows inference to be carried out using integer-only arithmetic, which can be implemented more efficiently than floating point inference on commonly available integer-only hardware. We also co-design a training procedure to preserve end-to-end model accuracy post quantization. As a result, the proposed quantization scheme improves the tradeoff between accuracy and on-device latency. The improvements are significant even on MobileNets, a model family known for run-time efficiency, and are demonstrated in ImageNet classification and COCO detection on popular CPUs.",
"title": ""
},
{
"docid": "neg:1840319_5",
"text": "Attack detection is usually approached as a classification problem. However, standard classification tools often perform poorly, because an adaptive attacker can shape his attacks in response to the algorithm. This has led to the recent interest in developing methods for adversarial classification, but to the best of our knowledge, there have been a very few prior studies that take into account the attacker’s tradeoff between adapting to the classifier being used against him with his desire to maintain the efficacy of his attack. Including this effect is a key to derive solutions that perform well in practice. In this investigation, we model the interaction as a game between a defender who chooses a classifier to distinguish between attacks and normal behavior based on a set of observed features and an attacker who chooses his attack features (class 1 data). Normal behavior (class 0 data) is random and exogenous. The attacker’s objective balances the benefit from attacks and the cost of being detected while the defender’s objective balances the benefit of a correct attack detection and the cost of false alarm. We provide an efficient algorithm to compute all Nash equilibria and a compact characterization of the possible forms of a Nash equilibrium that reveals intuitive messages on how to perform classification in the presence of an attacker. We also explore qualitatively and quantitatively the impact of the non-attacker and underlying parameters on the equilibrium strategies.",
"title": ""
},
{
"docid": "neg:1840319_6",
"text": "The paper presents application of data mining methods for recognizing the most significant genes and gene sequences (treated as features) stored in a dataset of gene expression microarray. The investigations are performed for autism data. Few chosen methods of feature selection have been applied and their results integrated in the final outcome. In this way we find the contents of small set of the most important genes associated with autism. They have been applied in the classification procedure aimed on recognition of autism from reference group members. The results of numerical experiments concerning selection of the most important genes and classification of the cases on the basis of the selected genes will be discussed. The main contribution of the paper is in developing the fusion system of the results of many selection approaches into the final set, most closely associated with autism. We have also proposed special procedure of estimating the number of highest rank genes used in classification procedure. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840319_7",
"text": "Underwater images often suffer from color distortion and low contrast, because light is scattered and absorbed when traveling through water. Such images with different color tones can be shot in various lighting conditions, making restoration and enhancement difficult. We propose a depth estimation method for underwater scenes based on image blurriness and light absorption, which can be used in the image formation model (IFM) to restore and enhance underwater images. Previous IFM-based image restoration methods estimate scene depth based on the dark channel prior or the maximum intensity prior. These are frequently invalidated by the lighting conditions in underwater images, leading to poor restoration results. The proposed method estimates underwater scene depth more accurately. Experimental results on restoring real and synthesized underwater images demonstrate that the proposed method outperforms other IFM-based underwater image restoration methods.",
"title": ""
},
{
"docid": "neg:1840319_8",
"text": "A fully integrated 8-channel phased-array receiver at 24 GHz is demonstrated. Each channel achieves a gain of 43 dB, noise figure of 8 dB, and an IIP3 of -11dBm, consuming 29 mA of current from a 2.5 V supply. The 8-channel array has a beam-forming resolution of 22.5/spl deg/, a peak-to-null ratio of 20 dB (4-bits), a total array gain of 61 dB, and improves the signal-to-noise ratio by 9 dB.",
"title": ""
},
{
"docid": "neg:1840319_9",
"text": "Behavioral targeting (BT) is a widely used technique for online advertising. It leverages information collected on an individual's web-browsing behavior, such as page views, search queries and ad clicks, to select the ads most relevant to user to display. With the proliferation of social networks, it is possible to relate the behavior of individuals and their social connections. Although the similarity among connected individuals are well established (i.e., homophily), it is still not clear whether and how we can leverage the activities of one's friends for behavioral targeting; whether forecasts derived from such social information are more accurate than standard behavioral targeting models. In this paper, we strive to answer these questions by evaluating the predictive power of social data across 60 consumer domains on a large online network of over 180 million users in a period of two and a half months. To our best knowledge, this is the most comprehensive study of social data in the context of behavioral targeting on such an unprecedented scale. Our analysis offers interesting insights into the value of social data for developing the next generation of targeting services.",
"title": ""
},
{
"docid": "neg:1840319_10",
"text": "In this paper, we review the background and state-of-the-art of big data. We first introduce the general background of big data and review related technologies, such as could computing, Internet of Things, data centers, and Hadoop. We then focus on the four phases of the value chain of big data, i.e., data generation, data acquisition, data storage, and data analysis. For each phase, we introduce the general background, discuss the technical challenges, and review the latest advances. We finally examine the several representative applications of big data, including enterprise management, Internet of Things, online social networks, medial applications, collective intelligence, and smart grid. These discussions aim to provide a comprehensive overview and big-picture to readers of this exciting area. This survey is concluded with a discussion of open problems and future directions.",
"title": ""
},
{
"docid": "neg:1840319_11",
"text": "The CTD which stands for “Conductivity-Temperature-Depth” is one of the most used instruments for the oceanographic measurements. MEMS based CTD sensor components consist of a conductivity sensor (C), temperature sensor (T) and a piezo resistive pressure sensor (D). CTDs are found in every marine related institute and navy throughout the world as they are used to produce the salinity profile for the area of the ocean under investigation and are also used to determine different oceanic parameters. This research paper provides the design, fabrication and initial test results on a prototype CTD sensor.",
"title": ""
},
{
"docid": "neg:1840319_12",
"text": "Evaluating surgeon skill has predominantly been a subjective task. Development of objective methods for surgical skill assessment are of increased interest. Recently, with technological advances such as robotic-assisted minimally invasive surgery (RMIS), new opportunities for objective and automated assessment frameworks have arisen. In this paper, we applied machine learning methods to automatically evaluate performance of the surgeon in RMIS. Six important movement features were used in the evaluation including completion time, path length, depth perception, speed, smoothness and curvature. Different classification methods applied to discriminate expert and novice surgeons. We test our method on real surgical data for suturing task and compare the classification result with the ground truth data (obtained by manual labeling). The experimental results show that the proposed framework can classify surgical skill level with relatively high accuracy of 85.7%. This study demonstrates the ability of machine learning methods to automatically classify expert and novice surgeons using movement features for different RMIS tasks. Due to the simplicity and generalizability of the introduced classification method, it is easy to implement in existing trainers. .",
"title": ""
},
{
"docid": "neg:1840319_13",
"text": "\"Stealth Dicing (SD) \" was developed to solve such inherent problems of dicing process as debris contaminants and unnecessary thermal damage on work wafer. In SD, laser beam power of transmissible wavelength is absorbed only around focal point in the wafer by utilizing temperature dependence of absorption coefficient of the wafer. And these absorbed power forms modified layer in the wafer, which functions as the origin of separation in followed separation process. Since only the limited interior region of a wafer is processed by laser beam irradiation, damages and debris contaminants can be avoided in SD. Besides characteristics of devices will not be affected. Completely dry process of SD is another big advantage over other dicing methods.",
"title": ""
},
{
"docid": "neg:1840319_14",
"text": "This paper proposes a high-efficiency dual-band on-chip rectifying antenna (rectenna) at 35 and 94 GHz for wireless power transmission. The rectenna is designed in slotline (SL) and finite-width ground coplanar waveguide (FGCPW) transmission lines in a CMOS 0.13-μm process. The rectenna comprises a high gain linear tapered slot antenna (LTSA), an FGCPW to SL transition, a bandpass filter, and a full-wave rectifier. The LTSA achieves a VSWR=2 fractional bandwidth of 82% and 41%, and a gain of 7.4 and 6.5 dBi at the frequencies of 35 and 94 GHz. The measured power conversion efficiencies are 53% and 37% in free space at 35 and 94 GHz, while the incident radiation power density is 30 mW/cm2 . The fabricated rectenna occupies a compact size of 2.9 mm2.",
"title": ""
},
{
"docid": "neg:1840319_15",
"text": "The language of deaf and dumb which uses body parts to convey the message is known as sign language. Here, we are doing a study to convert speech into sign language used for conversation. In this area we have many developed method to recognize alphabets and numerals of ISL (Indian sign language). There are various approaches for recognition of ISL and we have done a comparative studies between them [1].",
"title": ""
},
{
"docid": "neg:1840319_16",
"text": "In this paper, a modular interleaved boost converter is first proposed by integrating a forward energy-delivering circuit with a voltage-doubler to achieve high step-up ratio and high efficiency for dc-microgrid applications. Then, steady-state analyses are made to show the merits of the proposed converter module. For closed-loop control design, the corresponding small-signal model is also derived. It is seen that, for higher power applications, more modules can be paralleled to increase the power rating and the dynamic performance. As an illustration, closed-loop control of a 450-W rating converter consisting of two paralleled modules with 24-V input and 200-V output is implemented for demonstration. Experimental results show that the modular high step-up boost converter can achieve an efficiency of 95.8% approximately.",
"title": ""
},
{
"docid": "neg:1840319_17",
"text": "A miniature coplanar antenna on a perovskite substrate is analyzed and designed using short circuit technique. The overall dimensions are minimized to 0.09 λ × 0.09 λ. The antenna geometry, the design concept, as well as the simulated and the measured results are discussed in this paper.",
"title": ""
},
{
"docid": "neg:1840319_18",
"text": "This paper presents Circe, an environment for the analysis of natural language requirements. Circe is first presented in terms of its architecture, based on a transformational paradigm. Details are then given for the various transformation steps, including (i) a novel technique for parsing natural language requirements, and (ii) an expert system based on modular agents, embodying intensional knowledge about software systems in general. The result of all the transformations is a set of models for the requirements document, for the system described by the requirements, and for the requirements writing process. These models can be inspected, measured, and validated against a given set of criteria. Some of the features of the environment are shown by means of an example. Various stages of requirements analysis are covered, from initial sketches to pseudo-code and UML models.",
"title": ""
},
{
"docid": "neg:1840319_19",
"text": "The influence of bilingualism on cognitive test performance in older adults has received limited attention in the neuropsychology literature. The aim of this study was to examine the impact of bilingualism on verbal fluency and repetition tests in older Hispanic bilinguals. Eighty-two right-handed participants (28 men and 54 women) with a mean age of 61.76 years (SD = 9.30; range = 50-84) and a mean educational level of 14.8 years (SD = 3.6; range 2-23) were selected. Forty-five of the participants were English monolinguals, 18 were Spanish monolinguals, and 19 were Spanish-English bilinguals. Verbal fluency was tested by electing a verbal description of a picture and by asking participants to generate words within phonemic and semantic categories. Repetition was tested using a sentence-repetition test. The bilinguals' test scores were compared to English monolinguals' and Spanish monolinguals' test scores. Results demonstrated equal performance of bilingual and monolingual participants in all tests except that of semantic verbal fluency. Bilinguals who learned English before age 12 performed significantly better on the English repetition test and produced a higher number of words in the description of a picture than the bilinguals who learned English after age 12. Variables such as task demands, language interference, linguistic mode, and level of bilingualism are addressed in the Discussion section.",
"title": ""
}
] |
1840320 | Circularly Polarized Substrate-Integrated Waveguide Tapered Slot Antenna for Millimeter-Wave Applications | [
{
"docid": "pos:1840320_0",
"text": "A substrate integrated waveguide (SIW)-fed circularly polarized (CP) antenna array with a broad bandwidth of axial ratio (AR) is presented for 60-GHz wireless personal area networks (WPAN) applications. The widened AR bandwidth of an antenna element is achieved by positioning a slot-coupled rotated strip above a slot cut onto the broadwall of an SIW. A 4 × 4 antenna array is designed and fabricated using low temperature cofired ceramic (LTCC) technology. A metal-topped via fence is introduced around the strip to reduce the mutual coupling between the elements of the array. The measured results show that the AR bandwidth is more than 7 GHz. A stable boresight gain is greater than 12.5 dBic across the desired bandwidth of 57-64 GHz.",
"title": ""
},
{
"docid": "pos:1840320_1",
"text": "In this communication, a compact circularly polarized (CP) substrate integrated waveguide (SIW) horn antenna is proposed and investigated. Through etching a sloping slot on the common broad wall of two SIWs, mode coupling is generated between the top and down SIWs, and thus, a new field component as TE01 mode is produced. During the coupling process along the sloping slot, the difference in guide wavelengths of the two orthogonal modes also brings a phase shift between the two modes, which provides a possibility for radiating the CP wave. Moreover, the two different ports will generate the electric field components of TE01 mode with the opposite direction, which indicates the compact SIW horn antenna with a dual CP property can be realized as well. Measured results indicate that the proposed antenna operates with a wide 3-dB axial ratio bandwidth of 11.8% ranging from 17.6 to 19.8 GHz. The measured results are in good accordance with the simulated ones.",
"title": ""
}
] | [
{
"docid": "neg:1840320_0",
"text": "Purpose – The aim of this paper is to propose a novel evaluation framework to explore the “root causes” that hinder the acceptance of using internal cloud services in a university. Design/methodology/approach – The proposed evaluation framework incorporates the duo-theme DEMATEL (decision making trial and evaluation laboratory) with TAM (technology acceptance model). The operational procedures were proposed and tested on a university during the post-implementation phase after introducing the internal cloud services. Findings – According to the results, clear understanding and operational ease under the theme perceived ease of use (PEOU) are more imperative; whereas improved usefulness and productivity under the theme perceived usefulness (PU) are more urgent to foster the usage of internal clouds in the case university. Research limitations/implications – Based on the findings, some intervention activities were suggested to enhance the level of users’ acceptance of internal cloud solutions in the case university. However, the results should not be generalized to apply to other educational establishments. Practical implications – To reduce the resistance from using internal clouds, some necessary intervention activities such as developing attractive training programs, creating interesting workshops, and rewriting user friendly manual or handbook are recommended. Originality/value – The novel two-theme DEMATEL has greatly contributed to the conventional one-theme DEMATEL theory. The proposed two-theme DEMATEL procedures were the first attempt to evaluate the acceptance of using internal clouds in university. The results have provided manifest root-causes under two distinct themes, which help derive effectual intervention activities to foster the acceptance of usage of internal clouds in a university.",
"title": ""
},
{
"docid": "neg:1840320_1",
"text": "CNNs have proven to be a very successful yet computationally expensive technique which made them slow to be adopted in mobile and embedded systems. There is a number of possible optimizations: minimizing the memory footprint, using lower precision and approximate computation, reducing computation cost of convolutions with FFTs. These have been explored recently and were shown to work. This project take ideas of using FFTs further and develops an alternative way to computing CNN – purely in frequency domain. As a side result it develops intuition about nonlinear elements: why do they work and how new types can be created.",
"title": ""
},
{
"docid": "neg:1840320_2",
"text": "The rapid growth of the Internet has brought with it an exponential increase in the type and frequency of cyber attacks. Many well-known cybersecurity solutions are in place to counteract these attacks. However, the generation of Big Data over computer networks is rapidly rendering these traditional solutions obsolete. To cater for this problem, corporate research is now focusing on Security Analytics, i.e., the application of Big Data Analytics techniques to cybersecurity. Analytics can assist network managers particularly in the monitoring and surveillance of real-time network streams and real-time detection of both malicious and suspicious (outlying) patterns. Such a behavior is envisioned to encompass and enhance all traditional security techniques. This paper presents a comprehensive survey on the state of the art of Security Analytics, i.e., its description, technology, trends, and tools. It hence aims to convince the reader of the imminent application of analytics as an unparalleled cybersecurity solution in the near future.",
"title": ""
},
{
"docid": "neg:1840320_3",
"text": "OBJECTIVES\nTo evaluate the clinical response at 12 month in a cohort of patients with rheumatoid arthritis treated with Etanar (rhTNFR:Fc), and to register the occurrence of adverse effects.\n\n\nMETHODS\nThis is a multicentre observational cohort study. It included patients over 18 years of age with an active rheumatoid arthritis diagnosis for which the treating physician had begun a treatment scheme of 25 mg of subcutaneous etanercept (Etanar ® 25 mg: biologic type rhTNFR:Fc), twice per week. Follow-up was done during 12 months, with assessments at weeks 12, 24, 36 and 48. Evaluated outcomes included tender joint count, swollen joint count, ACR20, ACR50, ACR70, HAQ and DAS28.\n\n\nRESULTS\nOne-hundred and five (105) subjects were entered into the cohort. The median of tender and swollen joint count, ranged from 19 and 14, respectively at onset to 1 at the 12th month. By month 12, 90.5% of the subjects reached ACR20, 86% ACR50, and 65% ACR70. The median of DAS28 went from 4.7 to 2, and the median HAQ went from 1.3 to 0.2. The rate of adverse effects was 14 for every 100 persons per year. No serious adverse effects were reported. The most frequent were pruritus (5 cases), and rhinitis (3 cases).\n\n\nCONCLUSIONS\nAfter a year of following up a patient cohort treated with etanercept 25 mg twice per week, significant clinical results were observed, resulting in adequate disease control in a high percentage of patients with an adequate level of safety.",
"title": ""
},
{
"docid": "neg:1840320_4",
"text": "While Bitcoin (Peer-to-Peer Electronic Cash) [Nak]solved the double spend problem and provided work withtimestamps on a public ledger, it has not to date extendedthe functionality of a blockchain beyond a transparent andpublic payment system. Satoshi Nakamoto's original referenceclient had a decentralized marketplace service which was latertaken out due to a lack of resources [Deva]. We continued withNakamoto's vision by creating a set of commercial-grade ser-vices supporting a wide variety of business use cases, includinga fully developed blockchain-based decentralized marketplace,secure data storage and transfer, and unique user aliases thatlink the owner to all services controlled by that alias.",
"title": ""
},
{
"docid": "neg:1840320_5",
"text": "Forensic dentistry can be defined in many ways. One of the more elegant definitions is simply that forensic dentistry represents the overlap between the dental and the legal professions. This two-part series presents the field of forensic dentistry by outlining two of the major aspects of the profession: human identification and bite marks. This first paper examines the use of the human dentition and surrounding structures to enable the identification of found human remains. Conventional and novel techniques are presented.",
"title": ""
},
{
"docid": "neg:1840320_6",
"text": "Recent years have witnessed the significant advance in fine-grained visual categorization, which targets to classify the objects belonging to the same species. To capture enough subtle visual differences and build discriminative visual description, most of the existing methods heavily rely on the artificial part annotations, which are expensive to collect in real applications. Motivated to conquer this issue, this paper proposes a multi-level coarse-to-fine object description. This novel description only requires the original image as input, but could automatically generate visual descriptions discriminative enough for fine-grained visual categorization. This description is extracted from five sources representing coarse-to-fine visual clues: 1) original image is used as the source of global visual clue; 2) object bounding boxes are generated using convolutional neural network (CNN); 3) with the generated bounding box, foreground is segmented using the proposed k nearest neighbour-based co-segmentation algorithm; and 4) two types of part segmentations are generated by dividing the foreground with an unsupervised part learning strategy. The final description is generated by feeding these sources into CNN models and concatenating their outputs. Experiments on two public benchmark data sets show the impressive performance of this coarse-to-fine description, i.e., classification accuracy achieves 82.5% on CUB-200-2011, and 86.9% on fine-grained visual categorization-Aircraft, respectively, which outperform many recent works.",
"title": ""
},
{
"docid": "neg:1840320_7",
"text": "In recent years, there has been a substantial amount of work on large-scale data analytics using Hadoop-based platforms running on large clusters of commodity machines. A lessexplored topic is how those data, dominated by application logs, are collected and structured to begin with. In this paper, we present Twitter’s production logging infrastructure and its evolution from application-specific logging to a unified “client events” log format, where messages are captured in common, well-formatted, flexible Thrift messages. Since most analytics tasks consider the user session as the basic unit of analysis, we pre-materialize “session sequences”, which are compact summaries that can answer a large class of common queries quickly. The development of this infrastructure has streamlined log collection and data analysis, thereby improving our ability to rapidly experiment and iterate on various aspects of the service.",
"title": ""
},
{
"docid": "neg:1840320_8",
"text": "Advances in information technology, particularly in the e-business arena, are enabling firms to rethink their supply chain strategies and explore new avenues for inter-organizational cooperation. However, an incomplete understanding of the value of information sharing and physical flow coordination hinder these efforts. This research attempts to help fill these gaps by surveying prior research in the area, categorized in terms of information sharing and flow coordination. We conclude by highlighting gaps in the current body of knowledge and identifying promising areas for future research. Subject Areas: e-Business, Inventory Management, Supply Chain Management, and Survey Research.",
"title": ""
},
{
"docid": "neg:1840320_9",
"text": "A pre-trained convolutional deep neural network (CNN) is a feed-forward computation perspective, which is widely used for the embedded systems, requires high power-and-area efficiency. This paper realizes a binarized CNN which treats only binary 2-values (+1/-1) for the inputs and the weights. In this case, the multiplier is replaced into an XNOR circuit instead of a dedicated DSP block. For hardware implementation, using binarized inputs and weights is more suitable. However, the binarized CNN requires the batch normalization techniques to retain the classification accuracy. In that case, the additional multiplication and addition require extra hardware, also, the memory access for its parameters reduces system performance. In this paper, we propose the batch normalization free CNN which is mathematically equivalent to the CNN using batch normalization. The proposed CNN treats the binarized inputs and weights with the integer bias. We implemented the VGG-16 benchmark CNN on the NetFPGA-SUME FPGA board, which has the Xilinx Inc. Virtex7 FPGA and three off-chip QDR II+ Synchronous SRAMs. Compared with the conventional FPGA realizations, although the classification error rate is 6.5% decayed, the performance is 2.82 times faster, the power efficiency is 1.76 times lower, and the area efficiency is 11.03 times smaller. Thus, our method is suitable for the embedded computer system.",
"title": ""
},
{
"docid": "neg:1840320_10",
"text": "68 Computer Music Journal As digitization and information technologies advance, document analysis and optical-characterrecognition technologies have become more widely used. Optical Music Recognition (OMR), also commonly known as OCR (Optical Character Recognition) for Music, was first attempted in the 1960s (Pruslin 1966). Standard OCR techniques cannot be used in music-score recognition, because music notation has a two-dimensional structure. In a staff, the horizontal position denotes different durations of notes, and the vertical position defines the height of the note (Roth 1994). Models for nonmusical OCR assessment have been proposed and largely used (Kanai et al. 1995; Ventzislav 2003). An ideal system that could reliably read and “understand” music notation could be used in music production for educational and entertainment applications. OMR is typically used today to accelerate the conversion from image music sheets into a symbolic music representation that can be manipulated, thus creating new and revised music editions. Other applications use OMR systems for educational purposes (e.g., IMUTUS; see www.exodus.gr/imutus), generating customized versions of music exercises. A different use involves the extraction of symbolic music representations to be used as incipits or as descriptors in music databases and related retrieval systems (Byrd 2001). OMR systems can be classified on the basis of the granularity chosen to recognize the music score’s symbols. The architecture of an OMR system is tightly related to the methods used for symbol extraction, segmentation, and recognition. Generally, the music-notation recognition process can be divided into four main phases: (1) the segmentation of the score image to detect and extract symbols; (2) the recognition of symbols; (3) the reconstruction of music information; and (4) the construction of the symbolic music notation model to represent the information (Bellini, Bruno, and Nesi 2004). Music notation may present very complex constructs and several styles. This problem has been recently addressed by the MUSICNETWORK and Motion Picture Experts Group (MPEG) in their work on Symbolic Music Representation (www .interactivemusicnetwork.org/mpeg-ahg). Many music-notation symbols exist, and they can be combined in different ways to realize several complex configurations, often without using well-defined formatting rules (Ross 1970; Heussenstamm 1987). Despite various research systems for OMR (e.g., Prerau 1970; Tojo and Aoyama 1982; Rumelhart, Hinton, and McClelland 1986; Fujinaga 1988, 1996; Carter 1989, 1994; Kato and Inokuchi 1990; Kobayakawa 1993; Selfridge-Field 1993; Ng and Boyle 1994, 1996; Coüasnon and Camillerapp 1995; Bainbridge and Bell 1996, 2003; Modayur 1996; Cooper, Ng, and Boyle 1997; Bellini and Nesi 2001; McPherson 2002; Bruno 2003; Byrd 2006) as well as commercially available products, optical music recognition—and more generally speaking, music recognition—is a research field affected by many open problems. The meaning of “music recognition” changes depending on the kind of applications and goals (Blostein and Carter 1992): audio generation from a musical score, music indexing and searching in a library database, music analysis, automatic transcription of a music score into parts, transcoding a score into interchange data formats, etc. 
For such applications, we must employ common tools to provide answers to questions such as “What does a particular percentagerecognition rate that is claimed by this particular algorithm really mean?” and “May I invoke a common methodology to compare different OMR tools on the basis of my music?” As mentioned in Blostein and Carter (1992) and Miyao and Haralick (2000), there is no standard for expressing the results of the OMR process. Assessing Optical Music Recognition Tools",
"title": ""
},
{
"docid": "neg:1840320_11",
"text": "Hashing, or learning binary embeddings of data, is frequently used in nearest neighbor retrieval. In this paper, we develop learning to rank formulations for hashing, aimed at directly optimizing ranking-based evaluation metrics such as Average Precision (AP) and Normalized Discounted Cumulative Gain (NDCG). We first observe that the integer-valued Hamming distance often leads to tied rankings, and propose to use tie-aware versions of AP and NDCG to evaluate hashing for retrieval. Then, to optimize tie-aware ranking metrics, we derive their continuous relaxations, and perform gradient-based optimization with deep neural networks. Our results establish the new state-of-the-art for image retrieval by Hamming ranking in common benchmarks.",
"title": ""
},
{
"docid": "neg:1840320_12",
"text": "Crude extracts of curcuminoids and essential oil of Curcuma longa varieties Kasur, Faisalabad and Bannu were studied for their antibacterial activity against 4 bacterial strains viz., Bacillus subtilis, Bacillus macerans, Bacillus licheniformis and Azotobacter using agar well diffusion method. Solvents used to determine antibacterial activity were ethanol and methanol. Ethanol was used for the extraction of curcuminoids. Essential oil was extracted by hydrodistillation and diluted in methanol by serial dilution method. Both Curcuminoids and oil showed zone of inhibition against all tested strains of bacteria. Among all the three turmeric varieties, Kasur variety had the most inhibitory effect on the growth of all bacterial strains tested as compared to Faisalabad and Bannu varieties. Among all the bacterial strains B. subtilis was the most sensitive to turmeric extracts of curcuminoids and oil. The MIC value for different strains and varieties ranged from 3.0 to 20.6 mm in diameter.",
"title": ""
},
{
"docid": "neg:1840320_13",
"text": "This paper proposes a scalable approach for distinguishing malicious files from clean files by investigating the behavioural features using logs of various API calls. We also propose, as an alternative to the traditional method of manually identifying malware files, an automated classification system using runtime features of malware files. For both projects, we use an automated tool running in a virtual environment to extract API call features from executables and apply pattern recognition algorithms and statistical methods to differentiate between files. Our experimental results, based on a dataset of 1368 malware and 456 cleanware files, provide an accuracy of over 97% in distinguishing malware from cleanware. Our techniques provide a similar accuracy for classifying malware into families. In both cases, our results outperform comparable previously published techniques.",
"title": ""
},
{
"docid": "neg:1840320_14",
"text": "Renewable energy is currently the main direction of development of electric power. Because of its own characteristics, the reliability of renewable energy generation is low. Renewable energy generation system needs lots of energy conversion devices which are made of power electronic devices. Too much power electronic components can damage power quality in microgrid. High Frequency AC (HFAC) microgrid is an effective way to solve the problems of renewable energy generation system. Transmitting electricity by means of HFAC is a novel idea in microgrid. Although the HFAC will cause more loss of power, it can improve the power quality in microgrid. HFAC can also reduce the impact of fluctuations of renewable energy in microgrid. This paper mainly simulates the HFAC with Matlab/Simulink and analyzes the feasibility of HFAC in microgrid.",
"title": ""
},
{
"docid": "neg:1840320_15",
"text": "AIM\nThe aim of the study was to evaluate the bleaching effect, morphological changes, and variations in calcium (Ca) and phosphate (P) in the enamel with hydrogen peroxide (HP) and carbamide peroxide (CP) after the use of different application regimens.\n\n\nMATERIALS AND METHODS\nFour groups of five teeth were randomly assigned, according to the treatment protocol: HP 37.5% applied for 30 or 60 minutes (HP30, HP60), CP 16% applied for 14 or 28 hours (CP14, CP28). Changes in dental color were evaluated, according to the following formula: ΔE = [(La-Lb)2+(aa-ab)2 + (ba-bb)2]1/2. Enamel morphology and Ca and P compositions were evaluated by confocal laser scanning microscope and environmental scanning electron microscopy.\n\n\nRESULTS\nΔE HP30 was significantly greater than CP14 (10.37 ± 2.65/8.56 ± 1.40), but not between HP60 and CP28. HP60 shows greater morphological changes than HP30. No morphological changes were observed in the groups treated with CP. The reduction in Ca and P was significantly greater in HP60 than in CP28 (p < 0.05).\n\n\nCONCLUSION\nBoth formulations improved tooth color; HP produced morphological changes and Ca and P a gradual decrease, while CP produced no morphological changes, and the decrease in mineral component was smaller.\n\n\nCLINICAL SIGNIFICANCE\nCP 16% applied during 2 weeks could be equally effective and safer for tooth whitening than to administer two treatment sessions with HP 37.5%.",
"title": ""
},
{
"docid": "neg:1840320_16",
"text": "Wireless multimedia sensor networks (WMSNs) attracts significant attention in the field of agriculture where disease detection plays an important role. To improve the cultivation yield of plants it is necessary to detect the onset of diseases in plants and provide advice to farmers who will act based on the received suggestion. Due to the limitations of WMSN, it is necessary to design a simple system which can provide higher accuracy with less complexity. In this paper a novel disease detection system (DDS) is proposed to detect and classify the diseases in leaves. Statistical based thresholding strategy is proposed for segmentation which is less complex compared to k-means clustering method. The features extracted from the segmented image will be transmitted through sensor nodes to the monitoring site where the analysis and classification is done using Support Vector Machine classifier. The performance of the proposed DDS has been evaluated in terms of accuracy and is compared with the existing k-means clustering technique. The results show that the proposed method provides an overall accuracy of around 98%. The transmission energy is also analyzed in real time using TelosB nodes.",
"title": ""
},
{
"docid": "neg:1840320_17",
"text": "NK fitness landscapes are stochastically generated fitness functions on bit strings, parameterized (with genes and interactions between genes) so as to make them tunably ‘rugged’. Under the ‘natural’ genetic operators of bit-flipping mutation or recombination, NK landscapes produce multiple domains of attraction for the evolutionary dynamics. NK landscapes have been used in models of epistatic gene interactions, coevolution, genome growth, and Wright’s shifting balance model of adaptation. Theory for adaptive walks on NK landscapes has been derived, and generalizations that extend beyond Kauffman’s original framework have been utilized in these applications.",
"title": ""
},
{
"docid": "neg:1840320_18",
"text": "We propose a new generative language model for sentences that first samples a prototype sentence from the training corpus and then edits it into a new sentence. Compared to traditional language models that generate from scratch either left-to-right or by first sampling a latent sentence vector, our prototype-then-edit model improves perplexity on language modeling and generates higher quality outputs according to human evaluation. Furthermore, the model gives rise to a latent edit vector that captures interpretable semantics such as sentence similarity and sentence-level analogies.",
"title": ""
},
{
"docid": "neg:1840320_19",
"text": "This paper presents the analysis and implementation of an LCLC resonant converter working as maximum power point tracker (MPPT) in a PV system. This converter must guarantee a constant DC output voltage and must vary its effective input resistance in order to extract the maximum power of the PV generator. Preliminary analysis concludes that not all resonant load topologies can achieve the design conditions for a MPPT. Only the LCLC and LLC converter are suitable for this purpose.",
"title": ""
}
] |
1840321 | Towards a Reduced-Wire Interface for CMUT-Based Intravascular Ultrasound Imaging Systems | [
{
"docid": "pos:1840321_0",
"text": "Three experimental techniques based on automatic swept-frequency network and impedance analysers were used to measure the dielectric properties of tissue in the frequency range 10 Hz to 20 GHz. The technique used in conjunction with the impedance analyser is described. Results are given for a number of human and animal tissues, at body temperature, across the frequency range, demonstrating that good agreement was achieved between measurements using the three pieces of equipment. Moreover, the measured values fall well within the body of corresponding literature data.",
"title": ""
},
{
"docid": "pos:1840321_1",
"text": "This paper discusses two antennas monolithically integrated on-chip to be used respectively for wireless powering and UWB transmission of a tag designed and fabricated in 0.18-μm CMOS technology. A multiturn loop-dipole structure with inductive and resistive stubs is chosen for both antennas. Using these on-chip antennas, the chip employs asymmetric communication links: at downlink, the tag captures the required supply wirelessly from the received RF signal transmitted by a reader and, for the uplink, ultra-wideband impulse-radio (UWB-IR), in the 3.1-10.6-GHz band, is employed instead of backscattering to achieve extremely low power and a high data rate up to 1 Mb/s. At downlink with the on-chip power-scavenging antenna and power-management unit circuitry properly designed, 7.5-cm powering distance has been achieved, which is a huge improvement in terms of operation distance compared with other reported tags with on-chip antenna. Also, 7-cm operating distance is achieved with the implemented on-chip UWB antenna. The tag can be powered up at all the three ISM bands of 915 MHz and 2.45 GHz, with off-chip antennas, and 5.8 GHz with the integrated on-chip antenna. The tag receives its clock and the commands wirelessly through the modulated RF powering-up signal. Measurement results show that the tag can operate up to 1 Mb/s data rate with a minimum input power of -19.41 dBm at 915-MHz band, corresponding to 15.7 m of operation range with an off-chip 0-dB gain antenna. This is a great improvement compared with conventional passive RFIDs in term of data rate and operation distance. The power consumption of the chip is measured to be just 16.6 μW at the clock frequency of 10 MHz at 1.2-V supply. In addition, in this paper, for the first time, the radiation pattern of an on-chip antenna at such a frequency is measured. The measurement shows that the antenna has an almost omnidirectional radiation pattern so that the chip's performance is less direction-dependent.",
"title": ""
}
] | [
{
"docid": "neg:1840321_0",
"text": "Using soft tissue fillers to correct postrhinoplasty deformities in the nose is appealing. Fillers are minimally invasive and can potentially help patients who are concerned with the financial expense, anesthetic risk, or downtime generally associated with a surgical intervention. A variety of filler materials are currently available and have been used for facial soft tissue augmentation. Of these, hyaluronic acid (HA) derivatives, calcium hydroxylapatite gel (CaHA), and silicone have most frequently been used for treating nasal deformities. While effective, silicone is known to cause severe granulomatous reactions in some patients and should be avoided. HA and CaHA are likely safer, but still may occasionally lead to complications such as infection, thinning of the skin envelope, and necrosis. Nasal injection technique must include sub-SMAS placement to eliminate visible or palpable nodularity. Restricting the use of fillers to the nasal dorsum and sidewalls minimizes complications because more adverse events occur after injections to the nasal tip and alae. We believe that HA and CaHA are acceptable for the treatment of postrhinoplasty deformities in carefully selected patients; however, patients who are treated must be followed closely for complications. The use of any soft tissue filler in the nose should always be approached with great caution and with a thorough consideration of a patient's individual circumstances.",
"title": ""
},
{
"docid": "neg:1840321_1",
"text": "A key advantage of scientific workflow systems over traditional scripting approaches is their ability to automatically record data and process dependencies introduced during workflow runs. This information is often represented through provenance graphs, which can be used by scientists to better understand, reproduce, and verify scientific results. However, while most systems record and store data and process dependencies, few provide easy-to-use and efficient approaches for accessing and querying provenance information. Instead, users formulate provenance graph queries directly against physical data representations (e.g., relational, XML, or RDF), leading to queries that are difficult to express and expensive to evaluate. We address these problems through a high-level query language tailored for expressing provenance graph queries. The language is based on a general model of provenance supporting scientific workflows that process XML data and employ update semantics. Query constructs are provided for querying both structure and lineage information. Unlike other languages that return sets of nodes as answers, our query language is closed, i.e., answers to lineage queries are sets of lineage dependencies (edges) allowing answers to be further queried. We provide a formal semantics for the language and present novel techniques for efficiently evaluating lineage queries. Experimental results on real and synthetic provenance traces demonstrate that our lineage based optimizations outperform an in-memory and standard database implementation by orders of magnitude. We also show that our strategies are feasible and can significantly reduce both provenance storage size and query execution time when compared with standard approaches.",
"title": ""
},
{
"docid": "neg:1840321_2",
"text": "The abundant spatial and contextual information provided by the advanced remote sensing technology has facilitated subsequent automatic interpretation of the optical remote sensing images (RSIs). In this paper, a novel and effective geospatial object detection framework is proposed by combining the weakly supervised learning (WSL) and high-level feature learning. First, deep Boltzmann machine is adopted to infer the spatial and structural information encoded in the low-level and middle-level features to effectively describe objects in optical RSIs. Then, a novel WSL approach is presented to object detection where the training sets require only binary labels indicating whether an image contains the target object or not. Based on the learnt high-level features, it jointly integrates saliency, intraclass compactness, and interclass separability in a Bayesian framework to initialize a set of training examples from weakly labeled images and start iterative learning of the object detector. A novel evaluation criterion is also developed to detect model drift and cease the iterative learning. Comprehensive experiments on three optical RSI data sets have demonstrated the efficacy of the proposed approach in benchmarking with several state-of-the-art supervised-learning-based object detection approaches.",
"title": ""
},
{
"docid": "neg:1840321_3",
"text": "Thirty-six blast-exposed patients and twenty-nine non-blast-exposed control subjects were tested on a battery of behavioral and electrophysiological tests that have been shown to be sensitive to central auditory processing deficits. Abnormal performance among the blast-exposed patients was assessed with reference to normative values established as the mean performance on each test by the control subjects plus or minus two standard deviations. Blast-exposed patients performed abnormally at rates significantly above that which would occur by chance on three of the behavioral tests of central auditory processing: the Gaps-In-Noise, Masking Level Difference, and Staggered Spondaic Words tests. The proportion of blast-exposed patients performing abnormally on a speech-in-noise test (Quick Speech-In-Noise) was also significantly above that expected by chance. These results suggest that, for some patients, blast exposure may lead to difficulties with hearing in complex auditory environments, even when peripheral hearing sensitivity is near normal limits.",
"title": ""
},
{
"docid": "neg:1840321_4",
"text": "Elderly adults may master challenging cognitive demands by additionally recruiting the cross-hemispheric counterparts of otherwise unilaterally engaged brain regions, a strategy that seems to be at odds with the notion of lateralized functions in cerebral cortex. We wondered whether bilateral activation might be a general coping strategy that is independent of age, task content and brain region. While using functional magnetic resonance imaging (fMRI), we pushed young and old subjects to their working memory (WM) capacity limits in verbal, spatial, and object domains. Then, we compared the fMRI signal reflecting WM maintenance between hemispheric counterparts of various task-relevant cerebral regions that are known to exhibit lateralization. Whereas language-related areas kept their lateralized activation pattern independent of age in difficult tasks, we observed bilaterality in dorsolateral and anterior prefrontal cortex across WM domains and age groups. In summary, the additional recruitment of cross-hemispheric counterparts seems to be an age-independent domain-general strategy to master cognitive challenges. This phenomenon is largely confined to prefrontal cortex, which is arguably less specialized and more flexible than other parts of the brain.",
"title": ""
},
{
"docid": "neg:1840321_5",
"text": "Accurate segmentation of the heart is an important step towards evaluating cardiac function. In this paper, we present a fully automated framework for segmentation of the left (LV) and right (RV) ventricular cavities and the myocardium (Myo) on short-axis cardiac MR images. We investigate various 2D and 3D convolutional neural network architectures for this task. Experiments were performed on the ACDC 2017 challenge training dataset comprising cardiac MR images of 100 patients, where manual reference segmentations were made available for end-diastolic (ED) and end-systolic (ES) frames. We find that processing the images in a slice-by-slice fashion using 2D networks is beneficial due to a relatively large slice thickness. However, the exact network architecture only plays a minor role. We report mean Dice coefficients of 0.950 (LV), 0.893 (RV), and 0.899 (Myo), respectively with an average evaluation time of 1.1 seconds per volume on a modern GPU.",
"title": ""
},
{
"docid": "neg:1840321_6",
"text": "Although various scalable deep learning software packages have been proposed, it remains unclear how to best leverage parallel and distributed computing infrastructure to accelerate their training and deployment. Moreover, the effectiveness of existing parallel and distributed systems varies widely based on the neural network architecture and dataset under consideration. In order to efficiently explore the space of scalable deep learning systems and quickly diagnose their effectiveness for a given problem instance, we introduce an analytical performance model called PALEO. Our key observation is that a neural network architecture carries with it a declarative specification of the computational requirements associated with its training and evaluation. By extracting these requirements from a given architecture and mapping them to a specific point within the design space of software, hardware and communication strategies, PALEO can efficiently and accurately model the expected scalability and performance of a putative deep learning system. We show that PALEO is robust to the choice of network architecture, hardware, software, communication schemes, and parallelization strategies. We further demonstrate its ability to accurately model various recently published scalability results for CNNs such as NiN, Inception and AlexNet.",
"title": ""
},
{
"docid": "neg:1840321_7",
"text": "Plenty of face detection and recognition methods have been proposed and got delightful results in decades. Common face recognition pipeline consists of: 1) face detection, 2) face alignment, 3) feature extraction, 4) similarity calculation, which are separated and independent from each other. The separated face analyzing stages lead the model redundant calculation and are hard for end-to-end training. In this paper, we proposed a novel end-to-end trainable convolutional network framework for face detection and recognition, in which a geometric transformation matrix was directly learned to align the faces, instead of predicting the facial landmarks. In training stage, our single CNN model is supervised only by face bounding boxes and personal identities, which are publicly available from WIDER FACE [36] dataset and CASIA-WebFace [37] dataset. Tested on Face Detection Dataset and Benchmark (FDDB) [11] dataset and Labeled Face in the Wild (LFW) [9] dataset, we have achieved 89.24% recall for face detection task and 98.63% verification accuracy for face recognition task simultaneously, which are comparable to state-of-the-art results.",
"title": ""
},
{
"docid": "neg:1840321_8",
"text": "Bioinformatics software quality assurance is essential in genomic medicine. Systematic verification and validation of bioinformatics software is difficult because it is often not possible to obtain a realistic \"gold standard\" for systematic evaluation. Here we apply a technique that originates from the software testing literature, namely Metamorphic Testing (MT), to systematically test three widely used short-read sequence alignment programs. MT alleviates the problems associated with the lack of gold standard by checking that the results from multiple executions of a program satisfy a set of expected or desirable properties that can be derived from the software specification or user expectations. We tested BWA, Bowtie and Bowtie2 using simulated data and one HapMap dataset. It is interesting to observe that multiple executions of the same aligner using slightly modified input FASTQ sequence file, such as after randomly re-ordering of the reads, may affect alignment results. Furthermore, we found that the list of variant calls can be affected unless strict quality control is applied during variant calling. Thorough testing of bioinformatics software is important in delivering clinical genomic medicine. This paper demonstrates a different framework to test a program that involves checking its properties, thus greatly expanding the number and repertoire of test cases we can apply in practice.",
"title": ""
},
{
"docid": "neg:1840321_9",
"text": "Although social networking sites (SNSs) have attracted increased attention and members in recent years, there has been little research on it: particularly on how a users’ extroversion or introversion can affect their intention to pay for these services and what other factors might influence them. We therefore proposed and tested a model that measured the users’ value and satisfaction perspectives by examining the influence of these factors in an empirical survey of 288 SNS members. At the same time, the differences due to their psychological state were explored. The causal model was validated using PLSGraph 3.0; six out of eight study hypotheses were supported. The results indicated that perceived value significantly influenced the intention to pay SNS subscription fees while satisfaction did not. Moreover, extroverts thought more highly of the social value of the SNS, while introverts placed more importance on its emotional and price value. The implications of these findings are discussed. Crown Copyright 2010 Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840321_10",
"text": "Taxonomies of person characteristics are well developed, whereas taxonomies of psychologically important situation characteristics are underdeveloped. A working model of situation perception implies the existence of taxonomizable dimensions of psychologically meaningful, important, and consequential situation characteristics tied to situation cues, goal affordances, and behavior. Such dimensions are developed and demonstrated in a multi-method set of 6 studies. First, the \"Situational Eight DIAMONDS\" dimensions Duty, Intellect, Adversity, Mating, pOsitivity, Negativity, Deception, and Sociality (Study 1) are established from the Riverside Situational Q-Sort (Sherman, Nave, & Funder, 2010, 2012, 2013; Wagerman & Funder, 2009). Second, their rater agreement (Study 2) and associations with situation cues and goal/trait affordances (Studies 3 and 4) are examined. Finally, the usefulness of these dimensions is demonstrated by examining their predictive power of behavior (Study 5), particularly vis-à-vis measures of personality and situations (Study 6). Together, we provide extensive and compelling evidence that the DIAMONDS taxonomy is useful for organizing major dimensions of situation characteristics. We discuss the DIAMONDS taxonomy in the context of previous taxonomic approaches and sketch future research directions.",
"title": ""
},
{
"docid": "neg:1840321_11",
"text": "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and powerhungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.",
"title": ""
},
{
"docid": "neg:1840321_12",
"text": "We present a novel method to obtain a 3D Euclidean reconstruction of both the background and moving objects in a video sequence. We assume that, multiple objects are moving rigidly on a ground plane observed by a moving camera. The video sequence is first segmented into static background and motion blobs by a homography-based motion segmentation method. Then classical \"Structure from Motion\" (SfM) techniques are applied to obtain a Euclidean reconstruction of the static background. The motion blob corresponding to each moving object is treated as if there were a static object observed by a hypothetical moving camera, called a \"virtual camera\". This virtual camera shares the same intrinsic parameters with the real camera but moves differently due to object motion. The same SfM techniques are applied to estimate the 3D shape of each moving object and the pose of the virtual camera. We show that the unknown scale of moving objects can be approximately determined by the ground plane, which is a key contribution of this paper. Another key contribution is that we prove that the 3D motion of moving objects can be solved from the virtual camera motion with a linear constraint imposed on the object translation. In our approach, a planartranslation constraint is formulated: \"the 3D instantaneous translation of moving objects must be parallel to the ground plane\". Results on real-world video sequences demonstrate the effectiveness and robustness of our approach.",
"title": ""
},
{
"docid": "neg:1840321_13",
"text": "The success of deep learning has been a catalyst to solving increasingly complex machine-learning problems, which often involve multiple data modalities. We review recent advances in deep multimodal learning and highlight the state-of the art, as well as gaps and challenges in this active research field. We first classify deep multimodal learning architectures and then discuss methods to fuse learned multimodal representations in deep-learning architectures. We highlight two areas of research–regularization strategies and methods that learn or optimize multimodal fusion structures–as exciting areas for future work.",
"title": ""
},
{
"docid": "neg:1840321_14",
"text": "Flavor and color of roasted peanuts are important research areas due to their significant influence on consumer preference. The aim of the present study was to explore correlations between sensory attributes of peanuts, volatile headspace compounds and color parameters. Different raw peanuts were selected to be representative of common market types, varieties, growing locations and grades used in Europe. Peanuts were roasted by a variety of processing technologies, resulting in 134 unique samples, which were analyzed for color, volatile composition and flavor profile by expert panel. Several headspace volatile compounds which positively or negatively correlated to \"roasted peanut\", \"raw bean\", \"dark roast\" and \"sweet\" attributes were identified. Results demonstrated that the correlation of CIELAB color parameters with roast related aromas, often taken for granted by the industry, is not strong when samples of different raw materials are subjected to different processing conditions.",
"title": ""
},
{
"docid": "neg:1840321_15",
"text": "In this paper, a 10 kW current-fed DC-DC converter using resonant push-pull topology is demonstrated and analyzed. The grounds for component dimensioning are given and the advantages and disadvantages of the resonant push-pull topology are discussed. The converter characteristics and efficiencies are demonstrated by calculations and prototype measurements.",
"title": ""
},
{
"docid": "neg:1840321_16",
"text": "This paper presents a proposed smartphone application for the unique SmartAbility Framework that supports interaction with technology for people with reduced physical ability, through focusing on the actions that they can perform independently. The Framework is a culmination of knowledge obtained through previously conducted technology feasibility trials and controlled usability evaluations involving the user community. The Framework is an example of ability-based design that focuses on the abilities of users instead of their disabilities. The paper includes a summary of Versions 1 and 2 of the Framework, including the results of a two-phased validation approach, conducted at the UK Mobility Roadshow and via a focus group of domain experts. A holistic model developed by adapting the House of Quality (HoQ) matrix of the Quality Function Deployment (QFD) approach is also described. A systematic literature review of sensor technologies built into smart devices establishes the capabilities of sensors in the Android and iOS operating systems. The review defines a set of inclusion and exclusion criteria, as well as search terms used to elicit literature from online repositories. The key contribution is the mapping of ability-based sensor technologies onto the Framework, to enable the future implementation of a smartphone application. Through the exploitation of the SmartAbility application, the Framework will increase technology amongst people with reduced physical ability and provide a promotional tool for assistive technology manufacturers.",
"title": ""
},
{
"docid": "neg:1840321_17",
"text": "Students, researchers and professional analysts lack effective tools to make personal and collective sense of problems while working in distributed teams. Central to this work is the process of sharing—and contesting—interpretations via different forms of argument. How does the “Web 2.0” paradigm challenge us to deliver useful, usable tools for online argumentation? This paper reviews the current state of the art in Web Argumentation, describes key features of the Web 2.0 orientation, and identifies some of the tensions that must be negotiated in bringing these worlds together. It then describes how these design principles are interpreted in Cohere, a web tool for social bookmarking, idea-linking, and argument visualization.",
"title": ""
},
{
"docid": "neg:1840321_18",
"text": "Autonomous operation is becoming an increasingly important factor for UAVs. It enables a vehicle to decide on the most appropriate action under consideration of the current vehicle and environment state. We investigated the decision-making process using the cognitive agent-based architecture Soar, which uses techniques adapted from human decision-making. Based on Soar an agent was developed which enables UAVs to autonomously make decisions and interact with a dynamic environment. One or more UAV agents were then tested in a simulation environment which has been developed using agent-based modelling. By simulating a dynamic environment, the capabilities of a UAV agent can be tested under defined conditions and additionally its behaviour can be visualised. The agent’s abilities were demonstrated using a scenario consisting of a highly dynamic border-surveillance mission with multiple autonomous UAVs. We can show that the autonomous agents are able to execute the mission successfully and can react adaptively to unforeseen events. We conclude that using a cognitive architecture is a promising approach for modelling autonomous behaviour.",
"title": ""
},
{
"docid": "neg:1840321_19",
"text": "Since the end of the 20th century, it has become clear that web browsers will play a crucial role in accessing Internet resources such as the World Wide Web. They evolved into complex software suites that are able to process a multitude of data formats. Just-In-Time (JIT) compilation was incorporated to speed up the execution of script code, but is also used besides web browsers for performance reasons. Attackers happily welcomed JIT in their own way, and until today, JIT compilers are an important target of various attacks. This includes for example JIT-Spray, JIT-based code-reuse attacks and JIT-specific flaws to circumvent mitigation techniques in order to simplify the exploitation of memory-corruption vulnerabilities. Furthermore, JIT compilers are complex and provide a large attack surface, which is visible in the steady stream of critical bugs appearing in them. In this paper, we survey and systematize the jungle of JIT compilers of major (client-side) programs, and provide a categorization of offensive techniques for abusing JIT compilation. Thereby, we present techniques used in academic as well as in non-academic works which try to break various defenses against memory-corruption vulnerabilities. Additionally, we discuss what mitigations arouse to harden JIT compilers to impede exploitation by skilled attackers wanting to abuse Just-In-Time compilers.",
"title": ""
}
] |
1840322 | Computational Technique for an Efficient Classification of Protein Sequences With Distance-Based Sequence Encoding Algorithm | [
{
"docid": "pos:1840322_0",
"text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.",
"title": ""
}
] | [
{
"docid": "neg:1840322_0",
"text": "Offshore software development is a new trend in the information technology (IT) outsourcing field, fueled by the globalization of IT and the improvement of telecommunication facilities. Countries such as India, Ireland, and Israel have established a significant presence in this market. In this article, we discuss how software processes affect offshore development projects. We use data from projects in India, and focus on three measures of project performance: effort, elapsed time, and software rework.",
"title": ""
},
{
"docid": "neg:1840322_1",
"text": "This paper describes experiments in Machine Learning for text classification using a new representation of text based on WordNet hypernyms. Six binary classification tasks of varying diff iculty are defined, and the Ripper system is used to produce discrimination rules for each task using the new hypernym density representation. Rules are also produced with the commonly used bag-of-words representation, incorporating no knowledge from WordNet. Experiments show that for some of the more diff icult tasks the hypernym density representation leads to significantly more accurate and more comprehensible rules.",
"title": ""
},
{
"docid": "neg:1840322_2",
"text": "Development of a cystic mass on the nasal dorsum is a very rare complication of aesthetic rhinoplasty. Most reported cases are of mucous cyst and entrapment of the nasal mucosa in the subcutaneous space due to traumatic surgical technique has been suggested as a presumptive pathogenesis. Here, we report a case of dorsal nasal cyst that had a different pathogenesis for cyst formation. A 58-yr-old woman developed a large cystic mass on the nasal radix 30 yr after augmentation rhinoplasty with silicone material. The mass was removed via a direct open approach and the pathology findings revealed a foreign body inclusion cyst associated with silicone. Successful nasal reconstruction was performed with autologous cartilages. Discussion and a brief review of the literature will be focused on the pathophysiology of and treatment options for a postrhinoplasty dorsal cyst.",
"title": ""
},
{
"docid": "neg:1840322_3",
"text": "The increased attention to environmentalism in western societies has been accompanied by a rise in ecotourism, i.e. ecologically sensitive travel to remote areas to learn about ecosystems, as well as in cultural tourism, focusing on the people who are a part of ecosystems. Increasingly, the internet has partnered with ecotourism companies to provide information about destinations and facilitate travel arrangements. This study reviews the literature linking ecotourism and sustainable development, as well as prior research showing that cultures have been historically commodified in tourism advertising for developing countries destinations. We examine seven websites advertising ecotourism and cultural tourism and conclude that: (1) advertisements for natural and cultural spaces are not always consistent with the discourse of sustainability; and (2) earlier critiques of the commodification of culture in print advertising extend to internet advertising also.",
"title": ""
},
{
"docid": "neg:1840322_4",
"text": "Recently, efforts in the development of speech recognition systems and robots have come to fruition with an overflow of applications in our daily lives. However, we are still far from achieving natural interaction between humans and robots, given that robots do not take into account the emotional state of speakers. The objective of this research is to create an automatic emotion classifier integrated with a robot, such that the robot can understand the emotional state of a human user by analyzing the speech signals from the user. This becomes particularly relevant in the realm of using assistive robotics to tailor therapeutic techniques towards assisting children with autism spectrum disorder (ASD). Over the past two decades, the number of children being diagnosed with ASD has been rapidly increasing, yet the clinical and societal support have not been enough to cope with the needs. Therefore, finding alternative, affordable, and accessible means of therapy and assistance has become more of a concern. Improving audio-based emotion prediction for children with ASD will allow for the robotic system to properly assess the engagement level of the child and modify its responses to maximize the quality of interaction between the robot and the child and sustain an interactive learning environment.",
"title": ""
},
{
"docid": "neg:1840322_5",
"text": "Short texts usually encounter data sparsity and ambiguity problems in representations for their lack of context. In this paper, we propose a novel method to model short texts based on semantic clustering and convolutional neural network. Particularly, we first discover semantic cliques in embedding spaces by a fast clustering algorithm. Then, multi-scale semantic units are detected under the supervision of semantic cliques, which introduce useful external knowledge for short texts. These meaningful semantic units are combined and fed into convolutional layer, followed by max-pooling operation. Experimental results on two open benchmarks validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "neg:1840322_6",
"text": "For information retrieval, it is useful to classify documents using a hierarchy of terms from a domain. One problem is that, for many domains, hierarchies of terms are not available. The task 17 of SemEval 2015 addresses the problem of structuring a set of terms from a given domain into a taxonomy without manual intervention. Here we present some simple taxonomy structuring techniques, such as term overlap and document and sentence cooccurrence in large quantities of text (English Wikipedia) to produce hypernym pairs for the eight domain lists supplied by the task organizers. Our submission ranked first in this 2015 benchmark, which suggests that overly complicated methods might need to be adapted to individual domains. We describe our generic techniques and present an initial evaluation of results.",
"title": ""
},
{
"docid": "neg:1840322_7",
"text": "Low Power Wide Area (LPWA) connectivity, a wireless wide area technology that is characterized for interconnecting devices with low bandwidth connectivity and focusing on range and power efficiency, is seen as one of the fastest-growing components of Internet-of-Things (IoT). The LPWA connectivity is used to serve a diverse range of vertical applications, including agriculture, consumer, industrial, logistic, smart building, smart city and utilities. 3GPP has defined the maiden Narrowband IoT (NB-IoT) specification in Release 13 (Rel-13) to accommodate the LPWA demand. Several major cellular operators, such as China Mobile, Deutsch Telekom and Vodafone, have announced their NB-IoT trials or commercial network in year 2017. In Telekom Malaysia, we have setup a NB-IoT trial network for End-to-End (E2E) integration study. Our experimental assessment showed that the battery lifetime target for NB-IoT devices as stated by 3GPP utilizing latest-to-date Commercial Off-The-Shelf (COTS) NB-IoT modules is yet to be realized. Finally, several recommendations on how to optimize the battery lifetime while designing firmware for NB-IoT device are also provided.",
"title": ""
},
{
"docid": "neg:1840322_8",
"text": "Correspondence Institute of Computer Science, University of Tartu, Juhan Liivi 2, 50409 Tartu, Estonia Email: orlenyslp@ut.ee Summary Blockchain platforms, such as Ethereum, allow a set of actors to maintain a ledger of transactions without relying on a central authority and to deploy scripts, called smart contracts, that are executedwhenever certain transactions occur. These features can be used as basic building blocks for executing collaborative business processes between mutually untrusting parties. However, implementing business processes using the low-level primitives provided by blockchain platforms is cumbersome and error-prone. In contrast, established business process management systems, such as those based on the standard Business Process Model and Notation (BPMN), provide convenient abstractions for rapid development of process-oriented applications. This article demonstrates how to combine the advantages of a business process management system with those of a blockchain platform. The article introduces a blockchain-based BPMN execution engine, namely Caterpillar. Like any BPMN execution engine, Caterpillar supports the creation of instances of a process model and allows users to monitor the state of process instances and to execute tasks thereof. The specificity of Caterpillar is that the state of each process instance is maintained on the (Ethereum) blockchain and the workflow routing is performed by smart contracts generated by a BPMN-to-Solidity compiler. The Caterpillar compiler supports a large array of BPMN constructs, including subprocesses, multi-instances activities and event handlers. The paper describes the architecture of Caterpillar, and the interfaces it provides to support the monitoring of process instances, the allocation and execution of work items, and the execution of service tasks.",
"title": ""
},
{
"docid": "neg:1840322_9",
"text": "The Internet of Things (IoT) describes the interconnection of objects (or Things) for various purposes including identification, communication, sensing, and data collection. “Things” in this context range from traditional computing devices like Personal Computers (PC) to general household objects embedded with capabilities for sensing and/or communication through the use of technologies such as Radio Frequency Identification (RFID). This conceptual paper, from a philosophical viewpoint, introduces an initial set of guiding principles also referred to in the paper as commandments that can be applied by all the stakeholders involved in the IoT during its introduction, deployment and thereafter. © 2011 Published by Elsevier Ltd. Selection and/or peer-review under responsibility of [name organizer]",
"title": ""
},
{
"docid": "neg:1840322_10",
"text": "This essay grew out of an examination of one-tailed significance testing. One-tailed tests were little advocated by the founders of modern statistics but are widely used and recommended nowadays in the biological, behavioral and social sciences. The high frequency of their use in ecology and animal behavior and their logical indefensibil-ity have been documented in a companion review paper. In the present one, we trace the roots of this problem and counter some attacks on significance testing in general. Roots include: the early but irrational dichotomization of the P scale and adoption of the 'significant/non-significant' terminology; the mistaken notion that a high P value is evidence favoring the null hypothesis over the alternative hypothesis; and confusion over the distinction between statistical and research hypotheses. Resultant widespread misuse and misinterpretation of significance tests have also led to other problems, such as unjustifiable demands that reporting of P values be disallowed or greatly reduced and that reporting of confidence intervals and standardized effect sizes be required in their place. Our analysis of these matters thus leads us to a recommendation that for standard types of significance assessment the paleoFisherian and Neyman-Pearsonian paradigms be replaced by a neoFisherian one. The essence of the latter is that a critical α (probability of type I error) is not specified, the terms 'significant' and 'non-significant' are abandoned, that high P values lead only to suspended judgments, and that the so-called \" three-valued logic \" of Cox, Kaiser, Tukey, Tryon and Harris is adopted explicitly. Confidence intervals and bands, power analyses, and severity curves remain useful adjuncts in particular situations. Analyses conducted under this paradigm we term neoFisherian significance assessments (NFSA). Their role is assessment of the existence, sign and magnitude of statistical effects. The common label of null hypothesis significance tests (NHST) is retained for paleoFisherian and Neyman-Pearsonian approaches and their hybrids. The original Neyman-Pearson framework has no utility outside quality control type applications. Some advocates of Bayesian, likelihood and information-theoretic approaches to model selection have argued that P values and NFSAs are of little or no value, but those arguments do not withstand critical review. Champions of Bayesian methods in particular continue to overstate their value and relevance. 312 Hurlbert & Lombardi • ANN. ZOOL. FeNNICI Vol. 46 \" … the object of statistical methods is the reduction of data. A quantity of data … is to be replaced by relatively few quantities which shall …",
"title": ""
},
{
"docid": "neg:1840322_11",
"text": "The success of Bitcoin largely relies on the perception of a fair underlying peer-to-peer protocol: blockchain. Fairness here essentially means that the reward (in bitcoins) given to any participant that helps maintain the consistency of the protocol by mining, is proportional to the computational power devoted by that participant to the mining task. Without such perception of fairness, honest miners might be disincentivized to maintain the protocol, leaving the space for dishonest miners to reach a majority and jeopardize the consistency of the entire system. We prove, in this paper, that blockchain is actually unfair, even in a distributed system of only two honest miners. In a realistic setting where message delivery is not instantaneous, the ratio between the (expected) number of blocks committed by two miners is at least exponential in the product of the message delay and the difference between the two miners’ hashrates. To obtain our result, we model the growth of blockchain, which may be of independent interest. We also apply our result to explain recent empirical observations and vulnerabilities.",
"title": ""
},
{
"docid": "neg:1840322_12",
"text": "Esthesioneuroblastoma is a rare malignant tumor of sinonasal origin. These tumors typically present with unilateral nasal obstruction and epistaxis, and diagnosis is confirmed on biopsy. Over the past 15 years, significant advances have been made in endoscopic technology and techniques that have made this tumor amenable to expanded endonasal resection. There is growing evidence supporting the feasibility of safe and effective resection of esthesioneuroblastoma via an expanded endonasal approach. This article outlines a technique for endoscopic resection of esthesioneuroblastoma and reviews the current literature on esthesioneuroblastoma with emphasis on outcomes after endoscopic resection of these malignant tumors.",
"title": ""
},
{
"docid": "neg:1840322_13",
"text": "Autumn-seeded winter cereals acquire tolerance to freezing temperatures and become vernalized by exposure to low temperature (LT). The level of accumulated LT tolerance depends on the cold acclimation rate and factors controlling timing of floral transition at the shoot apical meristem. In this study, genomic loci controlling the floral transition time were mapped in a winter wheat (T. aestivum L.) doubled haploid (DH) mapping population segregating for LT tolerance and rate of phenological development. The final leaf number (FLN), days to FLN, and days to anthesis were determined for 142 DH lines grown with and without vernalization in controlled environments. Analysis of trait data by composite interval mapping (CIM) identified 11 genomic regions that carried quantitative trait loci (QTLs) for the developmental traits studied. CIM analysis showed that the time for floral transition in both vernalized and non-vernalized plants was controlled by common QTL regions on chromosomes 1B, 2A, 2B, 6A and 7A. A QTL identified on chromosome 4A influenced floral transition time only in vernalized plants. Alleles of the LT-tolerant parent, Norstar, delayed floral transition at all QTLs except at the 2A locus. Some of the QTL alleles delaying floral transition also increased the length of vegetative growth and delayed flowering time. The genes underlying the QTLs identified in this study encode factors involved in regional adaptation of cold hardy winter wheat.",
"title": ""
},
{
"docid": "neg:1840322_14",
"text": "Most machine learning methods are known to capture and exploit biases of the training data. While some biases are beneficial for learning, others are harmful. Specifically, image captioning models tend to exaggerate biases present in training data (e.g., if a word is present in 60% of training sentences, it might be predicted in 70% of sentences at test time). This can lead to incorrect captions in domains where unbiased captions are desired, or required, due to over-reliance on the learned prior and image context. In this work we investigate generation of gender-specific caption words (e.g. man, woman) based on the person’s appearance or the image context. We introduce a new Equalizer model that encourages equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present. The resulting model is forced to look at a person rather than use contextual cues to make a gender-specific prediction. The losses that comprise our model, the Appearance Confusion Loss and the Confident Loss, are general, and can be added to any description model in order to mitigate impacts of unwanted bias in a description dataset. Our proposed model has lower error than prior work when describing images with people and mentioning their gender and more closely matches the ground truth ratio of sentences including women to sentences including men. Finally, we show that our model more often looks at people when predicting their gender. 1",
"title": ""
},
{
"docid": "neg:1840322_15",
"text": "Today, digital games are available on a variety of mobile devices, such as tablet devices, portable game consoles and smart phones. Not only that, the latest mixed reality technology on mobile devices allows mobile games to integrate the real world environment into gameplay. However, little has been done to test whether the surroundings of play influence gaming experience. In this paper, we describe two studies done to test the effect of surroundings on immersion. Study One uses mixed reality games to investigate whether the integration of the real world environment reduces engagement. Whereas Study Two explored the effects of manipulating the lighting level, and therefore reducing visibility, of the surroundings. We found that immersion is reduced in the conditions where visibility of the surroundings is high. We argue that higher awareness of the surroundings has a strong impact on gaming experience.",
"title": ""
},
{
"docid": "neg:1840322_16",
"text": "Scar evaluation and revision techniques are chief among the most important skills in the facial plastic and reconstructive surgeon’s armamentarium. Often minimized in importance, these techniques depend as much on a thorough understanding of facial anatomy and aesthetics, advanced principles of wound healing, and an appreciation of the overshadowing psychological trauma as they do on thorough technical analysis and execution [1,2]. Scar revision is unique in the spectrum of facial plastic and reconstructive surgery because the initial traumatic event and its immediate treatment usually cannot be controlled. Patients who are candidates for scar revision procedures often present after significant loss of regional tissue, injury that crosses anatomically distinct facial aesthetic units, wound closure by personnel less experienced in plastic surgical technique, and poor post injury wound management [3,4]. While no scar can be removed completely, plastic surgeons can often improve the appearance of a scar, making it less obvious through the injection or application of certain steroid medications or through surgical procedures known as scar revisions.There are many variables affect the severity of scarring, including the size and depth of the wound, blood supply to the area, the thickness and color of your skin, and the direction of the scar [5,6].",
"title": ""
},
{
"docid": "neg:1840322_17",
"text": "Summary form only given. Existing studies on ensemble classifiers typically take a static approach in assembling individual classifiers, in which all the important features are specified in advance. In this paper, we propose a new concept, dynamic ensemble, as an advanced classifier that could have dynamic component classifiers and have dynamic configurations. Toward this goal, we have substantially expanded the existing \"overproduce and choose\" paradigm for ensemble construction. A new algorithm called BAGA is proposed to explore this approach. Taking a set of decision tree component classifiers as input, BAGA generates a set of candidate ensembles using combined bagging and genetic algorithm techniques so that component classifiers are determined at execution time. Empirical studies have been carried out on variations of the BAGA algorithm, where the sizes of chosen classifiers, effects of bag size, voting function and evaluation functions on the dynamic ensemble construction, are investigated.",
"title": ""
},
{
"docid": "neg:1840322_18",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://dv1litvip.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] |
1840323 | Game theory based mitigation of Interest flooding in Named Data Network | [
{
"docid": "pos:1840323_0",
"text": "Current Internet is reaching the limits of its capabilities due to its function transition from host-to-host communication to content dissemination. Named Data Networking (NDN) - an instantiation of Content-Centric Networking approach, embraces this shift by stressing the content itself, rather than where it locates. NDN tries to provide better security and privacy than current Internet does, and resilience to Distributed Denial of Service (DDoS) is a significant issue. In this paper, we present a specific and concrete scenario of DDoS attack in NDN, where perpetrators make use of NDN's packet forwarding rules to send out Interest packets with spoofed names as attacking packets. Afterwards, we identify the victims of NDN DDoS attacks include both the hosts and routers. But the largest victim is not the hosts, but the routers, more specifically, the Pending Interest Table (PIT) within the router. PIT brings NDN many elegant features, but it suffers from vulnerability. We propose Interest traceback as a counter measure against the studied NDN DDoS attacks, which traces back to the originator of the attacking Interest packets. At last, we assess the harmful consequences brought by these NDN DDoS attacks and evaluate the Interest traceback counter measure. Evaluation results reveal that the Interest traceback method effectively mitigates the NDN DDoS attacks studied in this paper.",
"title": ""
}
] | [
{
"docid": "neg:1840323_0",
"text": "BACKGROUND\nFor many years, high dose radiation therapy was the standard treatment for patients with locally or regionally advanced non-small-cell lung cancer (NSCLC), despite a 5-year survival rate of only 3%-10% following such therapy. From May 1984 through May 1987, the Cancer and Leukemia Group B (CALGB) conducted a randomized trial that showed that induction chemotherapy before radiation therapy improved survival during the first 3 years of follow-up.\n\n\nPURPOSE\nThis report provides data for 7 years of follow-up of patients enrolled in the CALGB trial.\n\n\nMETHODS\nThe patient population consisted of individuals who had clinical or surgical stage III, histologically documented NSCLC; a CALGB performance status of 0-1; less than 5% loss of body weight in the 3 months preceding diagnosis; and radiographically visible disease. Patients were randomly assigned to receive either 1) cisplatin (100 mg/m2 body surface area intravenously on days 1 and 29) and vinblastine (5 mg/m2 body surface area intravenously weekly on days 1, 8, 15, 22, and 29) followed by radiation therapy with 6000 cGy given in 30 fractions beginning on day 50 (CT-RT group) or 2) radiation therapy with 6000 cGy alone beginning on day 1 (RT group) for a maximum duration of 6-7 weeks. Patients were evaluated for tumor regression if they had measurable or evaluable disease and were monitored for toxic effects, disease progression, and date of death.\n\n\nRESULTS\nThere were 78 eligible patients randomly assigned to the CT-RT group and 77 randomly assigned to the RT group. Both groups were similar in terms of sex, age, histologic cell type, performance status, substage of disease, and whether staging had been clinical or surgical. All patients had measurable or evaluable disease at the time of random assignment to treatment groups. Both groups received a similar quantity and quality of radiation therapy. As previously reported, the rate of tumor response, as determined radiographically, was 56% for the CT-RT group and 43% for the RT group (P = .092). After more than 7 years of follow-up, the median survival remains greater for the CT-RT group (13.7 months) than for the RT group (9.6 months) (P = .012) as ascertained by the logrank test (two-sided). The percentages of patients surviving after years 1 through 7 were 54, 26, 24, 19, 17, 13, and 13 for the CT-RT group and 40, 13, 10, 7, 6, 6, and 6 for the RT group.\n\n\nCONCLUSIONS\nLong-term follow-up confirms that patients with stage III NSCLC who receive 5 weeks of chemotherapy with cisplatin and vinblastine before radiation therapy have a 4.1-month increase in median survival. The use of sequential chemotherapy-radiotherapy increases the projected proportion of 5-year survivors by a factor of 2.8 compared with that of radiotherapy alone. However, inasmuch as 80%-85% of such patients still die within 5 years and because treatment failure occurs both in the irradiated field and at distant sites in patients receiving either sequential chemotherapy-radiotherapy or radiotherapy alone, the need for further improvements in both the local and systemic treatment of this disease persists.",
"title": ""
},
{
"docid": "neg:1840323_1",
"text": "BACKGROUND\nAcne is a common condition seen in up to 80% of people between 11 and 30 years of age and in up to 5% of older adults. In some patients, it can result in permanent scars that are surprisingly difficult to treat. A relatively new treatment, termed skin needling (needle dermabrasion), seems to be appropriate for the treatment of rolling scars in acne.\n\n\nAIM\nTo confirm the usefulness of skin needling in acne scarring treatment.\n\n\nMETHODS\nThe present study was conducted from September 2007 to March 2008 at the Department of Systemic Pathology, University of Naples Federico II and the UOC Dermatology Unit, University of Rome La Sapienza. In total, 32 patients (20 female, 12 male patients; age range 17-45) with acne rolling scars were enrolled. Each patient was treated with a specific tool in two sessions. Using digital cameras, photos of all patients were taken to evaluate scar depth and, in five patients, silicone rubber was used to make a microrelief impression of the scars. The photographic data were analysed by using the sign test statistic (alpha < 0.05) and the data from the cutaneous casts were analysed by fast Fourier transformation (FFT).\n\n\nRESULTS\nAnalysis of the patient photographs, supported by the sign test and of the degree of irregularity of the surface microrelief, supported by FFT, showed that, after only two sessions, the severity grade of rolling scars in all patients was greatly reduced and there was an overall aesthetic improvement. No patient showed any visible signs of the procedure or hyperpigmentation.\n\n\nCONCLUSION\nThe present study confirms that skin needling has an immediate effect in improving acne rolling scars and has advantages over other procedures.",
"title": ""
},
{
"docid": "neg:1840323_2",
"text": "The Internet of Things (IoT) is a latest concept of machine-to-machine communication, that also gave birth to several information security problems. Many traditional software solutions fail to address these security issues such as trustworthiness of remote entities. Remote attestation is a technique given by Trusted Computing Group (TCG) to monitor and verify this trustworthiness. In this regard, various remote validation methods have been proposed. However, static techniques cannot provide resistance to recent attacks e.g. the latest Heartbleed bug, and the recent high profile glibc attack on Linux operating system. In this research, we have designed and implemented a lightweight Linux kernel security module for IoT devices that is scalable enough to monitor multiple applications in the kernel space. The newly built technique can measure and report multiple application’s static and dynamic behavior simultaneously. Verification of behavior of applications is performed via machine learning techniques. The result shows that deviating behavior can be detected successfully by the verifier.",
"title": ""
},
{
"docid": "neg:1840323_3",
"text": "We present a new algorithm for real-time hand tracking on commodity depth-sensing devices. Our method does not require a user-specific calibration session, but rather learns the geometry as the user performs live in front of the camera, thus enabling seamless virtual interaction at the consumer level. The key novelty in our approach is an online optimization algorithm that jointly estimates pose and shape in each frame, and determines the uncertainty in such estimates. This knowledge allows the algorithm to integrate per-frame estimates over time, and build a personalized geometric model of the captured user. Our approach can easily be integrated in state-of-the-art continuous generative motion tracking software. We provide a detailed evaluation that shows how our approach achieves accurate motion tracking for real-time applications, while significantly simplifying the workflow of accurate hand performance capture. We also provide quantitative evaluation datasets at http://gfx.uvic.ca/datasets/handy",
"title": ""
},
{
"docid": "neg:1840323_4",
"text": "Approximately thirty-four percent of people who experience acute low back pain (LBP) will have recurrent episodes. It remains unclear why some people experience recurrences and others do not, but one possible cause is a loss of normal control of the back muscles. We investigated whether the control of the short and long fibres of the deep back muscles was different in people with recurrent unilateral LBP from healthy participants. Recurrent unilateral LBP patients, who were symptom free during testing, and a group of healthy volunteers, participated. Intramuscular and surface electrodes recorded the electromyographic activity (EMG) of the short and long fibres of the lumbar multifidus and the shoulder muscle, deltoid, during a postural perturbation associated with a rapid arm movement. EMG onsets of the short and long fibres, relative to that of deltoid, were compared between groups, muscles, and sides. In association with a postural perturbation, short fibre EMG onset occurred later in participants with recurrent unilateral LBP than in healthy participants (p=0.022). The short fibres were active earlier than long fibres on both sides in the healthy participants (p<0.001) and on the non-painful side in the LBP group (p=0.045), but not on the previously painful side in the LBP group. Activity of deep back muscles is different in people with a recurrent unilateral LBP, despite the resolution of symptoms. Because deep back muscle activity is critical for normal spinal control, the current results provide the first evidence of a candidate mechanism for recurrent episodes.",
"title": ""
},
{
"docid": "neg:1840323_5",
"text": "We propose a novel extension of the encoder-decoder framework, called a review network. The review network is generic and can enhance any existing encoderdecoder model: in this paper, we consider RNN decoders with both CNN and RNN encoders. The review network performs a number of review steps with attention mechanism on the encoder hidden states, and outputs a thought vector after each review step; the thought vectors are used as the input of the attention mechanism in the decoder. We show that conventional encoder-decoders are a special case of our framework. Empirically, we show that our framework improves over state-ofthe-art encoder-decoder systems on the tasks of image captioning and source code captioning.1",
"title": ""
},
{
"docid": "neg:1840323_6",
"text": "Charging PEVs (Plug-In Electric Vehicles) at public fast charging station can improve the public acceptance and increase their penetration level by solving problems related to vehicles' battery. However, the price for the impact of fast charging stations on the distribution grid has to be dealt with. The main purpose of this paper is to investigate the impacts of fast charging stations on a distribution grid using a stochastic fast charging model and to present the charging model with some of its results. The model is used to investigate the impacts on distribution transformer loading and system bus voltage profiles of the test distribution grid. Stochastic and deterministic modelling approaches are also compared. It is concluded that fast charging stations affect transformer loading and system bus voltage profiles. Hence, necessary measures such as using local energy storage and voltage conditioning devices, such as SVC (Static Var Compensator), have to be used at the charging station to handle the problems. It is also illustrated that stochastic modelling approach can produce a more sound and realistic results than deterministic approach.",
"title": ""
},
{
"docid": "neg:1840323_7",
"text": "There is a universal standard for facial beauty regardless of race, age, sex and other variables. Beautiful faces have ideal facial proportion. Ideal proportion is directly related to divine proportion, and that proportion is 1 to 1.618. All living organisms, including humans, are genetically encoded to develop to this proportion because there are extreme esthetic and physiologic benefits. The vast majority of us are not perfectly proportioned because of environmental factors. Establishment of a universal standard for facial beauty will significantly simplify the diagnosis and treatment of facial disharmonies and abnormalities. More important, treating to this standard will maximize facial esthetics, TMJ health, psychologic and physiologic health, fertility, and quality of life.",
"title": ""
},
{
"docid": "neg:1840323_8",
"text": "BACKGROUND\nLymphangitic streaking, characterized by linear erythema on the skin, is most commonly observed in the setting of bacterial infection. However, a number of nonbacterial causes can result in lymphangitic streaking. We sought to elucidate the nonbacterial causes of lymphangitic streaking that may mimic bacterial infection to broaden clinicians' differential diagnosis for patients presenting with lymphangitic streaking.\n\n\nMETHODS\nWe performed a review of the literature, including all available reports pertaining to nonbacterial causes of lymphangitic streaking.\n\n\nRESULTS\nVarious nonbacterial causes can result in lymphangitic streaking, including viral and fungal infections, insect or spider bites, and iatrogenic etiologies.\n\n\nCONCLUSION\nAwareness of potential nonbacterial causes of superficial lymphangitis is important to avoid misdiagnosis and delay the administration of appropriate care.",
"title": ""
},
{
"docid": "neg:1840323_9",
"text": "The objective of this master thesis is to identify \" key-drivers \" embedded in customer satisfaction data. The data was collected by a large transportation sector corporation during five years and in four different countries. The questionnaire involved several different sections of questions and ranged from demographical information to satisfaction attributes with the vehicle, dealer and several problem areas. Various regression, correlation and cooperative game theory approaches were used to identify the key satisfiers and dissatisfiers. The theoretical and practical advantages of using the Shapley value, Canonical Correlation Analysis and Hierarchical Logistic Regression has been demonstrated and applied to market research. ii iii Acknowledgements",
"title": ""
},
{
"docid": "neg:1840323_10",
"text": "The Ponseti method for the management of idiopathic clubfoot has recently experienced a rise in popularity, with several centers reporting excellent outcomes. The challenge in achieving a successful outcome with this method lies not in correcting deformity but in preventing relapse. The most common cause of relapse is failure to adhere to the prescribed postcorrective bracing regimen. Socioeconomic status, cultural factors, and physician-parent communication may influence parental compliance with bracing. New, more user-friendly braces have been introduced in the hope of improving the rate of compliance. Strategies that may be helpful in promoting adherence include educating the family at the outset about the importance of bracing, encouraging calls and visits to discuss problems, providing clear written instructions, avoiding or promptly addressing skin problems, and refraining from criticism of the family when noncompliance is evident. A strong physician-family partnership and consideration of underlying cognitive, socioeconomic, and cultural issues may lead to improved adherence to postcorrective bracing protocols and better patient outcomes.",
"title": ""
},
{
"docid": "neg:1840323_11",
"text": "We show that a deep convolutional network with an architecture inspired by the models used in image recognition can yield accuracy similar to a long-short term memory (LSTM) network, which achieves the state-of-the-art performance on the standard Switchboard automatic speech recognition task. Moreover, we demonstrate that merging the knowledge in the CNN and LSTM models via model compression further improves the accuracy of the convolutional model.",
"title": ""
},
{
"docid": "neg:1840323_12",
"text": "Main memory capacities have grown up to a point where most databases fit into RAM. For main-memory database systems, index structure performance is a critical bottleneck. Traditional in-memory data structures like balanced binary search trees are not efficient on modern hardware, because they do not optimally utilize on-CPU caches. Hash tables, also often used for main-memory indexes, are fast but only support point queries. To overcome these shortcomings, we present ART, an adaptive radix tree (trie) for efficient indexing in main memory. Its lookup performance surpasses highly tuned, read-only search trees, while supporting very efficient insertions and deletions as well. At the same time, ART is very space efficient and solves the problem of excessive worst-case space consumption, which plagues most radix trees, by adaptively choosing compact and efficient data structures for internal nodes. Even though ART's performance is comparable to hash tables, it maintains the data in sorted order, which enables additional operations like range scan and prefix lookup.",
"title": ""
},
{
"docid": "neg:1840323_13",
"text": "The main purpose of Feature Subset Selection is to find a reduced subset of attributes from a data set described by a feature set. The task of a feature selection algorithm (FSA) is to provide with a computational solution motivated by a certain definition of relevance or by a reliable evaluation measure. In this paper several fundamental algorithms are studied to assess their performance in a controlled experimental scenario. A measure to evaluate FSAs is devised that computes the degree of matching between the output given by a FSA and the known optimal solutions. An extensive experimental study on synthetic problems is carried out to assess the behaviour of the algorithms in terms of solution accuracy and size as a function of the relevance, irrelevance, redundancy and size of the data samples. The controlled experimental conditions facilitate the derivation of better-supported and meaningful conclusions.",
"title": ""
},
{
"docid": "neg:1840323_14",
"text": "Social media use, potential and challenges in innovation have received little attention in literature, especially from the standpoint of the business-to-business sector. Therefore, this paper focuses on bridging this gap with a survey of social media use, potential and challenges, combined with a social media - focused innovation literature review of state-of-the-art. The study also studies the essential differences between business-to-consumer and business-to-business in the above respects. The paper starts by defining of social media and web 2.0, and then characterizes social media in business, social media in business-to-business sector and social media in business-to-business innovation. Finally we present and analyze the results of our empirical survey of 122 Finnish companies. This paper suggests that there is a significant gap between perceived potential of social media and social media use in innovation activity in business-to-business companies, recognizes potentially effective ways to reduce the gap, and clarifies the found differences between B2B's and B2C's.",
"title": ""
},
{
"docid": "neg:1840323_15",
"text": "This paper provides a comprehensive review of outcome studies and meta-analyses of effectiveness studies of psychodynamic therapy (PDT) for the major categories of mental disorders. Comparisons with inactive controls (waitlist, treatment as usual and placebo) generally but by no means invariably show PDT to be effective for depression, some anxiety disorders, eating disorders and somatic disorders. There is little evidence to support its implementation for post-traumatic stress disorder, obsessive-compulsive disorder, bulimia nervosa, cocaine dependence or psychosis. The strongest current evidence base supports relatively long-term psychodynamic treatment of some personality disorders, particularly borderline personality disorder. Comparisons with active treatments rarely identify PDT as superior to control interventions and studies are generally not appropriately designed to provide tests of statistical equivalence. Studies that demonstrate inferiority of PDT to alternatives exist, but are small in number and often questionable in design. Reviews of the field appear to be subject to allegiance effects. The present review recommends abandoning the inherently conservative strategy of comparing heterogeneous \"families\" of therapies for heterogeneous diagnostic groups. Instead, it advocates using the opportunities provided by bioscience and computational psychiatry to creatively explore and assess the value of protocol-directed combinations of specific treatment components to address the key problems of individual patients.",
"title": ""
},
{
"docid": "neg:1840323_16",
"text": "State-of-the-art distributed RDF systems partition data across multiple computer nodes (workers). Some systems perform cheap hash partitioning, which may result in expensive query evaluation. Others try to minimize inter-node communication, which requires an expensive data preprocessing phase, leading to a high startup cost. Apriori knowledge of the query workload has also been used to create partitions, which, however, are static and do not adapt to workload changes. In this paper, we propose AdPart, a distributed RDF system, which addresses the shortcomings of previous work. First, AdPart applies lightweight partitioning on the initial data, which distributes triples by hashing on their subjects; this renders its startup overhead low. At the same time, the locality-aware query optimizer of AdPart takes full advantage of the partitioning to (1) support the fully parallel processing of join patterns on subjects and (2) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting, wherever possible. Second, AdPart monitors the data access patterns and dynamically redistributes and replicates the instances of the most frequent ones among workers. As a result, the communication cost for future queries is drastically reduced or even eliminated. To control replication, AdPart implements an eviction policy for the redistributed patterns. Our experiments with synthetic and real data verify that AdPart: (1) starts faster than all existing systems; (2) processes thousands of queries before other systems become online; and (3) gracefully adapts to the query load, being able to evaluate queries on billion-scale RDF data in subseconds.",
"title": ""
},
{
"docid": "neg:1840323_17",
"text": "For urban driving, knowledge of ego-vehicle’s position is a critical piece of information that enables advanced driver-assistance systems or self-driving cars to execute safety-related, autonomous driving maneuvers. This is because, without knowing the current location, it is very hard to autonomously execute any driving maneuvers for the future. The existing solutions for localization rely on a combination of a Global Navigation Satellite System, an inertial measurement unit, and a digital map. However, in urban driving environments, due to poor satellite geometry and disruption of radio signal reception, their longitudinal and lateral errors are too significant to be used for an autonomous system. To enhance the existing system’s localization capability, this work presents an effort to develop a vision-based lateral localization algorithm. The algorithm aims at reliably counting, with or without observations of lane-markings, the number of road-lanes and identifying the index of the road-lane on the roadway upon which our vehicle happens to be driving. Tests of the proposed algorithms against intercity and interstate highway videos showed promising results in terms of counting the number of road-lanes and the indices of the current road-lanes. C © 2015 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "neg:1840323_18",
"text": "0957-4174/$ see front matter 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.eswa.2013.04.023 ⇑ Corresponding author. Tel.: +55 8197885665. E-mail addresses: rflm@cin.ufpe.br (Rafael Ferreira), lscabral@gmail.com (L. de Souza Cabral), rdl@cin.ufpe.br (R.D. Lins), gfps.cin@gmail.com (G. Pereira e Silva), fred@cin.ufpe.br (F. Freitas), gdcc@cin.ufpe.br (G.D.C. Cavalcanti), rjlima01@gmail. com (R. Lima), steven.simske@hp.com (S.J. Simske), luciano.favaro@hp.com (L. Favaro). Rafael Ferreira a,⇑, Luciano de Souza Cabral , Rafael Dueire Lins , Gabriel Pereira e Silva , Fred Freitas , George D.C. Cavalcanti , Rinaldo Lima , Steven J. Simske , Luciano Favaro c",
"title": ""
}
] |
1840324 | Scalable and Lightweight CTF Infrastructures Using Application Containers | [
{
"docid": "pos:1840324_0",
"text": "Security competitions have become a popular way to foster security education by creating a competitive environment in which participants go beyond the effort usually required in traditional security courses. Live security competitions (also called “Capture The Flag,” or CTF competitions) are particularly well-suited to support handson experience, as they usually have both an attack and a defense component. Unfortunately, because these competitions put several (possibly many) teams against one another, they are difficult to design, implement, and run. This paper presents a framework that is based on the lessons learned in running, for more than 10 years, the largest educational CTF in the world, called iCTF. The framework’s goal is to provide educational institutions and other organizations with the ability to run customizable CTF competitions. The framework is open and leverages the security community for the creation of a corpus of educational security challenges.",
"title": ""
}
] | [
{
"docid": "neg:1840324_0",
"text": "The occurrence of visually induced motion sickness has been frequently linked to the sensation of illusory self-motion (vection), however, the precise nature of this relationship is still not fully understood. To date, it is still a matter of debate as to whether vection is a necessary prerequisite for visually induced motion sickness (VIMS). That is, can there be VIMS without any sensation of self-motion? In this paper, we will describe the possible nature of this relationship, review the literature that addresses this relationship (including theoretical accounts of vection and VIMS), and offer suggestions with respect to operationally defining and reporting these phenomena in future.",
"title": ""
},
{
"docid": "neg:1840324_1",
"text": "Global path planning for mobile robot using genetic algorithm and A* algorithm is investigated in this paper. The proposed algorithm includes three steps: the MAKLINK graph theory is adopted to establish the free space model of mobile robots firstly, then Dijkstra algorithm is utilized for finding a feasible collision-free path, finally the global optimal path of mobile robots is obtained based on the hybrid algorithm of A* algorithm and genetic algorithm. Experimental results indicate that the proposed algorithm has better performance than Dijkstra algorithm in term of both solution quality and computational time, and thus it is a viable approach to mobile robot global path planning.",
"title": ""
},
{
"docid": "neg:1840324_2",
"text": "Poker is an interesting test-bed for artificial intelligence research. It is a game of imperfect knowledge, where multiple competing agents must deal with risk management, agent modeling, unreliable information and deception, much like decision-making applications in the real world. Agent modeling is one of the most difficult problems in decision-making applications and in poker it is essential to achieving high performance. This paper describes and evaluates Loki, a poker program capable of observing its opponents, constructing opponent models and dynamically adapting its play to best exploit patterns in the opponents’ play.",
"title": ""
},
{
"docid": "neg:1840324_3",
"text": "Previous researchers studying baseball pitching have compared kinematic and kinetic parameters among different types of pitches, focusing on the trunk, shoulder, and elbow. The lack of data on the wrist and forearm limits the understanding of clinicians, coaches, and researchers regarding the mechanics of baseball pitching and the differences among types of pitches. The purpose of this study was to expand existing knowledge of baseball pitching by quantifying and comparing kinematic data of the wrist and forearm for the fastball (FA), curveball (CU) and change-up (CH) pitches. Kinematic and temporal parameters were determined from 8 collegiate pitchers recorded with a four-camera system (200 Hz). Although significant differences were observed for all pitch comparisons, the least number of differences occurred between the FA and CH. During arm cocking, peak wrist extension for the FA and CH pitches was greater than for the CU, while forearm supination was greater for the CU. In contrast to the current study, previous comparisons of kinematic data for trunk, shoulder, and elbow revealed similarities between the FA and CU pitches and differences between the FA and CH pitches. Kinematic differences among pitches depend on the segment of the body studied.",
"title": ""
},
{
"docid": "neg:1840324_4",
"text": "Audit regulators and the auditing profession have responded to this expectation by issuing a number of standards outlining auditors’ responsibilities to detect fraud (e.g., PCAOB 2010; IAASB 2009, PCAOB 2002; AICPA 2002; AICPA 1997; AICPA 1988). These standards indicate that auditors are responsible for providing reasonable assurance that audited financial statements are free of material misstatements due to fraud. Nonetheless, prior research indicates that auditors detect relatively few significant frauds (Dyck et al. 2010, KPMG 2009). This finding raises the obvious question: Why do auditors rarely detect fraud?",
"title": ""
},
{
"docid": "neg:1840324_5",
"text": "Nowadays, universities offer most of their services using corporate website. In higher education services including admission services, a university needs to always provide excellent service to ensure student candidate satisfaction. To obtain student candidate satisfaction apart from the quality of education must also be accompanied by providing consultation services and information to them. This paper proposes the development of Chatbot which acts as a conversation agent that can play a role of as student candidate service. This Chatbot is called Dinus Intelligent Assistance (DINA). DINA uses knowledge based as a center for machine learning approach. The pattern extracted from the knowledge based can be used to provide responses to the user. The source of knowledge based is taken from Universitas Dian Nuswantoro (UDINUS) guest book. It contains of questions and answers about UDINUS admission services. Testing of this system is done by entering questions. From 166 intents, the author tested it using ten random sample questions. Among them, it got eight tested questions answered correctly. Therefore, by using this study we can develop further intelligent Chatbots to help student candidates find the information they need without waiting for the admission staffs's answer.",
"title": ""
},
{
"docid": "neg:1840324_6",
"text": "A compact multiple-input-multiple-output (MIMO) antenna with a small size of 26×40 mm2 is proposed for portable ultrawideband (UWB) applications. The antenna consists of two planar-monopole (PM) antenna elements with microstrip-fed printed on one side of the substrate and placed perpendicularly to each other to achieve good isolation. To enhance isolation and increase impedance bandwidth, two long protruding ground stubs are added to the ground plane on the other side and a short ground strip is used to connect the ground planes of the two PMs together to form a common ground. Simulation and measurement are used to study the antenna performance in terms of reflection coefficients at the two input ports, coupling between the two input ports, radiation pattern, realized peak gain, efficiency and envelope correlation coefficient for pattern diversity. Results show that the MIMO antenna has an impedance bandwidth of larger than 3.1-10.6 GHz, low mutual coupling of less than -15 dB, and a low envelope correlation coefficient of less than 0.2 across the frequency band, making it a good candidate for portable UWB applications.",
"title": ""
},
{
"docid": "neg:1840324_7",
"text": "With the increasing computational power of computers, software design systems are progressing from being tools enabling architects and designers to express their ideas, to tools capable of creating designs under human guidance. One of the main limitations for these computer-automated design systems is the representation with which they encode designs. If the representation cannot encode a certain design, then the design system cannot produce it. To be able to produce new types of designs, and not just optimize pre-defined parameterizations, evolutionary design systems must use generative representations. Generative representations are assembly procedures, or algorithms, for constructing a design, thereby allowing for truly novel design solutions to be encoded. In addition, by enabling modularity, regularity and hierarchy, the level of sophistication that can be evolved is increased. We demonstrate the advantages of generative representations on two different design domains: the evolution of spacecraft antennas and the evolution of 3D solid objects.",
"title": ""
},
{
"docid": "neg:1840324_8",
"text": "Depression's influence on mother-infant interactious at 2 months postpartum was studied in 24 depressed and 22 nondepressed mothex-infant dyads. Depression was diagnosed using the SADS-L and RDC. In S's homes, structured interactions of 3 min duration were videotaped and later coded using behavioral descriptors and a l-s time base. Unstructured interactions were described using rating scales. During structured interactions, depressed mothers were more negative and their babies were less positive than were nondepressed dyads. The reduced positivity of depressed dyads was achieved through contingent resixmfiveness. Ratings from unstructured interactions were consistent with these findings. Results support the hypothesis that depression negatively influences motherinfant behaviol; but indicate that influence may vary with development, chronicity, and presence of other risk factors.",
"title": ""
},
{
"docid": "neg:1840324_9",
"text": "A compact design of a circularly-polarized microstrip antenna in order to achieve dual-band behavior for Radio Frequency Identification (RFID) applications is presented, defected ground structure (DGS) technique is used to miniaturize and get a dual-band antenna, the entire size is 38×40×1.58 mm3. This antenna was designed to cover both ultra-height frequency (740MHz ~ 1GHz) and slow height frequency (2.35 GHz ~ 2.51GHz), return loss <; -10 dB, the 3-dB axial ratio bandwidths are about 110 MHz at the lower band (900 MHz).",
"title": ""
},
{
"docid": "neg:1840324_10",
"text": "This paper proposes an architecture for an open-domain conversational system and evaluates an implemented system. The proposed architecture is fully composed of modules based on natural language processing techniques. Experimental results using human subjects show that our architecture achieves significantly better naturalness than a retrieval-based baseline and that its naturalness is close to that of a rule-based system using 149K hand-crafted rules.",
"title": ""
},
{
"docid": "neg:1840324_11",
"text": "At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model’s behavior. Assumed in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model’s behavior, precision to how accurate humans are in those predictions, and effort is either the up-front effort required in interpreting the model, or the effort required to make predictions about a model’s behavior.",
"title": ""
},
{
"docid": "neg:1840324_12",
"text": "BACKGROUND\nOsteoarthritis (OA), a chronic degenerative disease of synovial joints is characterised by pain and stiffness. Aim of treatment is pain relief. Complementary and alternative medicine (CAM) refers to practices which are not an integral part of orthodox medicine.\n\n\nAIMS AND OBJECTIVES\nTo determine the pattern of usage of CAM among OA patients in Nigeria.\n\n\nPATIENTS AND METHODS\nConsecutive patients with OA attending orthopaedic clinic of Havana Specialist Hospital, Lagos, Nigeria were interviewed over a 6- month period st st of 1 May to 31 October 2007 on usage of CAM. Structured and open-ended questions were used. Demographic data, duration of OA and treatment as well as compliance to orthodox medications were documented.\n\n\nRESULTS\nOne hundred and sixty four patients were studied.120 (73.25%) were females and 44(26.89%) were males. Respondents age range between 35-74 years. 66(40.2%) patients used CAM. 35(53.0%) had done so before presenting to the hospital. The most commonly used CAM were herbal products used by 50(75.8%) of CAM users. Among herbal product users, 74.0% used non- specific local products, 30.0% used ginger, 36.0% used garlic and 28.0% used Aloe Vera. Among CAM users, 35(53.0%) used local embrocation and massage, 10(15.2%) used spiritual methods. There was no significant difference in demographics, clinical characteristics and pain control among CAM users and non-users.\n\n\nCONCLUSION\nMany OA patients receiving orthodox therapy also use CAM. Medical doctors need to keep a wary eye on CAM usage among patients and enquire about this health-seeking behaviour in order to educate them on possible drug interactions, adverse effects and long term complications.",
"title": ""
},
{
"docid": "neg:1840324_13",
"text": "The set of minutia points is considered to be the most distinctive feature for fingerprint representation and is widely used in fingerprint matching. It was believed that the minutiae set does not contain sufficient information to reconstruct the original fingerprint image from which minutiae were extracted. However, recent studies have shown that it is indeed possible to reconstruct fingerprint images from their minutiae representations. Reconstruction techniques demonstrate the need for securing fingerprint templates, improving the template interoperability, and improving fingerprint synthesis. But, there is still a large gap between the matching performance obtained from original fingerprint images and their corresponding reconstructed fingerprint images. In this paper, the prior knowledge about fingerprint ridge structures is encoded in terms of orientation patch and continuous phase patch dictionaries to improve the fingerprint reconstruction. The orientation patch dictionary is used to reconstruct the orientation field from minutiae, while the continuous phase patch dictionary is used to reconstruct the ridge pattern. Experimental results on three public domain databases (FVC2002 DB1_A, FVC2002 DB2_A, and NIST SD4) demonstrate that the proposed reconstruction algorithm outperforms the state-of-the-art reconstruction algorithms in terms of both: 1) spurious minutiae and 2) matching performance with respect to type-I attack (matching the reconstructed fingerprint against the same impression from which minutiae set was extracted) and type-II attack (matching the reconstructed fingerprint against a different impression of the same finger).",
"title": ""
},
{
"docid": "neg:1840324_14",
"text": "Information Extraction is the process of automatically obtaining knowledge from plain text. Because of the ambiguity of written natural language, Information Extraction is a difficult task. Ontology-based Information Extraction (OBIE) reduces this complexity by including contextual information in the form of a domain ontology. The ontology provides guidance to the extraction process by providing concepts and relationships about the domain. However, OBIE systems have not been widely adopted because of the difficulties in deployment and maintenance. The Ontology-based Components for Information Extraction (OBCIE) architecture has been proposed as a form to encourage the adoption of OBIE by promoting reusability through modularity. In this paper, we propose two orthogonal extensions to OBCIE that allow the construction of hybrid OBIE systems with higher extraction accuracy and a new functionality. The first extension utilizes OBCIE modularity to integrate different types of implementation into one extraction system, producing a more accurate extraction. For each concept or relationship in the ontology, we can select the best implementation for extraction, or we can combine both implementations under an ensemble learning schema. The second extension is a novel ontology-based error detection mechanism. Following a heuristic approach, we can identify sentences that are logically inconsistent with the domain ontology. Because the implementation strategy for the extraction of a concept is independent of the functionality of the extraction, we can design a hybrid OBIE system with concepts utilizing different implementation strategies for extracting correct or incorrect sentences. Our evaluation shows that, in the implementation extension, our proposed method is more accurate in terms of correctness and completeness of the extraction. Moreover, our error detection method can identify incorrect statements with a high accuracy.",
"title": ""
},
{
"docid": "neg:1840324_15",
"text": "The success of deep neural networks (DNNs) is heavily dependent on the availability of labeled data. However, obtaining labeled data is a big challenge in many real-world problems. In such scenarios, a DNN model can leverage labeled and unlabeled data from a related domain, but it has to deal with the shift in data distributions between the source and the target domains. In this paper, we study the problem of classifying social media posts during a crisis event (e.g., Earthquake). For that, we use labeled and unlabeled data from past similar events (e.g., Flood) and unlabeled data for the current event. We propose a novel model that performs adversarial learning based domain adaptation to deal with distribution drifts and graph based semi-supervised learning to leverage unlabeled data within a single unified deep learning framework. Our experiments with two real-world crisis datasets collected from Twitter demonstrate significant improvements over several baselines.",
"title": ""
},
{
"docid": "neg:1840324_16",
"text": "Electric motor and power electronics-based inverter are the major components in industrial and automotive electric drives. In this paper, we present a model-based fault diagnostics system developed using a machine learning technology for detecting and locating multiple classes of faults in an electric drive. Power electronics inverter can be considered to be the weakest link in such a system from hardware failure point of view; hence, this work is focused on detecting faults and finding which switches in the inverter cause the faults. A simulation model has been developed based on the theoretical foundations of electric drives to simulate the normal condition, all single-switch and post-short-circuit faults. A machine learning algorithm has been developed to automatically select a set of representative operating points in the (torque, speed) domain, which in turn is sent to the simulated electric drive model to generate signals for the training of a diagnostic neural network, fault diagnostic neural network (FDNN). We validated the capability of the FDNN on data generated by an experimental bench setup. Our research demonstrates that with a robust machine learning approach, a diagnostic system can be trained based on a simulated electric drive model, which can lead to a correct classification of faults over a wide operating domain.",
"title": ""
},
{
"docid": "neg:1840324_17",
"text": "The oxysterol receptor LXR is a key transcriptional regulator of lipid metabolism. LXR increases expression of SREBP-1, which in turn regulates at least 32 genes involved in lipid synthesis and transport. We recently identified 25-hydroxycholesterol-3-sulfate (25HC3S) as an important regulatory molecule in the liver. We have now studied the effects of 25HC3S and its precursor, 25-hydroxycholesterol (25HC), on lipid metabolism as mediated by the LXR/SREBP-1 signaling in macrophages. Addition of 25HC3S to human THP-1-derived macrophages markedly decreased nuclear LXR protein levels. 25HC3S administration was followed by dose- and time-dependent decreases in SREBP-1 mature protein and mRNA levels. 25HC3S decreased the expression of SREBP-1-responsive genes, acetyl-CoA carboxylase-1, and fatty acid synthase (FAS) as well as HMGR and LDLR, which are key proteins involved in lipid metabolism. Subsequently, 25HC3S decreased intracellular lipids and increased cell proliferation. In contrast to 25HC3S, 25HC acted as an LXR ligand, increasing ABCA1, ABCG1, SREBP-1, and FAS mRNA levels. In the presence of 25HC3S, 25HC, and LXR agonist T0901317, stimulation of LXR targeting gene expression was repressed. We conclude that 25HC3S acts in macrophages as a cholesterol satiety signal, downregulating cholesterol and fatty acid synthetic pathways via inhibition of LXR/SREBP signaling. A possible role of oxysterol sulfation is proposed.",
"title": ""
},
{
"docid": "neg:1840324_18",
"text": "A simple way to mitigate the potential negative side-effects associated with chemical lysis of a blood clot is to tear its fibrin network via mechanical rubbing using a helical robot. Here, we achieve mechanical rubbing of blood clots under ultrasound guidance and using external magnetic actuation. Position of the helical robot is determined using ultrasound feedback and used to control its motion toward the clot, whereas the volume of the clots is estimated simultaneously using visual feedback. We characterize the shear modulus and ultimate shear strength of the blood clots to predict their removal rate during rubbing. Our <italic>in vitro</italic> experiments show the ability to move the helical robot controllably toward clots using ultrasound feedback with average and maximum errors of <inline-formula> <tex-math notation=\"LaTeX\">${\\text{0.84}\\pm \\text{0.41}}$</tex-math></inline-formula> and 2.15 mm, respectively, and achieve removal rate of <inline-formula><tex-math notation=\"LaTeX\">$-\\text{0.614} \\pm \\text{0.303}$</tex-math> </inline-formula> mm<inline-formula><tex-math notation=\"LaTeX\">$^{3}$</tex-math></inline-formula>/min at room temperature (<inline-formula><tex-math notation=\"LaTeX\">${\\text{25}}^{\\circ }$</tex-math></inline-formula>C) and <inline-formula><tex-math notation=\"LaTeX\">$-\\text{0.482} \\pm \\text{0.23}$</tex-math></inline-formula> mm <inline-formula><tex-math notation=\"LaTeX\">$^{3}$</tex-math></inline-formula>/min at body temperature (37 <inline-formula><tex-math notation=\"LaTeX\">$^{\\circ}$</tex-math></inline-formula>C), under the influence of two rotating dipole fields at frequency of 35 Hz. We also validate the effectiveness of mechanical rubbing by measuring the number of red blood cells and platelets past the clot. Our measurements show that rubbing achieves cell count of <inline-formula><tex-math notation=\"LaTeX\">$(\\text{46} \\pm \\text{10.9}) \\times \\text{10}^{4}$</tex-math> </inline-formula> cell/ml, whereas the count in the absence of rubbing is <inline-formula><tex-math notation=\"LaTeX\"> $(\\text{2} \\pm \\text{1.41}) \\times \\text{10}^{4}$</tex-math></inline-formula> cell/ml, after 40 min.",
"title": ""
},
{
"docid": "neg:1840324_19",
"text": "In the past 20 years, there has been a great advancement in knowledge pertaining to compliance with amblyopia treatments. The occlusion dose monitor introduced quantitative monitoring methods in patching, which sparked our initial understanding of the dose-response relationship for patching amblyopia treatment. This review focuses on current compliance knowledge and the impact it has on patching and atropine amblyopia treatment.",
"title": ""
}
] |
1840325 | Structured Sequence Modeling with Graph Convolutional Recurrent Networks | [
{
"docid": "pos:1840325_0",
"text": "Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function tau(G,n) isin IRm that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.",
"title": ""
}
] | [
{
"docid": "neg:1840325_0",
"text": "Inspired by the progress of deep neural network (DNN) in single-media retrieval, the researchers have applied the DNN to cross-media retrieval. These methods are mainly two-stage learning: the first stage is to generate the separate representation for each media type, and the existing methods only model the intra-media information but ignore the inter-media correlation with the rich complementary context to the intra-media information. The second stage is to get the shared representation by learning the cross-media correlation, and the existing methods learn the shared representation through a shallow network structure, which cannot fully capture the complex cross-media correlation. For addressing the above problems, we propose the cross-media multiple deep network (CMDN) to exploit the complex cross-media correlation by hierarchical learning. In the first stage, CMDN jointly models the intra-media and intermedia information for getting the complementary separate representation of each media type. In the second stage, CMDN hierarchically combines the inter-media and intra-media representations to further learn the rich cross-media correlation by a deeper two-level network strategy, and finally get the shared representation by a stacked network style. Experiment results show that CMDN achieves better performance comparing with several state-of-the-art methods on 3 extensively used cross-media datasets.",
"title": ""
},
{
"docid": "neg:1840325_1",
"text": "Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.",
"title": ""
},
{
"docid": "neg:1840325_2",
"text": "INTRODUCTION\nThe adequate use of compression in venous leg ulcer treatment is equally important to patients as well as clinicians. Currently, there is a lack of clarity on contraindications, risk factors, adverse events and complications, when applying compression therapy for venous leg ulcer patients.\n\n\nMETHODS\nThe project aimed to optimize prevention, treatment and maintenance approaches by recognizing contraindications, risk factors, adverse events and complications, when applying compression therapy for venous leg ulcer patients. A literature review was conducted of current guidelines on venous leg ulcer prevention, management and maintenance.\n\n\nRESULTS\nSearches took place from 29th February 2016 to 30th April 2016 and were prospectively limited to publications in the English and German languages and publication dates were between January 2009 and April 2016. Twenty Guidelines, clinical pathways and consensus papers on compression therapy for venous leg ulcer treatment and for venous disease, were included. Guidelines agreed on the following absolute contraindications: Arterial occlusive disease, heart failure and ankle brachial pressure index (ABPI) <0.5, but gave conflicting recommendations on relative contraindications, risks and adverse events. Moreover definitions were unclear and not consistent.\n\n\nCONCLUSIONS\nEvidence-based guidance is needed to inform clinicians on risk factor, adverse effects, complications and contraindications. ABPI values need to be specified and details should be given on the type of compression that is safe to use. Ongoing research challenges the present recommendations, shifting some contraindications into a list of potential indications. Complications of compression can be prevented when adequate assessment is performed and clinicians are skilled in applying compression.",
"title": ""
},
{
"docid": "neg:1840325_3",
"text": "Healthcare consumers, researchers, patients and policy makers increasingly use systematic reviews (SRs) to aid their decision-making process. However, the conduct of SRs can be a time-consuming and resource-intensive task. Often, clinical practice guideline developers or other decision-makers need to make informed decisions in a timely fashion (e.g. outbreaks of infection, hospital-based health technology assessments). Possible approaches to address the issue of timeliness in the production of SRs are to (a) implement process parallelisation, (b) adapt and apply innovative technologies, and/or (c) modify SR processes (e.g. study eligibility criteria, search sources, data extraction or quality assessment). Highly parallelised systematic reviewing requires substantial resources to support a team of experienced information specialists, reviewers and methodologists working alongside with clinical content experts to minimise the time for completing individual review steps while maximising the parallel progression of multiple steps. Effective coordination and management within the team and across external stakeholders are essential elements of this process. Emerging innovative technologies have a great potential for reducing workload and improving efficiency of SR production. The most promising areas of application would be to allow automation of specific SR tasks, in particular if these tasks are time consuming and resource intensive (e.g. language translation, study selection, data extraction). Modification of SR processes involves restricting, truncating and/or bypassing one or more SR steps, which may risk introducing bias to the review findings. Although the growing experiences in producing various types of rapid reviews (RR) and the accumulation of empirical studies exploring potential bias associated with specific SR tasks have contributed to the methodological development for expediting SR production, there is still a dearth of research examining the actual impact of methodological modifications and comparing the findings between RRs and SRs. This evidence would help to inform as to which SR tasks can be accelerated or truncated and to what degree, while maintaining the validity of review findings. Timely delivered SRs can be of value in informing healthcare decisions and recommendations, especially when there is practical urgency and there is no other relevant synthesised evidence.",
"title": ""
},
{
"docid": "neg:1840325_4",
"text": "OBJECTIVES\nGiven the large-scale adoption and deployment of mobile phones by health services and frontline health workers (FHW), we aimed to review and synthesise the evidence on the feasibility and effectiveness of mobile-based services for healthcare delivery.\n\n\nMETHODS\nFive databases - MEDLINE, EMBASE, Global Health, Google Scholar and Scopus - were systematically searched for relevant peer-reviewed articles published between 2000 and 2013. Data were extracted and synthesised across three themes as follows: feasibility of use of mobile tools by FHWs, training required for adoption of mobile tools and effectiveness of such interventions.\n\n\nRESULTS\nForty-two studies were included in this review. With adequate training, FHWs were able to use mobile phones to enhance various aspects of their work activities. Training of FHWs to use mobile phones for healthcare delivery ranged from a few hours to about 1 week. Five key thematic areas for the use of mobile phones by FHWs were identified as follows: data collection and reporting, training and decision support, emergency referrals, work planning through alerts and reminders, and improved supervision of and communication between healthcare workers. Findings suggest that mobile based data collection improves promptness of data collection, reduces error rates and improves data completeness. Two methodologically robust studies suggest that regular access to health information via SMS or mobile-based decision-support systems may improve the adherence of the FHWs to treatment algorithms. The evidence on the effectiveness of the other approaches was largely descriptive and inconclusive.\n\n\nCONCLUSIONS\nUse of mHealth strategies by FHWs might offer some promising approaches to improving healthcare delivery; however, the evidence on the effectiveness of such strategies on healthcare outcomes is insufficient.",
"title": ""
},
{
"docid": "neg:1840325_5",
"text": "In this paper, we propose an extractive multi-document summarization (MDS) system using joint optimization and active learning for content selection grounded in user feedback. Our method interactively obtains user feedback to gradually improve the results of a state-of-the-art integer linear programming (ILP) framework for MDS. Our methods complement fully automatic methods in producing highquality summaries with a minimum number of iterations and feedbacks. We conduct multiple simulation-based experiments and analyze the effect of feedbackbased concept selection in the ILP setup in order to maximize the user-desired content in the summary.",
"title": ""
},
{
"docid": "neg:1840325_6",
"text": "OBJECTIVES\nWe hypothesized reduction of 30 days' in-hospital morbidity, mortality, and length of stay postimplementation of the World Health Organization's Surgical Safety Checklist (SSC).\n\n\nBACKGROUND\nReductions of morbidity and mortality have been reported after SSC implementation in pre-/postdesigned studies without controls. Here, we report a randomized controlled trial of the SSC.\n\n\nMETHODS\nA stepped wedge cluster randomized controlled trial was conducted in 2 hospitals. We examined effects on in-hospital complications registered by International Classification of Diseases, Tenth Revision codes, length of stay, and mortality. The SSC intervention was sequentially rolled out in a random order until all 5 clusters-cardiothoracic, neurosurgery, orthopedic, general, and urologic surgery had received the Checklist. Data were prospectively recorded in control and intervention stages during a 10-month period in 2009-2010.\n\n\nRESULTS\nA total of 2212 control procedures were compared with 2263 SCC procedures. The complication rates decreased from 19.9% to 11.5% (P < 0.001), with absolute risk reduction 8.4 (95% confidence interval, 6.3-10.5) from the control to the SSC stages. Adjusted for possible confounding factors, the SSC effect on complications remained significant with odds ratio 1.95 (95% confidence interval, 1.59-2.40). Mean length of stay decreased by 0.8 days with SCC utilization (95% confidence interval, 0.11-1.43). In-hospital mortality decreased significantly from 1.9% to 0.2% in 1 of the 2 hospitals post-SSC implementation, but the overall reduction (1.6%-1.0%) across hospitals was not significant.\n\n\nCONCLUSIONS\nImplementation of the WHO SSC was associated with robust reduction in morbidity and length of in-hospital stay and some reduction in mortality.",
"title": ""
},
{
"docid": "neg:1840325_7",
"text": "While predictions abound that electronic books will supplant traditional paper-based books, many people bemoan the coming loss of the book as cultural artifact. In this project we deliberately keep the affordances of paper books while adding electronic augmentation. The Listen Reader combines the look and feel of a real book - a beautiful binding, paper pages and printed images and text - with the rich, evocative quality of a movie soundtrack. The book's multi-layered interactive soundtrack consists of music and sound effects. Electric field sensors located in the book binding sense the proximity of the reader's hands and control audio parameters, while RFID tags embedded in each page allow fast, robust page identification.\nThree different Listen Readers were built as part of a six-month museum exhibit, with more than 350,000 visitors. This paper discusses design, implementation, and lessons learned through the iterative design process, observation, and visitor interviews.",
"title": ""
},
{
"docid": "neg:1840325_8",
"text": "State-of-the-art natural language processing systems rely on supervision in the form of annotated data to learn competent models. These models are generally trained on data in a single language (usually English), and cannot be directly used beyond that language. Since collecting data in every language is not realistic, there has been a growing interest in crosslingual language understanding (XLU) and low-resource cross-language transfer. In this work, we construct an evaluation set for XLU by extending the development and test sets of the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15 languages, including low-resource languages such as Swahili and Urdu. We hope that our dataset, dubbed XNLI, will catalyze research in cross-lingual sentence understanding by providing an informative standard evaluation task. In addition, we provide several baselines for multilingual sentence understanding, including two based on machine translation systems, and two that use parallel data to train aligned multilingual bag-of-words and LSTM encoders. We find that XNLI represents a practical and challenging evaluation suite, and that directly translating the test data yields the best performance among available baselines.",
"title": ""
},
{
"docid": "neg:1840325_9",
"text": "In this paper, we study the human locomotor adaptation to the action of a powered exoskeleton providing assistive torque at the user's hip during walking. To this end, we propose a controller that provides the user's hip with a fraction of the nominal torque profile, adapted to the specific gait features of the user from Winter's reference data . The assistive controller has been implemented on the ALEX II exoskeleton and tested on ten healthy subjects. Experimental results show that when assisted by the exoskeleton, users can reduce the muscle effort compared to free walking. Despite providing assistance only to the hip joint, both hip and ankle muscles significantly reduced their activation, indicating a clear tradeoff between hip and ankle strategy to propel walking.",
"title": ""
},
{
"docid": "neg:1840325_10",
"text": "INTRODUCTION\nPreliminary research has indicated that recreational ketamine use may be associated with marked cognitive impairments and elevated psychopathological symptoms, although no study to date has determined how these are affected by differing frequencies of use or whether they are reversible on cessation of use. In this study we aimed to determine how variations in ketamine use and abstention from prior use affect neurocognitive function and psychological wellbeing.\n\n\nMETHOD\nWe assessed a total of 150 individuals: 30 frequent ketamine users, 30 infrequent ketamine users, 30 ex-ketamine users, 30 polydrug users and 30 controls who did not use illicit drugs. Cognitive tasks included spatial working memory, pattern recognition memory, the Stockings of Cambridge (a variant of the Tower of London task), simple vigilance and verbal and category fluency. Standardized questionnaires were used to assess psychological wellbeing. Hair analysis was used to verify group membership.\n\n\nRESULTS\nFrequent ketamine users were impaired on spatial working memory, pattern recognition memory, Stockings of Cambridge and category fluency but exhibited preserved verbal fluency and prose recall. There were no differences in the performance of the infrequent ketamine users or ex-users compared to the other groups. Frequent users showed increased delusional, dissociative and schizotypal symptoms which were also evident to a lesser extent in infrequent and ex-users. Delusional symptoms correlated positively with the amount of ketamine used currently by the frequent users.\n\n\nCONCLUSIONS\nFrequent ketamine use is associated with impairments in working memory, episodic memory and aspects of executive function as well as reduced psychological wellbeing. 'Recreational' ketamine use does not appear to be associated with distinct cognitive impairments although increased levels of delusional and dissociative symptoms were observed. As no performance decrements were observed in the ex-ketamine users, it is possible that the cognitive impairments observed in the frequent ketamine group are reversible upon cessation of ketamine use, although delusional symptoms persist.",
"title": ""
},
{
"docid": "neg:1840325_11",
"text": "A fold-back current-limit circuit, with load-insensitive quiescent current characteristic for CMOS low dropout regulator (LDO), is proposed in this paper. This method has been designed in 0.35 µm CMOS technology and verified by Hspice simulation. The quiescent current of the LDO is 5.7 µA at 100-mA load condition. It is only 2.2% more than it in no-load condition, 5.58 µA. The maximum current limit is set to be 197 mA, and the short-current limit is 77 mA. Thus, the power consumption can be saved up to 61% at the short-circuit condition, which also decreases the risk of damaging the power transistor. Moreover, the thermal protection can be simplified and the LDO will be more reliable.",
"title": ""
},
{
"docid": "neg:1840325_12",
"text": "This paper presents and studies various selected literature primarily from conference proceedings, journals and clinical tests of the robotic, mechatronics, neurology and biomedical engineering of rehabilitation robotic systems. The present paper focuses of three main categories: types of rehabilitation robots, key technologies with current issues and future challenges. Literature on fundamental research with some examples from commercialized robots and new robot development projects related to rehabilitation are introduced. Most of the commercialized robots presented in this paper are well known especially to robotics engineers and scholars in the robotic field, but are less known to humanities scholars. The field of rehabilitation robot research is expanding; in light of this, some of the current issues and future challenges in rehabilitation robot engineering are recalled, examined and clarified with future directions. This paper is concluded with some recommendations with respect to rehabilitation robots.",
"title": ""
},
{
"docid": "neg:1840325_13",
"text": "With the tremendous popularity of PDF format, recognizing mathematical formulas in PDF documents becomes a new and important problem in document analysis field. In this paper, we present a method of embedded mathematical formula identification in PDF documents, based on Support Vector Machine (SVM). The method first segments text lines into words, and then classifies each word into two classes, namely formula or ordinary text. Various features of embedded formulas, including geometric layout, character and context content, are utilized to build a robust and adaptable SVM classifier. Embedded formulas are then extracted through merging the words labeled as formulas. Experimental results show good performance of the proposed method. Furthermore, the method has been successfully incorporated into a commercial software package for large-scale e-Book production.",
"title": ""
},
{
"docid": "neg:1840325_14",
"text": "In this paper we present our approach to automatically identify the subjectivity, polarity and irony of Italian Tweets. Our system which reaches and outperforms the state of the art in Italian is well adapted for different domains since it uses abstract word features instead of bag of words. We also present experiments carried out to study how Italian Sentiment Analysis systems react to domain changes. We show that bag of words approaches commonly used in Sentiment Analysis do not adapt well to domain changes.",
"title": ""
},
{
"docid": "neg:1840325_15",
"text": "In the context of structural optimization we propose a new numerical method based on a combination of the classical shape derivative and of the level-set method for front propagation. We implement this method in two and three space dimensions for a model of linear or nonlinear elasticity. We consider various objective functions with weight and perimeter constraints. The shape derivative is computed by an adjoint method. The cost of our numerical algorithm is moderate since the shape is captured on a fixed Eulerian mesh. Although this method is not specifically designed for topology optimization, it can easily handle topology changes. However, the resulting optimal shape is strongly dependent on the initial guess. 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840325_16",
"text": "Today, a large number of audio features exists in audio retrieval for different purposes, such as automatic speech recognition, music information retrieval, audio segmentation, and environmental sound retrieval. The goal of this paper is to review latest research in the context of audio feature extraction and to give an application-independent overview of the most important existing techniques. We survey state-of-the-art features from various domains and propose a novel taxonomy for the organization of audio features. Additionally, we identify the building blocks of audio features and propose a scheme that allows for the description of arbitrary features. We present an extensive literature survey and provide more than 200 references to relevant high quality publications.",
"title": ""
},
{
"docid": "neg:1840325_17",
"text": "The authors examined how an applicant's handshake influences hiring recommendations formed during the employment interview. A sample of 98 undergraduate students provided personality measures and participated in mock interviews during which the students received ratings of employment suitability. Five trained raters independently evaluated the quality of the handshake for each participant. Quality of handshake was related to interviewer hiring recommendations. Path analysis supported the handshake as mediating the effect of applicant extraversion on interviewer hiring recommendations, even after controlling for differences in candidate physical appearance and dress. Although women received lower ratings for the handshake, they did not on average receive lower assessments of employment suitability. Exploratory analysis suggested that the relationship between a firm handshake and interview ratings may be stronger for women than for men.",
"title": ""
},
{
"docid": "neg:1840325_18",
"text": "We introduce a novel joint sparse representation based multi-view automatic target recognition (ATR) method, which can not only handle multi-view ATR without knowing the pose but also has the advantage of exploiting the correlations among the multiple views of the same physical target for a single joint recognition decision. Extensive experiments have been carried out on moving and stationary target acquisition and recognition (MSTAR) public database to evaluate the proposed method compared with several state-of-the-art methods such as linear support vector machine (SVM), kernel SVM, as well as a sparse representation based classifier (SRC). Experimental results demonstrate that the proposed joint sparse representation ATR method is very effective and performs robustly under variations such as multiple joint views, depression, azimuth angles, target articulations, as well as configurations.",
"title": ""
},
{
"docid": "neg:1840325_19",
"text": "High resolution magnetic resonance (MR) imaging is desirable in many clinical applications due to its contribution to more accurate subsequent analyses and early clinical diagnoses. Single image super resolution (SISR) is an effective and cost efficient alternative technique to improve the spatial resolution of MR images. In the past few years, SISR methods based on deep learning techniques, especially convolutional neural networks (CNNs), have achieved state-of-the-art performance on natural images. However, the information is gradually weakened and training becomes increasingly difficult as the network deepens. The problem is more serious for medical images because lacking high quality and effective training samples makes deep models prone to underfitting or overfitting. Nevertheless, many current models treat the hierarchical features on different channels equivalently, which is not helpful for the models to deal with the hierarchical features discriminatively and targetedly. To this end, we present a novel channel splitting network (CSN) to ease the representational burden of deep models. The proposed CSN model divides the hierarchical features into two branches, i.e., residual branch and dense branch, with different information transmissions. The residual branch is able to promote feature reuse, while the dense branch is beneficial to the exploration of new features. Besides, we also adopt the merge-and-run mapping to facilitate information integration between different branches. Extensive experiments on various MR images, including proton density (PD), T1 and T2 images, show that the proposed CSN model achieves superior performance over other state-of-the-art SISR methods.",
"title": ""
}
] |
1840326 | Printflatables: Printing Human-Scale, Functional and Dynamic Inflatable Objects | [
{
"docid": "pos:1840326_0",
"text": "This paper presents preliminary results for the design, development and evaluation of a hand rehabilitation glove fabricated using soft robotic technology. Soft actuators comprised of elastomeric materials with integrated channels that function as pneumatic networks (PneuNets), are designed and geometrically analyzed to produce bending motions that can safely conform with the human finger motion. Bending curvature and force response of these actuators are investigated using geometrical analysis and a finite element model (FEM) prior to fabrication. The fabrication procedure of the chosen actuator is described followed by a series of experiments that mechanically characterize the actuators. The experimental data is compared to results obtained from FEM simulations showing good agreement. Finally, an open-palm glove design and the integration of the actuators to it are described, followed by a qualitative evaluation study.",
"title": ""
}
] | [
{
"docid": "neg:1840326_0",
"text": "Platelet-rich plasma (PRP) has been utilized for many years as a regenerative agent capable of inducing vascularization of various tissues using blood-derived growth factors. Despite this, drawbacks mostly related to the additional use of anti-coagulants found in PRP have been shown to inhibit the wound healing process. For these reasons, a novel platelet concentrate has recently been developed with no additives by utilizing lower centrifugation speeds. The purpose of this study was therefore to investigate osteoblast behavior of this novel therapy (injectable-platelet-rich fibrin; i-PRF, 100% natural with no additives) when compared to traditional PRP. Human primary osteoblasts were cultured with either i-PRF or PRP and compared to control tissue culture plastic. A live/dead assay, migration assay as well as a cell adhesion/proliferation assay were investigated. Furthermore, osteoblast differentiation was assessed by alkaline phosphatase (ALP), alizarin red and osteocalcin staining, as well as real-time PCR for genes encoding Runx2, ALP, collagen1 and osteocalcin. The results showed that all cells had high survival rates throughout the entire study period irrespective of culture-conditions. While PRP induced a significant 2-fold increase in osteoblast migration, i-PRF demonstrated a 3-fold increase in migration when compared to control tissue-culture plastic and PRP. While no differences were observed for cell attachment, i-PRF induced a significantly higher proliferation rate at three and five days when compared to PRP. Furthermore, i-PRF induced significantly greater ALP staining at 7 days and alizarin red staining at 14 days. A significant increase in mRNA levels of ALP, Runx2 and osteocalcin, as well as immunofluorescent staining of osteocalcin was also observed in the i-PRF group when compared to PRP. In conclusion, the results from the present study favored the use of the naturally-formulated i-PRF when compared to traditional PRP with anti-coagulants. Further investigation into the direct role of fibrin and leukocytes contained within i-PRF are therefore warranted to better elucidate their positive role in i-PRF on tissue wound healing.",
"title": ""
},
{
"docid": "neg:1840326_1",
"text": "Enterprise resource planning (ERP) systems have been widely implemented by numerous firms throughout the industrial world. While success stories of ERP implementation abound due to its potential in resolving the problem of fragmented information, a substantial number of these implementations fail to meet the goals of the organization. Some are abandoned altogether and others contribute to the failure of an organization. This article seeks to identify the critical factors of ERP implementation and uses statistical analysis to further delineate the patterns of adoption of the various concepts. A cross-sectional mail survey was mailed to business executives who have experience in the implementation of ERP systems. The results of this study provide empirical evidence that the theoretical constructs of ERP implementation are followed at varying levels. It offers some fresh insights into the current practice of ERP implementation. In addition, this study fills the need for ERP implementation constructs that can be utilized for further study of this important topic.",
"title": ""
},
{
"docid": "neg:1840326_2",
"text": "[Context] It is an enigma that agile projects can succeed ‘without requirements’ when weak requirements engineering is a known cause for project failures. While agile development projects often manage well without extensive requirements test cases are commonly viewed as requirements and detailed requirements are documented as test cases. [Objective] We have investigated this agile practice of using test cases as requirements to understand how test cases can support the main requirements activities, and how this practice varies. [Method] We performed an iterative case study at three companies and collected data through 14 interviews and 2 focus groups. [Results] The use of test cases as requirements poses both benefits and challenges when eliciting, validating, verifying, and managing requirements, and when used as a documented agreement. We have identified five variants of the test-cases-as-requirements practice, namely de facto, behaviour-driven, story-test driven, stand-alone strict and stand-alone manual for which the application of the practice varies concerning the time frame of requirements documentation, the requirements format, the extent to which the test cases are a machine executable specification and the use of tools which provide specific support for the practice of using test cases as requirements. [Conclusions] The findings provide empirical insight into how agile development projects manage and communicate requirements. The identified variants of the practice of using test cases as requirements can be used to perform in-depth investigations into agile requirements engineering. Practitioners can use the provided recommendations as a guide in designing and improving their agile requirements practices based on project characteristics such as number of stakeholders and rate of change.",
"title": ""
},
{
"docid": "neg:1840326_3",
"text": "A very lightweight, broad-band, dual polarized antenna array with 128 elements for the frequency range from 7 GHz to 18 GHz has been designed, manufactured and measured. The total gain at the center frequency was measured to be 20 dBi excluding feeding network losses.",
"title": ""
},
{
"docid": "neg:1840326_4",
"text": "When people work together to analyze a data set, they need to organize their findings, hypotheses, and evidence, share that information with their collaborators, and coordinate activities amongst team members. Sharing externalizations (recorded information such as notes) could increase awareness and assist with team communication and coordination. However, we currently know little about how to provide tool support for this sort of sharing. We explore how linked common work (LCW) can be employed within a `collaborative thinking space', to facilitate synchronous collaborative sensemaking activities in Visual Analytics (VA). Collaborative thinking spaces provide an environment for analysts to record, organize, share and connect externalizations. Our tool, CLIP, extends earlier thinking spaces by integrating LCW features that reveal relationships between collaborators' findings. We conducted a user study comparing CLIP to a baseline version without LCW. Results demonstrated that LCW significantly improved analytic outcomes at a collaborative intelligence task. Groups using CLIP were also able to more effectively coordinate their work, and held more discussion of their findings and hypotheses. LCW enabled them to maintain awareness of each other's activities and findings and link those findings to their own work, preventing disruptive oral awareness notifications.",
"title": ""
},
{
"docid": "neg:1840326_5",
"text": "Internet of Things (IoT) is one of the greatest technology revolutions in the history. Due to IoT potential, daily objects will be consciously worked in harmony with optimized performances. However, today, technology is not ready to fully bring its power to our daily life because of huge data analysis requirements in instant time. On the other hand, the powerful data management of cloud computing gives IoT an opportunity to make the revolution in our life. However, the traditional cloud computing server schedulers are not ready to provide services to IoT because IoT consists of a number of heterogeneous devices and applications which are far away from standardization. Therefore, to meet the expectations of users, the traditional cloud computing server schedulers should be improved to efficiently schedule and allocate IoT requests. There are several proposed scheduling algorithms for cloud computing in the literature. However, these scheduling algorithms are limited because of considering neither heterogeneous servers nor dynamic scheduling approach for different priority requests. Our objective is to propose Husnu S. Narman husnu@ou.edu 1 Holcombe Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, 29634, USA 2 Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Zahir Raihan Rd, Dhaka, 1000, Bangladesh 3 School of Computer Science, University of Oklahoma, Norman, OK, 73019, USA dynamic dedicated server scheduling for heterogeneous and homogeneous systems to efficiently provide desired services by considering priorities of requests. Results show that the proposed scheduling algorithm improves throughput up to 40 % in heterogeneous and homogeneous cloud computing systems for IoT requests. Our proposed scheduling algorithm and related analysis will help cloud service providers build efficient server schedulers which are adaptable to homogeneous and heterogeneous environments byconsidering systemperformancemetrics, such as drop rate, throughput, and utilization in IoT.",
"title": ""
},
{
"docid": "neg:1840326_6",
"text": "While blockchain services hold great promise to improve many different industries, there are significant cybersecurity concerns which must be addressed. In this paper, we investigate security considerations for an Ethereum blockchain hosting a distributed energy management application. We have simulated a microgrid with ten buildings in the northeast U.S., and results of the transaction distribution and electricity utilization are presented. We also present the effects on energy distribution when one or two smart meters have their identities corrupted. We then propose a new approach to digital identity management that would require smart meters to authenticate with the blockchain ledger and mitigate identity-spoofing attacks. Applications of this approach to defense against port scans and DDoS, attacks are also discussed.",
"title": ""
},
{
"docid": "neg:1840326_7",
"text": "We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no changes to the model architecture from a standard NMT system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. Using a shared wordpiece vocabulary, our approach enables Multilingual NMT systems using a single model. On the WMT’14 benchmarks, a single multilingual model achieves comparable performance for English→French and surpasses state-of-theart results for English→German. Similarly, a single multilingual model surpasses state-of-the-art results for French→English and German→English on WMT’14 and WMT’15 benchmarks, respectively. On production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. Our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation is possible for neural translation. Finally, we show analyses that hints at a universal interlingua representation in our models and also show some interesting examples when mixing languages.",
"title": ""
},
{
"docid": "neg:1840326_8",
"text": "Neuropathic pain is a debilitating form of chronic pain that affects 6.9-10% of the population. Health-related quality-of-life is impeded by neuropathic pain, which not only includes physical impairment, but the mental wellbeing of the patient is also hindered. A reduction in both physical and mental wellbeing bares economic costs that need to be accounted for. A variety of medications are in use for the treatment of neuropathic pain, such as calcium channel α2δ agonists, serotonin/noradrenaline reuptake inhibitors and tricyclic antidepressants. However, recent studies have indicated a lack of efficacy regarding the aforementioned medication. There is increasing clinical and pre-clinical evidence that can point to the use of ketamine, an “old” anaesthetic, in the management of neuropathic pain. Conversely, to see ketamine being used in neuropathic pain, there needs to be more conclusive evidence exploring the long-term effects of sub-anesthetic ketamine.",
"title": ""
},
{
"docid": "neg:1840326_9",
"text": "Heart disease diagnosis is a complex task which requires much experience and knowledge. Traditional way of predicting Heart disease is doctor’s examination or number of medical tests such as ECG, Stress Test, and Heart MRI etc. Nowadays, Health care industry contains huge amount of heath care data, which contains hidden information. This hidden information is useful for making effective decisions. Computer based information along with advanced Data mining techniques are used for appropriate results. Neural network is widely used tool for predicting Heart disease diagnosis. In this research paper, a Heart Disease Prediction system (HDPS) is developed using Neural network. The HDPS system predicts the likelihood of patient getting a Heart disease. For prediction, the system uses sex, blood pressure, cholesterol like 13 medical parameters. Here two more parameters are added i.e. obesity and smoking for better accuracy. From the results, it has been seen that neural network predict heart disease with nearly 100% accuracy.",
"title": ""
},
{
"docid": "neg:1840326_10",
"text": "BACKGROUND\nPatient portals tied to provider electronic health record (EHR) systems are increasingly popular.\n\n\nPURPOSE\nTo systematically review the literature reporting the effect of patient portals on clinical care.\n\n\nDATA SOURCES\nPubMed and Web of Science searches from 1 January 1990 to 24 January 2013.\n\n\nSTUDY SELECTION\nHypothesis-testing or quantitative studies of patient portals tethered to a provider EHR that addressed patient outcomes, satisfaction, adherence, efficiency, utilization, attitudes, and patient characteristics, as well as qualitative studies of barriers or facilitators, were included.\n\n\nDATA EXTRACTION\nTwo reviewers independently extracted data and addressed discrepancies through consensus discussion.\n\n\nDATA SYNTHESIS\nFrom 6508 titles, 14 randomized, controlled trials; 21 observational, hypothesis-testing studies; 5 quantitative, descriptive studies; and 6 qualitative studies were included. Evidence is mixed about the effect of portals on patient outcomes and satisfaction, although they may be more effective when used with case management. The effect of portals on utilization and efficiency is unclear, although patient race and ethnicity, education level or literacy, and degree of comorbid conditions may influence use.\n\n\nLIMITATION\nLimited data for most outcomes and an absence of reporting on organizational and provider context and implementation processes.\n\n\nCONCLUSION\nEvidence that patient portals improve health outcomes, cost, or utilization is insufficient. Patient attitudes are generally positive, but more widespread use may require efforts to overcome racial, ethnic, and literacy barriers. Portals represent a new technology with benefits that are still unclear. Better understanding requires studies that include details about context, implementation factors, and cost.",
"title": ""
},
{
"docid": "neg:1840326_11",
"text": "The authors have developed an adaptive matched filtering algorithm based upon an artificial neural network (ANN) for QRS detection. They use an ANN adaptive whitening filter to model the lower frequencies of the electrocardiogram (ECG) which are inherently nonlinear and nonstationary. The residual signal which contains mostly higher frequency QRS complex energy is then passed through a linear matched filter to detect the location of the QRS complex. The authors developed an algorithm to adaptively update the matched filter template from the detected QRS complex in the ECG signal itself so that the template can be customized to an individual subject. This ANN whitening filter is very effective at removing the time-varying, nonlinear noise characteristic of ECG signals. The detection rate for a very noisy patient record in the MIT/BIH arrhythmia database is 99.5% with this approach, which compares favorably to the 97.5% obtained using a linear adaptive whitening filter and the 96.5% achieved with a bandpass filtering method.<<ETX>>",
"title": ""
},
{
"docid": "neg:1840326_12",
"text": "Organizations place a great deal of emphasis on hiring individuals who are a good fit for the organization and the job. Among the many ways that individuals are screened for a job, the employment interview is particularly prevalent and nearly universally used (Macan, 2009; Huffcutt and Culbertson, 2011). This Research Topic is devoted to a construct that plays a critical role in our understanding of job interviews: impression management (IM). In the interview context, IM describes behaviors an individual uses to influence the impression that others have of them (Bozeman and Kacmar, 1997). For instance, a job applicant can flatter an interviewer to be seen as likable (i.e., ingratiation), play up their qualifications and abilities to be seen as competent (i.e., self-promotion), or utilize excuses or justifications to make up for a negative event or error (i.e., defensive IM; Ellis et al., 2002). IM has emerged as a central theme in the interview literature over the last several decades (for reviews, see Posthuma et al., 2002; Levashina et al., 2014). Despite some pioneering early work (e.g., Schlenker, 1980; Leary and Kowalski, 1990; Stevens and Kristof, 1995), there has been a resurgence of interest in the area over the last decade. While the literature to date has set up a solid foundational knowledge about interview IM, there are a number of emerging trends and directions. In the following, we lay out some critical areas of inquiry in interview IM, and highlight how the innovative set of papers in this Research Topic is illustrative of these new directions.",
"title": ""
},
{
"docid": "neg:1840326_13",
"text": "In this paper, we present results of a study of the data rate fairness among nodes within a LoRaWAN cell. Since LoRa/LoRaWAN supports various data rates, we firstly derive the fairest ratios of deploying each data rate within a cell for a fair collision probability. LoRa/LoRaWan, like other frequency modulation based radio interfaces, exhibits the capture effect in which only the stronger signal of colliding signals will be extracted. This leads to unfairness, where far nodes or nodes experiencing higher attenuation are less likely to see their packets received correctly. Therefore, we secondly develop a transmission power control algorithm to balance the received signal powers from all nodes regardless of their distances from the gateway for a fair data extraction. Simulations show that our approach achieves higher fairness in data rate than the state-of-art in almost all network configurations.",
"title": ""
},
{
"docid": "neg:1840326_14",
"text": "Data from 1,010 lactating lactating, predominately component-fed Holstein cattle from 25 predominately tie-stall dairy farms in southwest Ontario were used to identify objective thresholds for defining hyperketonemia in lactating dairy cattle based on negative impacts on cow health, milk production, or both. Serum samples obtained during wk 1 and 2 postpartum and analyzed for beta-hydroxybutyrate (BHBA) concentrations that were used in analysis. Data were time-ordered so that the serum samples were obtained at least 1 d before the disease or milk recording events. Serum BHBA cutpoints were constructed at 200 micromol/L intervals between 600 and 2,000 micromol/L. Critical cutpoints for the health analysis were determined based on the threshold having the greatest sum of sensitivity and specificity for predicting the disease occurrence. For the production outcomes, models for first test day milk yield, milk fat, and milk protein percentage were constructed including covariates of parity, precalving body condition score, season of calving, test day linear score, and the random effect of herd. Each cutpoint was tested in these models to determine the threshold with the greatest impact and least risk of a type 1 error. Serum BHBA concentrations at or above 1,200 micromol/L in the first week following calving were associated with increased risks of subsequent displaced abomasum [odds ratio (OR) = 2.60] and metritis (OR = 3.35), whereas the critical threshold of BHBA in wk 2 postpartum on the risk of abomasal displacement was >or=1,800 micromol/L (OR = 6.22). The best threshold for predicting subsequent risk of clinical ketosis from serum obtained during wk 1 and wk 2 postpartum was 1,400 micromol/L of BHBA (OR = 4.25 and 5.98, respectively). There was no association between clinical mastitis and elevated serum BHBA in wk 1 or 2 postpartum, and there was no association between wk 2 BHBA and risk of metritis. Greater serum BHBA measured during the first and second week postcalving were associated with less milk yield, greater milk fat percentage, and less milk protein percentage on the first Dairy Herd Improvement test day of lactation. Impacts on first Dairy Herd Improvement test milk yield began at BHBA >or=1,200 micromol/L for wk 1 samples and >or=1,400 micromol/L for wk 2 samples. The greatest impact on yield occurred at 1,400 micromol/L (-1.88 kg/d) and 2,000 micromol/L (-3.3 kg/d) for sera from the first and second week postcalving, respectively. Hyperketonemia can be defined at 1,400 micromol/L of BHBA and in the first 2 wk postpartum increases disease risk and results in substantial loss of milk yield in early lactation.",
"title": ""
},
{
"docid": "neg:1840326_15",
"text": "Privacy and security are two important but seemingly contradictory objectives in a pervasive computing environment (PCE). On one hand, service providers want to authenticate legitimate users and make sure they are accessing their authorized services in a legal way. On the other hand, users want to maintain the necessary privacy without being tracked down for wherever they are and whatever they are doing. In this paper, a novel privacy preserving authentication and access control scheme to secure the interactions between mobile users and services in PCEs is proposed. The proposed scheme seamlessly integrates two underlying cryptographic primitives, namely blind signature and hash chain, into a highly flexible and lightweight authentication and key establishment protocol. The scheme provides explicit mutual authentication between a user and a service while allowing the user to anonymously interact with the service. Differentiated service access control is also enabled in the proposed scheme by classifying mobile users into different service groups. The correctness of the proposed authentication and key establishment protocol is formally verified based on Burrows-Abadi-Needham logic",
"title": ""
},
{
"docid": "neg:1840326_16",
"text": "Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. However, making these methods practical and scalable to web-scale recommendation tasks with billions of items and hundreds of millions of users remains an unsolved challenge. Here we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest. We develop a data-efficient Graph Convolutional Network (GCN) algorithm, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure as well as node feature information. Compared to prior GCN approaches, we develop a novel method based on highly efficient random walks to structure the convolutions and design a novel training strategy that relies on harder-and-harder training examples to improve robustness and convergence of the model. We also develop an efficient MapReduce model inference algorithm to generate embeddings using a trained model. Overall, we can train on and embed graphs that are four orders of magnitude larger than typical GCN implementations. We show how GCN embeddings can be used to make high-quality recommendations in various settings at Pinterest, which has a massive underlying graph with 3 billion nodes representing pins and boards, and 17 billion edges. According to offline metrics, user studies, as well as A/B tests, our approach generates higher-quality recommendations than comparable deep learning based systems. To our knowledge, this is by far the largest application of deep graph embeddings to date and paves the way for a new generation of web-scale recommender systems based on graph convolutional architectures.",
"title": ""
},
{
"docid": "neg:1840326_17",
"text": "This study investigates high frequency currency trading with neural networks trained via Recurrent Reinforcement Learning (RRL). We compare the performance of single layer networks with networks having a hidden layer, and examine the impact of the fixed system parameters on performance. In general, we conclude that the trading systems may be effective, but the performance varies widely for different currency markets and this variability cannot be explained by simple statistics of the markets. Also we find that the single layer network outperforms the two layer network in this application.",
"title": ""
},
{
"docid": "neg:1840326_18",
"text": "We propose a general framework for learning from labeled and unlabeled data on a directed graph in which the structure of the graph including the directionality of the edges is considered. The time complexity of the algorithm derived from this framework is nearly linear due to recently developed numerical techniques. In the absence of labeled instances, this framework can be utilized as a spectral clustering method for directed graphs, which generalizes the spectral clustering approach for undirected graphs. We have applied our framework to real-world web classification problems and obtained encouraging results.",
"title": ""
}
] |
1840327 | An effective voting method for circle detection | [
{
"docid": "pos:1840327_0",
"text": "We introduce the Adaptive Hough Transform, AHT, as an efficient way of implementing the Hough Transform, HT, method for the detection of 2-D shapes. The AHT uses a small accumulator array and the idea of a flexible iterative \"coarse to fine\" accumulation and search strategy to identify significant peaks in the Hough parameter spaces. The method is substantially superior to the standard HT implementation in both storage and computational requirements. In this correspondence we illustrate the ideas of the AHT by tackling the problem of identifying linear and circular segments in images by searching for clusters of evidence in 2-D parameter spaces. We show that the method is robust to the addition of extraneous noise and can be used to analyze complex images containing more than one shape.",
"title": ""
}
] | [
{
"docid": "neg:1840327_0",
"text": "Online game is an increasingly popular source of entertainment for all ages, with relatively prevalent negative consequences. Addiction is a problem that has received much attention. This research aims to develop a measure of online game addiction for Indonesian children and adolescents. The Indonesian Online Game Addiction Questionnaire draws from earlier theories and research on the internet and game addiction. Its construction is further enriched by including findings from qualitative interviews and field observation to ensure appropriate expression of the items. The measure consists of 7 items with a 5-point Likert Scale. It is validated by testing 1,477 Indonesian junior and senior high school students from several schools in Manado, Medan, Pontianak, and Yogyakarta. The validation evidence is shown by item-total correlation and criterion validity. The Indonesian Online Game Addiction Questionnaire has good item-total correlation (ranging from 0.29 to 0.55) and acceptable reliability (α = 0.73). It is also moderately correlated with the participant's longest time record to play online games (r = 0.39; p<0.01), average days per week in playing online games (ρ = 0.43; p<0.01), average hours per days in playing online games (ρ = 0.41; p<0.01), and monthly expenditure for online games (ρ = 0.30; p<0.01). Furthermore, we created a clinical cut-off estimate by combining criteria and population norm. The clinical cut-off estimate showed that the score of 14 to 21 may indicate mild online game addiction, and the score of 22 and above may indicate online game addiction. Overall, the result shows that Indonesian Online Game Addiction Questionnaire has sufficient psychometric property for research use, as well as limited clinical application.",
"title": ""
},
{
"docid": "neg:1840327_1",
"text": "Our object oriented programming approach have great ability to improve the programming behavior for modern system and software engineering but it does not give the proper interaction of real world .In real world , programming required powerful interlinking among properties and characteristics towards the various objects. Basically this approach of programming gives the better presentation of object with real world and provide the better relationship among the objects. I have explained the new concept of my neuro object oriented approach .This approach contains many new features like originty , new concept of inheritance , new concept of encapsulation , object relation with dimensions , originty relation with dimensions and time , category of NOOPA like high order thinking object and low order thinking object , differentiation model for achieving the various requirements from the user and a rotational model .",
"title": ""
},
{
"docid": "neg:1840327_2",
"text": "Taxi demand prediction is an important building block to enabling intelligent transportation systems in a smart city. An accurate prediction model can help the city pre-allocate resources to meet travel demand and to reduce empty taxis on streets which waste energy and worsen the traffic congestion. With the increasing popularity of taxi requesting services such as Uber and Didi Chuxing (in China), we are able to collect large-scale taxi demand data continuously. How to utilize such big data to improve the demand prediction is an interesting and critical real-world problem. Traditional demand prediction methods mostly rely on time series forecasting techniques, which fail to model the complex non-linear spatial and temporal relations. Recent advances in deep learning have shown superior performance on traditionally challenging tasks such as image classification by learning the complex features and correlations from largescale data. This breakthrough has inspired researchers to explore deep learning techniques on traffic prediction problems. However, existing methods on traffic prediction have only considered spatial relation (e.g., using CNN) or temporal relation (e.g., using LSTM) independently. We propose a Deep Multi-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial and temporal relations. Specifically, our proposed model consists of three views: temporal view (modeling correlations between future demand values with near time points via LSTM), spatial view (modeling local spatial correlation via local CNN), and semantic view (modeling correlations among regions sharing similar temporal patterns). Experiments on large-scale real taxi demand data demonstrate effectiveness of our approach over state-ofthe-art methods.",
"title": ""
},
{
"docid": "neg:1840327_3",
"text": "Computational Thinking (CT) has become popular in recent years and has been recognised as an essential skill for all, as members of the digital age. Many researchers have tried to define CT and have conducted studies about this topic. However, CT literature is at an early stage of maturity, and is far from either explaining what CT is, or how to teach and assess this skill. In the light of this state of affairs, the purpose of this study is to examine the purpose, target population, theoretical basis, definition, scope, type and employed research design of selected papers in the literature that have focused on computational thinking, and to provide a framework about the notion, scope and elements of CT. In order to reveal the literature and create the framework for computational thinking, an inductive qualitative content analysis was conducted on 125 papers about CT, selected according to pre-defined criteria from six different databases and digital libraries. According to the results, the main topics covered in the papers composed of activities (computerised or unplugged) that promote CT in the curriculum. The targeted population of the papers was mainly K-12. Gamed-based learning and constructivism were the main theories covered as the basis for CT papers. Most of the papers were written for academic conferences and mainly composed of personal views about CT. The study also identified the most commonly used words in the definitions and scope of CT, which in turn formed the framework of CT. The findings obtained in this study may not only be useful in the exploration of research topics in CT and the identification of CT in the literature, but also support those who need guidance for developing tasks or programs about computational thinking and informatics.",
"title": ""
},
{
"docid": "neg:1840327_4",
"text": "PURPOSE\nTo summarize the literature addressing subthreshold or nondamaging retinal laser therapy (NRT) for central serous chorioretinopathy (CSCR) and to discuss results and trends that provoke further investigation.\n\n\nMETHODS\nAnalysis of current literature evaluating NRT with micropulse or continuous wave lasers for CSCR.\n\n\nRESULTS\nSixteen studies including 398 patients consisted of retrospective case series, prospective nonrandomized interventional case series, and prospective randomized clinical trials. All studies but one evaluated chronic CSCR, and laser parameters varied greatly between studies. Mean central macular thickness decreased, on average, by ∼80 μm by 3 months. Mean best-corrected visual acuity increased, on average, by about 9 letters by 3 months, and no study reported a decrease in acuity below presentation. No retinal complications were observed with the various forms of NRT used, but six patients in two studies with micropulse laser experienced pigmentary changes in the retinal pigment epithelium attributed to excessive laser settings.\n\n\nCONCLUSION\nBased on the current evidence, NRT demonstrates efficacy and safety in 12-month follow-up in patients with chronic and possibly acute CSCR. The NRT would benefit from better standardization of the laser settings and understanding of mechanisms of action, as well as further prospective randomized clinical trials.",
"title": ""
},
{
"docid": "neg:1840327_5",
"text": "Transfer printing represents a set of techniques for deterministic assembly of micro-and nanomaterials into spatially organized, functional arrangements with two and three-dimensional layouts. Such processes provide versatile routes not only to test structures and vehicles for scientific studies but also to high-performance, heterogeneously integrated functional systems, including those in flexible electronics, three-dimensional and/or curvilinear optoelectronics, and bio-integrated sensing and therapeutic devices. This article summarizes recent advances in a variety of transfer printing techniques, ranging from the mechanics and materials aspects that govern their operation to engineering features of their use in systems with varying levels of complexity. A concluding section presents perspectives on opportunities for basic and applied research, and on emerging use of these methods in high throughput, industrial-scale manufacturing.",
"title": ""
},
{
"docid": "neg:1840327_6",
"text": "A controller for a quadratic buck converter is given using average current-mode control. The converter has two filters; thus, it will exhibit fourth-order characteristic dynamics. The proposed scheme employs an inner loop that uses the current of the first inductor. This current can also be used for overload protection; therefore, the full benefits of current-mode control are maintained. For the outer loop, a conventional controller which provides good regulation characteristics is used. The design-oriented analytic results allow the designer to easily pinpoint the control circuit parameters that optimize the converter's performance. Experimental results are given for a 28 W switching regulator where current-mode control and voltage-mode control are compared.",
"title": ""
},
{
"docid": "neg:1840327_7",
"text": "This paper presents a conceptual framework for security engineering, with a strong focus on security requirements elicitation and analysis. This conceptual framework establishes a clear-cut vocabulary and makes explicit the interrelations between the different concepts and notions used in security engineering. Further, we apply our conceptual framework to compare and evaluate current security requirements engineering approaches, such as the Common Criteria, Secure Tropos, SREP, MSRA, as well as methods based on UML and problem frames. We review these methods and assess them according to different criteria, such as the general approach and scope of the method, its validation, and quality assurance capabilities. Finally, we discuss how these methods are related to the conceptual framework and to one another.",
"title": ""
},
{
"docid": "neg:1840327_8",
"text": "We introduce an interactive method to assess cataracts in the human eye by crafting an optical solution that measures the perceptual impact of forward scattering on the foveal region. Current solutions rely on highly-trained clinicians to check the back scattering in the crystallin lens and test their predictions on visual acuity tests. Close-range parallax barriers create collimated beams of light to scan through sub-apertures, scattering light as it strikes a cataract. User feedback generates maps for opacity, attenuation, contrast and sub-aperture point-spread functions. The goal is to allow a general audience to operate a portable high-contrast light-field display to gain a meaningful understanding of their own visual conditions. User evaluations and validation with modified camera optics are performed. Compiled data is used to reconstruct the individual's cataract-affected view, offering a novel approach for capturing information for screening, diagnostic, and clinical analysis.",
"title": ""
},
{
"docid": "neg:1840327_9",
"text": "The claustrum has been proposed as a possible neural candidate for the coordination of conscious experience due to its extensive ‘connectome’. Herein we propose that the claustrum contributes to consciousness by supporting the temporal integration of cortical oscillations in response to multisensory input. A close link between conscious awareness and interval timing is suggested by models of consciousness and conjunctive changes in meta-awareness and timing in multiple contexts and conditions. Using the striatal beatfrequency model of interval timing as a framework, we propose that the claustrum integrates varying frequencies of neural oscillations in different sensory cortices into a coherent pattern that binds different and overlapping temporal percepts into a unitary conscious representation. The proposed coordination of the striatum and claustrum allows for time-based dimensions of multisensory integration and decision-making to be incorporated into consciousness.",
"title": ""
},
{
"docid": "neg:1840327_10",
"text": "This article reviews the most current practice guidelines in the diagnosis and management of patients born with cleft lip and/or palate. Such patients frequently have multiple medical and social issues that benefit greatly from a team approach. Common challenges include feeding difficulty, nutritional deficiency, speech disorders, hearing problems, ear disease, dental anomalies, and both social and developmental delays, among others. Interdisciplinary evaluation and collaboration throughout a patient's development are essential.",
"title": ""
},
{
"docid": "neg:1840327_11",
"text": "The process of obtaining intravenous (IV) access, Venipuncture, is an everyday invasive procedure in medical settings and there are more than one billion venipuncture related procedures like blood draws, peripheral catheter insertions, intravenous therapies, etc. performed per year [3]. Excessive venipunctures are both time and resource consuming events causing anxiety, pain and distress in patients, or can lead to severe harmful injuries [8]. The major problem faced by the doctors today is difficulty in accessing veins for intra-venous drug delivery & other medical situations [3]. There is a need to develop vein detection devices which can clearly show veins. This project deals with the design development of non-invasive subcutaneous vein detection system and is implemented based on near infrared imaging and interfaced to a laptop to make it portable. A customized CCD camera is used for capturing the vein images and Computer Software modules (MATLAB & LabVIEW) is used for the processing [3].",
"title": ""
},
{
"docid": "neg:1840327_12",
"text": "Almost all of the existing work on Named Entity Recognition (NER) consists of the following pipeline stages – part-of-speech tagging, segmentation, and named entity type classification. The requirement of hand-labeled training data on these stages makes it very expensive to extend to different domains and entity classes. Even with a large amount of hand-labeled data, existing techniques for NER on informal text, such as social media, perform poorly due to a lack of reliable capitalization, irregular sentence structure and a wide range of vocabulary. In this paper, we address the lack of hand-labeled training data by taking advantage of weak super vision signals. We present our approach in two parts. First, we propose a novel generative model that combines the ideas from Hidden Markov Model (HMM) and n-gram language models into what we call an N-gram Language Markov Model (NLMM). Second, we utilize large-scale weak supervision signals from sources such as Wikipedia titles and the corresponding click counts to estimate parameters in NLMM. Our model is simple and can be implemented without the use of Expectation Maximization or other expensive iterative training techniques. Even with this simple model, our approach to NER on informal text outperforms existing systems trained on formal English and matches state-of-the-art NER systems trained on hand-labeled Twitter messages. Because our model does not require hand-labeled data, we can adapt our system to other domains and named entity classes very easily. We demonstrate the flexibility of our approach by successfully applying it to the different domain of extracting food dishes from restaurant reviews with very little extra work.",
"title": ""
},
{
"docid": "neg:1840327_13",
"text": "This paper investigates discrimination capabilities in the texture of fundus images to differentiate between pathological and healthy images. For this purpose, the performance of local binary patterns (LBP) as a texture descriptor for retinal images has been explored and compared with other descriptors such as LBP filtering and local phase quantization. The goal is to distinguish between diabetic retinopathy (DR), age-related macular degeneration (AMD), and normal fundus images analyzing the texture of the retina background and avoiding a previous lesion segmentation stage. Five experiments (separating DR from normal, AMD from normal, pathological from normal, DR from AMD, and the three different classes) were designed and validated with the proposed procedure obtaining promising results. For each experiment, several classifiers were tested. An average sensitivity and specificity higher than 0.86 in all the cases and almost of 1 and 0.99, respectively, for AMD detection were achieved. These results suggest that the method presented in this paper is a robust algorithm for describing retina texture and can be useful in a diagnosis aid system for retinal disease screening.",
"title": ""
},
{
"docid": "neg:1840327_14",
"text": "This paper presents the advantages in extending Classical T ensor Algebra (CTA), also known as Kronecker Algebra, to allow the definition of functions, i.e., functional dependencies among its operands. Such extended tensor algebra have been called Generalized Tenso r Algebra (GTA). Stochastic Automata Networks (SAN) and Superposed Generalized Stochastic Petri Ne ts (SGSPN) formalisms use such Kronecker representations. We show that SAN, which uses GTA, has the sa m application scope of SGSPN, which uses CTA. We also show that any SAN model with functions has at least one equivalent representation without functions. In fact, the use of functions, and conseq uently the GTA, is not really a “need” since there is an equivalence of formalisms, but in some cases it represe nts, in a computational cost point of view, some irrefutable “advantages”. Some modeling examples are pres ent d in order to draw comparisons between the memory needs and CPU time to the generation, and the solution of the presented models.",
"title": ""
},
{
"docid": "neg:1840327_15",
"text": "Systems that enforce memory safety for today’s operating system kernels and other system software do not account for the behavior of low-level software/hardware interactions such as memory-mapped I/O, MMU configuration, and context switching. Bugs in such low-level interactions can lead to violations of the memory safety guarantees provided by a safe execution environment and can lead to exploitable vulnerabilities in system software . In this work, we present a set of program analysis and run-time instrumentation techniques that ensure that errors in these low-level operations do not violate the assumptions made by a safety checking system. Our design introduces a small set of abstractions and interfaces for manipulating processor state, kernel stacks, memory mapped I/O objects, MMU mappings, and self modifying code to achieve this goal, without moving resource allocation and management decisions out of the kernel. We have added these techniques to a compiler-based virtual machine called Secure Virtual Architecture (SVA), to which the standard Linux kernel has been ported previously. Our design changes to SVA required only an additional 100 lines of code to be changed in this kernel. Our experimental results show that our techniques prevent reported memory safety violations due to low-level Linux operations and that these violations are not prevented by SVA without our techniques . Moreover, the new techniques in this paper introduce very little overhead over and above the existing overheads of SVA. Taken together, these results indicate that it is clearly worthwhile to add these techniques to an existing memory safety system.",
"title": ""
},
{
"docid": "neg:1840327_16",
"text": "In this paper, a new offline actor-critic learning algorithm is introduced: Sampled Policy Gradient (SPG). SPG samples in the action space to calculate an approximated policy gradient by using the critic to evaluate the samples. This sampling allows SPG to search the action-Q-value space more globally than deterministic policy gradient (DPG), enabling it to theoretically avoid more local optima. SPG is compared to Q-learning and the actor-critic algorithms CACLA and DPG in a pellet collection task and a self play environment in the game Agar.io. The online game Agar.io has become massively popular on the internet due to intuitive game design and the ability to instantly compete against players around the world. From the point of view of artificial intelligence this game is also very intriguing: The game has a continuous input and action space and allows to have diverse agents with complex strategies compete against each other. The experimental results show that Q-Learning and CACLA outperform a pre-programmed greedy bot in the pellet collection task, but all algorithms fail to outperform this bot in a fighting scenario. The SPG algorithm is analyzed to have great extendability through offline exploration and it matches DPG in performance even in its basic form without extensive sampling.",
"title": ""
},
{
"docid": "neg:1840327_17",
"text": "The rise of Digital B2B Marketing has presented us with new opportunities and challenges as compared to traditional e-commerce. B2B setup is different from B2C setup in many ways. Along with the contrasting buying entity (company vs. individual), there are dissimilarities in order size (few dollars in e-commerce vs. up to several thousands of dollars in B2B), buying cycle (few days in B2C vs. 6–18 months in B2B) and most importantly a presence of multiple decision makers (individual or family vs. an entire company). Due to easy availability of the data and bargained complexities, most of the existing literature has been set in the B2C framework and there are not many examples in the B2B context. We present a unique approach to model next likely action of B2B customers by observing a sequence of digital actions. In this paper, we propose a unique two-step approach to model next likely action using a novel ensemble method that aims to predict the best digital asset to target customers as a next action. The paper provides a unique approach to translate the propensity model at an email address level into a segment that can target a group of email addresses. In the first step, we identify the high propensity customers for a given asset using traditional and advanced multinomial classification techniques and use non-negative least squares to stack rank different assets based on the output for ensemble model. In the second step, we perform a penalized regression to reduce the number of coefficients and obtain the satisfactory segment variables. Using real world digital marketing campaign data, we further show that the proposed method outperforms the traditional classification methods.",
"title": ""
},
{
"docid": "neg:1840327_18",
"text": "In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.",
"title": ""
}
] |
1840328 | Privacy-Preserving Deep Inference for Rich User Data on The Cloud | [
{
"docid": "pos:1840328_0",
"text": "In recent years, privacy-preserving data mining has been studied extensively, because of the wide proliferation of sensitive information on the internet. A number of algorithmic techniques have been designed for privacy-preserving data mining. In this paper, we provide a review of the state-of-the-art methods for privacy. We discuss methods for randomization, k-anonymization, and distributed privacy-preserving data mining. We also discuss cases in which the output of data mining applications needs to be sanitized for privacy-preservation purposes. We discuss the computational and theoretical limits associated with privacy-preservation over high dimensional data sets.",
"title": ""
}
] | [
{
"docid": "neg:1840328_0",
"text": "Skeletonization is a way to reduce dimensionality of digital objects. Here, we present an algorithm that computes the curve skeleton of a surface-like object in a 3D image, i.e., an object that in one of the three dimensions is at most twovoxel thick. A surface-like object consists of surfaces and curves crossing each other. Its curve skeleton is a 1D set centred within the surface-like object and with preserved topological properties. It can be useful to achieve a qualitative shape representation of the object with reduced dimensionality. The basic idea behind our algorithm is to detect the curves and the junctions between different surfaces and prevent their removal as they retain the most significant shape representation. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840328_1",
"text": "Modern neural networks are often augmented with an attention mechanism, which tells the network where to focus within the input. We propose in this paper a new framework for sparse and structured attention, building upon a smoothed max operator. We show that the gradient of this operator defines a mapping from real values to probabilities, suitable as an attention mechanism. Our framework includes softmax and a slight generalization of the recently-proposed sparsemax as special cases. However, we also show how our framework can incorporate modern structured penalties, resulting in more interpretable attention mechanisms, that focus on entire segments or groups of an input. We derive efficient algorithms to compute the forward and backward passes of our attention mechanisms, enabling their use in a neural network trained with backpropagation. To showcase their potential as a drop-in replacement for existing ones, we evaluate our attention mechanisms on three large-scale tasks: textual entailment, machine translation, and sentence summarization. Our attention mechanisms improve interpretability without sacrificing performance; notably, on textual entailment and summarization, we outperform the standard attention mechanisms based on softmax and sparsemax.",
"title": ""
},
{
"docid": "neg:1840328_2",
"text": "Although Bitcoin is often perceived to be an anonymous currency, research has shown that a user’s Bitcoin transactions can be linked to compromise the user’s anonymity. We present solutions to the anonymity problem for both transactions on Bitcoin’s blockchain and off the blockchain (in so called micropayment channel networks). We use an untrusted third party to issue anonymous vouchers which users redeem for Bitcoin. Blind signatures and Bitcoin transaction contracts (aka smart contracts) ensure the anonymity and fairness during the bitcoin ↔ voucher exchange. Our schemes are practical, secure and anonymous.",
"title": ""
},
{
"docid": "neg:1840328_3",
"text": "In the accident of nuclear disasters or biochemical terrors, there is the strong need for robots which can move around and collect information at the disaster site. The robot should have toughness and high mobility in a location of stairs and obstacles. In this study, we propose a brand new type of mobile base named “crank-wheel” suitable for such use. Crank-wheel consists of wheels and connecting coupler link named crank-leg. Crank-wheel makes simple and smooth wheeled motion on flat ground and automatically transforms to the walking motion on rugged terrain as the crank-legs starts to contact the surface of the rugged terrain and acts as legs. This mechanism features its simple, easiness to maintain water and dust proof structure, and limited danger of biting rubbles in the driving mechanism just as the case of tracked vehicles. Effectiveness of the Crank-wheel is confirmed by several driving experiments on debris, sand and bog.",
"title": ""
},
{
"docid": "neg:1840328_4",
"text": "Modern digital systems are severely constrained by both battery life and operating temperatures, resulting in strict limits on total power consumption and power density. To continue to scale digital throughput at constant power density, there is a need for increasing parallelism and dynamic voltage/bias scaling. This work presents an architecture and power converter implementation providing efficient power-delivery for microprocessors and other high-performance digital circuits stacked in vertical voltage domains. A multi-level DC-DC converter interfaces between a fixed DC voltage and multiple 0.7 V to 1.4 V voltage domains stacked in series. The converter implements dynamic voltage scaling (DVS) with multi-objective digital control implemented in an on-board (embedded) digital control system. We present measured results demonstrating functional multi-core DVS and performance with moderate load current steps. The converter demonstrates the use of a two-phase interleaved powertrain with coupled inductors to achieve voltage and current ripple reduction for the stacked ladder-converter architecture.",
"title": ""
},
{
"docid": "neg:1840328_5",
"text": "A method is developed for imputing missing values when the probability of response depends upon the variable being imputed. The missing data problem is viewed as one of parameter estimation in a regression model with stochastic ensoring of the dependent variable. The prediction approach to imputation is used to solve this estimation problem. Wages and salaries are imputed to nonrespondents in the Current Population Survey and the results are compared to the nonrespondents' IRS wage and salary data. The stochastic ensoring approach gives improved results relative to a prediction approach that ignores the response mechanism.",
"title": ""
},
{
"docid": "neg:1840328_6",
"text": "In this paper, we want to study how natural and engineered systems could perform complex optimizations with limited computational and communication capabilities. We adopt a continuous-time dynamical system view rooted in early work on optimization and more recently in network protocol design, and merge it with the dynamic view of distributed averaging systems. We obtain a general approach, based on the control system viewpoint, that allows to analyze and design (distributed) optimization systems converging to the solution of given convex optimization problems. The control system viewpoint provides many insights and new directions of research. We apply the framework to a distributed optimal location problem and demonstrate the natural tracking and adaptation capabilities of the system to changing constraints.",
"title": ""
},
{
"docid": "neg:1840328_7",
"text": "Both industry and academia have extensively investigated hardware accelerations. To address the demands in increasing computational capability and memory requirement, in this work, we propose the structured weight matrices (SWM)-based compression technique for both Field Programmable Gate Array (FPGA) and application-specific integrated circuit (ASIC) implementations. In the algorithm part, the SWM-based framework adopts block-circulant matrices to achieve a fine-grained tradeoff between accuracy and compression ratio. The SWM-based technique can reduce computational complexity from O(n2) to O(nlog n) and storage complexity from O(n2) to O(n) for each layer and both training and inference phases. For FPGA implementations on deep convolutional neural networks (DCNNs), we achieve at least 152X and 72X improvement in performance and energy efficiency, respectively using the SWM-based framework, compared with the baseline of IBM TrueNorth processor under same accuracy constraints using the data set of MNIST, SVHN, and CIFAR-10. For FPGA implementations on long short term memory (LSTM) networks, the proposed SWM-based LSTM can achieve up to 21X enhancement in performance and 33.5X gains in energy efficiency compared with the ESE accelerator. For ASIC implementations, the proposed SWM-based ASIC design exhibits impressive advantages in terms of power, throughput, and energy efficiency. Experimental results indicate that this method is greatly suitable for applying DNNs onto both FPGAs and mobile/IoT devices.",
"title": ""
},
{
"docid": "neg:1840328_8",
"text": "American students rank well below international peers in the disciplines of science, technology, engineering, and mathematics (STEM). Early exposure to STEM-related concepts is critical to later academic achievement. Given the rise of tablet-computer use in early childhood education settings, interactive technology might be one particularly fruitful way of supplementing early STEM education. Using a between-subjects experimental design, we sought to determine whether preschoolers could learn a fundamental math concept (i.e., measurement with non-standard units) from educational technology, and whether interactivity is a crucial component of learning from that technology. Participants who either played an interactive tablet-based game or viewed a non-interactive video demonstrated greater transfer of knowledge than those assigned to a control condition. Interestingly, interactivity contributed to better performance on near transfer tasks, while participants in the non-interactive condition performed better on far transfer tasks. Our findings suggest that, while preschool-aged children can learn early STEM skills from educational technology, interactivity may only further support learning in certain",
"title": ""
},
{
"docid": "neg:1840328_9",
"text": "An Ambient Intelligence (AmI) environment is primary developed using intelligent agents and wireless sensor networks. The intelligent agents could automatically obtain contextual information in real time using Near Field Communication (NFC) technique and wireless ad-hoc networks. In this research, we propose a stock trading and recommendation system with mobile devices (Android platform) interface in the over-the-counter market (OTC) environments. The proposed system could obtain the real-time financial information of stock price through a multi-agent architecture with plenty of useful features. In addition, NFC is used to achieve a context-aware environment allowing for automatic acquisition and transmission of useful trading recommendations and relevant stock information for investors. Finally, AmI techniques are applied to successfully create smart investment spaces, providing investors with useful monitoring tools and investment recommendation.",
"title": ""
},
{
"docid": "neg:1840328_10",
"text": "We consider the problem of using image queries to retrieve videos from a database. Our focus is on large-scale applications, where it is infeasible to index each database video frame independently. Our main contribution is a framework based on Bloom filters, which can be used to index long video segments, enabling efficient image-to-video comparisons. Using this framework, we investigate several retrieval architectures, by considering different types of aggregation and different functions to encode visual information – these play a crucial role in achieving high performance. Extensive experiments show that the proposed technique improves mean average precision by 24% on a public dataset, while being 4× faster, compared to the previous state-of-the-art.",
"title": ""
},
{
"docid": "neg:1840328_11",
"text": "Object detection methods fall into two categories, i.e., two-stage and single-stage detectors. The former is characterized by high detection accuracy while the latter usually has considerable inference speed. Hence, it is imperative to fuse their metrics for a better accuracy vs. speed trade-off. To this end, we propose a dual refinement network (DRN) to boost the performance of the single-stage detector. Inheriting from the advantages of two-stage approaches (i.e., two-step regression and accurate features for detection), anchor refinement and feature offset refinement are conducted in anchor-offset detection, where the detection head is comprised of deformable convolutions. Moreover, to leverage contextual information for describing objects, we design a multi-deformable head, in which multiple detection paths with different receptive field sizes devote themselves to detecting objects. Extensive experiments on PASCAL VOC and ImageNet VID datasets are conducted, and we achieve the state-of-the-art results and a better accuracy vs. speed trade-off, i.e., 81.4% mAP vs. 42.3 FPS on VOC2007 test set. Codes will be publicly available.",
"title": ""
},
{
"docid": "neg:1840328_12",
"text": "The proliferation of internet along with the attractiveness of the web in recent years has made web mining as the research area of great magnitude. Web mining essentially has many advantages which makes this technology attractive to researchers. The analysis of web user’s navigational pattern within a web site can provide useful information for applications like, server performance enhancements, restructuring a web site, direct marketing in ecommerce etc. The navigation paths may be explored based on some similarity criteria, in order to get the useful inference about the usage of web. The objective of this paper is to propose an effective clustering technique to group users’ sessions by modifying K-means algorithm and suggest a method to compute the distance between sessions based on similarity of their web access path, which takes care of the issue of the user sessions that are of variable",
"title": ""
},
{
"docid": "neg:1840328_13",
"text": "Although motion blur and rolling shutter deformations are closely coupled artifacts in images taken with CMOS image sensors, the two phenomena have so far mostly been treated separately, with deblurring algorithms being unable to handle rolling shutter wobble, and rolling shutter algorithms being incapable of dealing with motion blur. We propose an approach that delivers sharp and undistorted output given a single rolling shutter motion blurred image. The key to achieving this is a global modeling of the camera motion trajectory, which enables each scanline of the image to be deblurred with the corresponding motion segment. We show the results of the proposed framework through experiments on synthetic and real data.",
"title": ""
},
{
"docid": "neg:1840328_14",
"text": "In order to improve the life quality of amputees, providing approximate manipulation ability of a human hand to that of a prosthetic hand is considered by many researchers. In this study, a biomechanical model of the index finger of the human hand is developed based on the human anatomy. Since the activation of finger bones are carried out by tendons, a tendon configuration of the index finger is introduced and used in the model to imitate the human hand characteristics and functionality. Then, fuzzy sliding mode control where the slope of the sliding surface is tuned by a fuzzy logic unit is proposed and applied to have the finger model to follow a certain trajectory. The trajectory of the finger model, which mimics the motion characteristics of the human hand, is pre-determined from the camera images of a real hand during closing and opening motion. Also, in order to check the robust behaviour of the controller, an unexpected joint friction is induced on the prosthetic finger on its way. Finally, the resultant prosthetic finger motion and the tendon forces produced are given and results are discussed.",
"title": ""
},
{
"docid": "neg:1840328_15",
"text": "6 1 and cost of the product. Not all materials can be scaled-up with the same mixing process. Frequently, scaling-up the mixing process from small research batches to large quantities, necessary for production, can lead to unexpected problems. This reference book is intended to help the reader both identify and solve mixing problems. It is a comprehensive handbook that provides excellent coverage on the fundamentals, design, and applications of current mixing technology in general. Although this book includes many technology areas, one of main areas of interest to our readers would be in the polymer processing area. This would include the first eight chapters in the book and a specific application chapter on polymer processing. These cover the fundamentals of mixing technology, important to polymer processing, including residence time distributions and laminar mixing techniques. In the experimental section of the book, some of the relevant tools and techniques cover flow visualization technologies, lab scale mixing, flow and torque measurements, CFD coding, and numerical methods. There is a good overview of various types of mixers used for polymer processing in a dedicated applications chapter on mixing high viscosity materials such as polymers. There are many details given on the differences between the mixing blades in various types of high viscosity mixers and suggestions for choosing the proper mixer for high viscosity applications. The majority of the book does, however, focus on the chemical, petroleum, and pharmaceutical industries that generally process materials with much lower viscosity than polymers. The reader interested in learning about the fundamentals of mixing in general as well as some specifics on polymer processing would find this book to be a useful reference.",
"title": ""
},
{
"docid": "neg:1840328_16",
"text": "In this paper, a novel subspace method called diagonal principal component analysis (DiaPCA) is proposed for face recognition. In contrast to standard PCA, DiaPCA directly seeks the optimal projective vectors from diagonal face images without image-to-vector transformation. While in contrast to 2DPCA, DiaPCA reserves the correlations between variations of rows and those of columns of images. Experiments show that DiaPCA is much more accurate than both PCA and 2DPCA. Furthermore, it is shown that the accuracy can be further improved by combining DiaPCA with 2DPCA.",
"title": ""
},
{
"docid": "neg:1840328_17",
"text": "A novel multi-objective evolutionary algorithm (MOEA) is developed based on Imperialist Competitive Algorithm (ICA), a newly introduced evolutionary algorithm (EA). Fast non-dominated sorting and the Sigma method are employed for ranking the solutions. The algorithm is tested on six well-known test functions each of them incorporate a particular feature that may cause difficulty to MOEAs. The numerical results indicate that MOICA shows significantly higher efficiency in terms of accuracy and maintaining a diverse population of solutions when compared to the existing salient MOEAs, namely fast elitism non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO). Considering the computational time, the proposed algorithm is slightly faster than MOPSO and significantly outperforms NSGA-II. KEYWORD Multi-objective Imperialist Competitive Algorithm, Multi-objective optimization, Pareto front.",
"title": ""
},
{
"docid": "neg:1840328_18",
"text": "Ontologies now play an important role for many knowledge-intensive applications for which they provide a source of precisely defined terms. However, with their wide-spread usage there come problems concerning their proliferation. Ontology engineers or users frequently have a core ontology that they use, e.g., for browsing or querying data, but they need to extend it with, adapt it to, or compare it with the large set of other ontologies. For the task of detecting and retrieving relevant ontologies, one needs means for measuring the similarity between ontologies. We present a set of ontology similarity measures and a multiple-phase empirical evaluation.",
"title": ""
}
] |
1840329 | THE FAILURE OF E-GOVERNMENT IN DEVELOPING COUNTRIES: A LITERATURE REVIEW | [
{
"docid": "pos:1840329_0",
"text": "E-governance is more than just a government website on the Internet. The strategic objective of e-governance is to support and simplify governance for all parties; government, citizens and businesses. The use of ICTs can connect all three parties and support processes and activities. In other words, in e-governance electronic means support and stimulate good governance. Therefore, the objectives of e-governance are similar to the objectives of good governance. Good governance can be seen as an exercise of economic, political, and administrative authority to better manage affairs of a country at all levels. It is not difficult for people in developed countries to imagine a situation in which all interaction with government can be done through one counter 24 hours a day, 7 days a week, without waiting in lines. However to achieve this same level of efficiency and flexibility for developing countries is going to be difficult. The experience in developed countries shows that this is possible if governments are willing to decentralize responsibilities and processes, and if they start to use electronic means. This paper is going to examine the legal and infrastructure issues related to e-governance from the perspective of developing countries. Particularly it will examine how far the developing countries have been successful in providing a legal framework.",
"title": ""
}
] | [
{
"docid": "neg:1840329_0",
"text": "This paper presents a class of linear predictors for nonlinear controlled dynamical systems. The basic idea is to lift (or embed) the nonlinear dynamics into a higher dimensional space where its evolution is approximately linear. In an uncontrolled setting, this procedure amounts to numerical approximations of the Koopman operator associated to the nonlinear dynamics. In this work, we extend the Koopman operator to controlled dynamical systems and apply the Extended Dynamic Mode Decomposition (EDMD) to compute a finite-dimensional approximation of the operator in such a way that this approximation has the form of a linear controlled dynamical system. In numerical examples, the linear predictors obtained in this way exhibit a performance superior to existing linear predictors such as those based on local linearization or the so called Carleman linearization. Importantly, the procedure to construct these linear predictors is completely data-driven and extremely simple – it boils down to a nonlinear transformation of the data (the lifting) and a linear least squares problem in the lifted space that can be readily solved for large data sets. These linear predictors can be readily used to design controllers for the nonlinear dynamical system using linear controller design methodologies. We focus in particular on model predictive control (MPC) and show that MPC controllers designed in this way enjoy computational complexity of the underlying optimization problem comparable to that of MPC for a linear dynamical system with the same number of control inputs and the same dimension of the state-space. Importantly, linear inequality constraints on the state and control inputs as well as nonlinear constraints on the state can be imposed in a linear fashion in the proposed MPC scheme. Similarly, cost functions nonlinear in the state variable can be handled in a linear fashion. We treat both the full-state measurement case and the input-output case, as well as systems with disturbances / noise. Numerical examples (including a high-dimensional nonlinear PDE control) demonstrate the approach with the source code available online2.",
"title": ""
},
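The passage above reduces prediction of a controlled nonlinear system to a lifting step plus a linear least-squares fit. A minimal Python/NumPy sketch of that idea follows; the Gaussian RBF lifting, the toy dynamics, and all variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lift(X, centers, eps=1.0):
    """Lift states (n x d) with Gaussian RBF features, keeping the raw state up front."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.hstack([X, np.exp(-eps * d2)])

# Toy controlled system x+ = 0.9*x + 0.1*x**2 + u (an assumption, only for illustration)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (2000, 1))
U = rng.uniform(-0.5, 0.5, (2000, 1))
Xp = 0.9 * X + 0.1 * X ** 2 + U

centers = rng.uniform(-1, 1, (20, 1))
Z, Zp = lift(X, centers), lift(Xp, centers)

# EDMD-style least squares for a linear predictor  z+ ≈ A z + B u
G = np.hstack([Z, U])
K, *_ = np.linalg.lstsq(G, Zp, rcond=None)
A, B = K[:Z.shape[1]].T, K[Z.shape[1]:].T

z0 = lift(np.array([[0.3]]), centers)
z_next = z0 @ A.T + np.array([[0.1]]) @ B.T
print(z_next[0, 0])        # predicted next state; the first lifted coordinate is x itself
```

Because the predictor is linear in the lifted coordinates, it can be dropped directly into standard linear MPC machinery, which is the design choice the abstract emphasizes.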
{
"docid": "neg:1840329_1",
"text": "A growing amount of research focuses on learning in group settings and more specifically on learning in computersupported collaborative learning (CSCL) settings. Studies on western students indicate that online collaboration enhances student learning achievement; however, few empirical studies have examined student satisfaction, performance, and knowledge construction through online collaboration from a cross-cultural perspective. This study examines satisfaction, performance, and knowledge construction via online group discussions of students in two different cultural contexts. Students were both first-year university students majoring in educational sciences at a Flemish university and a Chinese university. Differences and similarities of the two groups of students with regard to satisfaction, learning process, and achievement were analyzed.",
"title": ""
},
{
"docid": "neg:1840329_2",
"text": "OBJECTIVES\nWe studied whether park size, number of features in the park, and distance to a park from participants' homes were related to a park being used for physical activity.\n\n\nMETHODS\nWe collected observational data on 28 specific features from 33 parks. Adult residents in surrounding areas (n=380) completed 7-day physical activity logs that included the location of their activities. We used logistic regression to examine the relative importance of park size, features, and distance to participants' homes in predicting whether a park was used for physical activity, with control for perceived neighborhood safety and aesthetics.\n\n\nRESULTS\nParks with more features were more likely to be used for physical activity; size and distance were not significant predictors. Park facilities were more important than were park amenities. Of the park facilities, trails had the strongest relationship with park use for physical activity.\n\n\nCONCLUSIONS\nSpecific park features may have significant implications for park-based physical activity. Future research should explore these factors in diverse neighborhoods and diverse parks among both younger and older populations.",
"title": ""
},
{
"docid": "neg:1840329_3",
"text": "Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.",
"title": ""
},
{
"docid": "neg:1840329_4",
"text": "An energy-efficient gait planning (EEGP) and control system is established for biped robots with three-mass inverted pendulum mode (3MIPM), which utilizes both vertical body motion (VBM) and allowable zero-moment-point (ZMP) region (AZR). Given a distance to be traveled, we newly designed an online gait synthesis algorithm to construct a complete walking cycle, i.e., a starting step, multiple cyclic steps, and a stopping step, in which: 1) ZMP was fully manipulated within AZR; and 2) vertical body movement was allowed to relieve knee bending. Moreover, gait parameter optimization is effectively performed to determine the optimal set of gait parameters, i.e., average body height and amplitude of VBM, number of steps, and average walking speed, which minimizes energy consumption of actuation motors for leg joints under practical constraints, i.e., geometrical constraints, friction force limit, and yawing moment limit. Various simulations were conducted to identify the effectiveness of the proposed method and verify energy-saving performance for various ZMP regions. Our control system was implemented and tested on the humanoid robot DARwIn-OP.",
"title": ""
},
{
"docid": "neg:1840329_5",
"text": "This paper presents a new probabilistic model of information retrieval. The most important modeling assumption made is that documents and queries are defined by an ordered sequence of single terms. This assumption is not made in well-known existing models of information retrieval, but is essential in the field of statistical natural language processing. Advances already made in statistical natural language processing will be used in this paper to formulate a probabilistic justification for using tf×idf term weighting. The paper shows that the new probabilistic interpretation of tf×idf term weighting might lead to better understanding of statistical ranking mechanisms, for example by explaining how they relate to coordination level ranking. A pilot experiment on the TREC collection shows that the linguistically motivated weighting algorithm outperforms the popular BM25 weighting algorithm.",
"title": ""
},
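As a point of reference for the weighting scheme whose probabilistic justification the passage above discusses, here is the plain tf×idf computation in a few lines of Python; the toy documents are invented, and the paper's language-modelling derivation and BM25 comparison are not reproduced here.

```python
import math
from collections import Counter

def tfidf(docs):
    """Plain tf x idf weights for a list of tokenized documents."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))          # document frequencies
    return [{t: tf * math.log(n / df[t]) for t, tf in Counter(d).items()} for d in docs]

docs = [["information", "retrieval", "ranking"],
        ["statistical", "language", "model", "ranking"],
        ["probabilistic", "retrieval", "model"]]
for weights in tfidf(docs):
    print(weights)
```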
{
"docid": "neg:1840329_6",
"text": "People design what they say specifically for their conversational partners, and they adapt to their partners over the course of a conversation. A comparison of keyboard conversations involving a simulated computer partner (as in a natural language interface) with those involving a human partner (as in teleconferencing) yielded striking differences and some equally striking similarities. For instance, there were significantly fewer acknowledgments in human/computer dialogue than in human/human. However, regardless of the conversational partner, people expected connectedness across conversational turns. In addition, the style of a partner's response shaped what people subsequently typed. These results suggest some issues that need to be addressed before a natural language computer interface will be able to hold up its end of a conversation.",
"title": ""
},
{
"docid": "neg:1840329_7",
"text": "Teaching by examples and cases is widely used to promote learning, but it varies widely in its effectiveness. The authors test an adaptation to case-based learning that facilitates abstracting problemsolving schemas from examples and using them to solve further problems: analogical encoding, or learning by drawing a comparison across examples. In 3 studies, the authors examined schema abstraction and transfer among novices learning negotiation strategies. Experiment 1 showed a benefit for analogical learning relative to no case study. Experiment 2 showed a marked advantage for comparing two cases over studying the 2 cases separately. Experiment 3 showed that increasing the degree of comparison support increased the rate of transfer in a face-to-face dynamic negotiation exercise.",
"title": ""
},
{
"docid": "neg:1840329_8",
"text": "Despite years of HCI research on digital technology in museums, it is still unclear how different interactions impact on visitors'. A comparative evaluation of smart replicas, phone app and smart cards looked at the personal preferences, behavioural change, and the appeal of mobiles in museums. 76 participants used all three interaction modes and gave their opinions in a questionnaire; participants interaction was also observed. The results show the phone is the most disliked interaction mode while tangible interaction (smart card and replica combined) is the most liked. Preference for the phone favour mobility to the detriment of engagement with the exhibition. Different behaviours when interacting with the phone or the tangibles where observed. The personal visiting style appeared to be only marginally affected by the device. Visitors also expect museums to provide the phones against the current trend of developing apps in a \"bring your own device\" approach.",
"title": ""
},
{
"docid": "neg:1840329_9",
"text": "Predefined categories can be assigned to the natural language text using for text classification. It is a “bag-of-word” representation, previous documents have a word with values, it represents how frequently this word appears in the document or not. But large documents may face many problems because they have irrelevant or abundant information is there. This paper explores the effect of other types of values, which express the distribution of a word in the document. These values are called distributional features. All features are calculated by tfidf style equation and these features are combined with machine learning techniques. Term frequency is one of the major factor for distributional features it holds weighted item set. When the need is to minimize a certain score function, discovering rare data correlations is more interesting than mining frequent ones. This paper tackles the issue of discovering rare and weighted item sets, i.e., the infrequent weighted item set mining problem. The classifier which gives the more accurate result is selected for categorization. Experiments show that the distributional features are useful for text categorization.",
"title": ""
},
{
"docid": "neg:1840329_10",
"text": "In this article, the authors present a psychodynamically oriented psychotherapy approach for posttraumatic stress disorder (PTSD) related to childhood abuse. This neurobiologically informed, phase-oriented treatment approach, which has been developed in Germany during the past 20 years, takes into account the broad comorbidity and the large degree of ego-function impairment typically found in these patients. Based on a psychodynamic relationship orientation, this treatment integrates a variety of trauma-specific imaginative and resource-oriented techniques. The approach places major emphasis on the prevention of vicarious traumatization. The authors are presently planning to test the approach in a randomized controlled trial aimed at strengthening the evidence base for psychodynamic psychotherapy in PTSD.",
"title": ""
},
{
"docid": "neg:1840329_11",
"text": "A major challenge in real-world feature matching problems is to tolerate the numerous outliers arising in typical visual tasks. Variations in object appearance, shape, and structure within the same object class make it harder to distinguish inliers from outliers due to clutters. In this paper, we propose a max-pooling approach to graph matching, which is not only resilient to deformations but also remarkably tolerant to outliers. The proposed algorithm evaluates each candidate match using its most promising neighbors, and gradually propagates the corresponding scores to update the neighbors. As final output, it assigns a reliable score to each match together with its supporting neighbors, thus providing contextual information for further verification. We demonstrate the robustness and utility of our method with synthetic and real image experiments.",
"title": ""
},
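A rough Python sketch of the max-pooling idea described above: each candidate correspondence accumulates support only from the best-scoring candidate of every other source feature. The affinity matrix is synthetic and the update rule is a simplified reading of the method, not the authors' exact formulation.

```python
import numpy as np

def max_pooling_match(affinity, n1, n2, iters=30):
    """affinity[i*n2+j, k*n2+l] scores compatibility of matches (i->j) and (k->l)."""
    x = np.ones(n1 * n2)
    for _ in range(iters):
        new_x = np.zeros_like(x)
        for a in range(n1 * n2):
            s = x[a] * affinity[a, a]                     # unary term for match a
            for k in range(n1):
                if k == a // n2:                          # skip a's own source feature
                    continue
                block = affinity[a, k * n2:(k + 1) * n2] * x[k * n2:(k + 1) * n2]
                s += block.max()                          # only the most promising neighbor match
            new_x[a] = s
        x = new_x / np.linalg.norm(new_x)
    return x.reshape(n1, n2)

rng = np.random.default_rng(1)
n1 = n2 = 4
A = rng.random((n1 * n2, n1 * n2))
A = (A + A.T) / 2                                         # synthetic symmetric affinities
scores = max_pooling_match(A, n1, n2)
print(scores.argmax(axis=1))                              # greedy row-wise assignment
```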
{
"docid": "neg:1840329_12",
"text": "In this paper, an ultra-compact single-chip solar energy harvesting IC using on-chip solar cell for biomedical implant applications is presented. By employing an on-chip charge pump with parallel connected photodiodes, a 3.5 <inline-formula> <tex-math notation=\"LaTeX\">$\\times$</tex-math></inline-formula> efficiency improvement can be achieved when compared with the conventional stacked photodiode approach to boost the harvested voltage while preserving a single-chip solution. A photodiode-assisted dual startup circuit (PDSC) is also proposed to improve the area efficiency and increase the startup speed by 77%. By employing an auxiliary charge pump (AQP) using zero threshold voltage (ZVT) devices in parallel with the main charge pump, a low startup voltage of 0.25 V is obtained while minimizing the reversion loss. A <inline-formula> <tex-math notation=\"LaTeX\">$4\\, {\\mathbf{V}}_{\\mathbf{in}}$</tex-math></inline-formula> gate drive voltage is utilized to reduce the conduction loss. Systematic charge pump and solar cell area optimization is also introduced to improve the energy harvesting efficiency. The proposed system is implemented in a standard 0.18- <inline-formula> <tex-math notation=\"LaTeX\">$\\mu\\text{m}$</tex-math></inline-formula> CMOS technology and occupies an active area of 1.54 <inline-formula> <tex-math notation=\"LaTeX\">$\\text{mm}^{2}$</tex-math></inline-formula>. Measurement results show that the on-chip charge pump can achieve a maximum efficiency of 67%. With an incident power of 1.22 <inline-formula> <tex-math notation=\"LaTeX\">$\\text{mW/cm}^{2}$</tex-math></inline-formula> from a halogen light source, the proposed energy harvesting IC can deliver an output power of 1.65 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu\\text{W}$</tex-math></inline-formula> at 64% charge pump efficiency. The chip prototype is also verified using <italic>in-vitro</italic> experiment.",
"title": ""
},
{
"docid": "neg:1840329_13",
"text": "Double spending and blockchain forks are two main issues that the Bitcoin crypto-system is confronted with. The former refers to an adversary's ability to use the very same coin more than once while the latter reflects the occurrence of transient inconsistencies in the history of the blockchain distributed data structure. We present a new approach to tackle these issues: it consists in adding some local synchronization constraints on Bitcoin's validation operations, and in making these constraints independent from the native blockchain protocol. Synchronization constraints are handled by nodes which are randomly and dynamically chosen in the Bitcoin system. We show that with such an approach, content of the blockchain is consistent with all validated transactions and blocks which guarantees the absence of both double-spending attacks and blockchain forks.",
"title": ""
},
{
"docid": "neg:1840329_14",
"text": "One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly",
"title": ""
},
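A toy sketch in the spirit of the lexicon-based aggregation described above. The opinion word lists, the inverse-distance weighting, and the one-token negation rule are simplified assumptions for illustration; the actual system handles context-dependent opinion words and many more linguistic patterns.

```python
POSITIVE = {"great", "amazing", "good", "excellent"}
NEGATIVE = {"bad", "poor", "terrible", "noisy"}
NEGATIONS = {"not", "no", "never"}

def feature_orientation(tokens, feature_idx):
    """Aggregate opinion words around a product feature, weighting by distance."""
    score = 0.0
    for i, w in enumerate(tokens):
        if w in POSITIVE or w in NEGATIVE:
            polarity = 1.0 if w in POSITIVE else -1.0
            if i > 0 and tokens[i - 1] in NEGATIONS:     # crude negation flip
                polarity = -polarity
            score += polarity / (abs(i - feature_idx) or 1)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tokens = "the battery life is great but the screen looks bad".split()
print(feature_orientation(tokens, tokens.index("battery")))   # positive
print(feature_orientation(tokens, tokens.index("screen")))    # negative
```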
{
"docid": "neg:1840329_15",
"text": "The flipped classroom pedagogy has achieved significant mention in academic circles in recent years. \"Flipping\" involves the reinvention of a traditional course so that students engage with learning materials via recorded lectures and interactive exercises prior to attending class and then use class time for more interactive activities. Proper implementation of a flipped classroom is difficult to gauge, but combines successful techniques for distance education with constructivist learning theory in the classroom. While flipped classrooms are not a novel concept, technological advances and increased comfort with distance learning have made the tools to produce and consume course materials more pervasive. Flipped classroom experiments have had both positive and less-positive results and are generally measured by a significant improvement in learning outcomes. This study, however, analyzes the opinions of students in a flipped sophomore-level information technology course by using a combination of surveys and reflective statements. The author demonstrates that at the outset students are new - and somewhat receptive - to the concept of the flipped classroom. By the conclusion of the course satisfaction with the pedagogy is significant. Finally, student feedback is provided in an effort to inform instructors in the development of their own flipped classrooms.",
"title": ""
},
{
"docid": "neg:1840329_16",
"text": "In this article, I provide commentary on the Rudd et al. (2009) article advocating thorough informed consent with suicidal clients. I examine the Rudd et al. recommendations in light of their previous empirical-research and clinical-practice articles on suicidality, and from the perspective of clinical practice with suicidal clients in university counseling center settings. I conclude that thorough informed consent is a clinical intervention that is still in preliminary stages of development, necessitating empirical research and clinical training before actual implementation as an ethical clinical intervention. (PsycINFO Database Record (c) 2010 APA, all rights reserved).",
"title": ""
},
{
"docid": "neg:1840329_17",
"text": "Hoffa's (infrapatellar) fat pad (HFP) is one of the knee fat pads interposed between the joint capsule and the synovium. Located posterior to patellar tendon and anterior to the capsule, the HFP is richly innervated and, therefore, one of the sources of anterior knee pain. Repetitive local microtraumas, impingement, and surgery causing local bleeding and inflammation are the most frequent causes of HFP pain and can lead to a variety of arthrofibrotic lesions. In addition, the HFP may be secondarily involved to menisci and ligaments disorders, injuries of the patellar tendon and synovial disorders. Patients with oedema or abnormalities of the HFP on magnetic resonance imaging (MRI) are often symptomatic; however, these changes can also be seen in asymptomatic patients. Radiologists should be cautious in emphasising abnormalities of HFP since they do not always cause pain and/or difficulty in walking and, therefore, do not require therapy. Teaching Points • Hoffa's fat pad (HFP) is richly innervated and, therefore, a source of anterior knee pain. • HFP disorders are related to traumas, involvement from adjacent disorders and masses. • Patients with abnormalities of the HFP on MRI are often but not always symptomatic. • Radiologists should be cautious in emphasising abnormalities of HFP.",
"title": ""
},
{
"docid": "neg:1840329_18",
"text": "Smart grids equipped with bi-directional communication flow are expected to provide more sophisticated consumption monitoring and energy trading. However, the issues related to the security and privacy of consumption and trading data present serious challenges. In this paper we address the problem of providing transaction security in decentralized smart grid energy trading without reliance on trusted third parties. We have implemented a proof-of-concept for decentralized energy trading system using blockchain technology, multi-signatures, and anonymous encrypted messaging streams, enabling peers to anonymously negotiate energy prices and securely perform trading transactions. We conducted case studies to perform security analysis and performance evaluation within the context of the elicited security and privacy requirements.",
"title": ""
}
] |
1840330 | DSGAN: Generative Adversarial Training for Distant Supervision Relation Extraction | [
{
"docid": "pos:1840330_0",
"text": "Distant supervision for relation extraction is an efficient method to scale relation extraction to very large corpora which contains thousands of relations. However, the existing approaches have flaws on selecting valid instances and lack of background knowledge about the entities. In this paper, we propose a sentence-level attention model to select the valid instances, which makes full use of the supervision information from knowledge bases. And we extract entity descriptions from Freebase and Wikipedia pages to supplement background knowledge for our task. The background knowledge not only provides more information for predicting relations, but also brings better entity representations for the attention module. We conduct three experiments on a widely used dataset and the experimental results show that our approach outperforms all the baseline systems significantly.",
"title": ""
},
{
"docid": "pos:1840330_1",
"text": "Entity pair provide essential information for identifying relation type. Aiming at this characteristic, Position Feature is widely used in current relation classification systems to highlight the words close to them. However, semantic knowledge involved in entity pair has not been fully utilized. To overcome this issue, we propose an Entity-pair-based Attention Mechanism, which is specially designed for relation classification. Recently, attention mechanism significantly promotes the development of deep learning in NLP. Inspired by this, for specific instance(entity pair, sentence), the corresponding entity pair information is incorporated as prior knowledge to adaptively compute attention weights for generating sentence representation. Experimental results on SemEval-2010 Task 8 dataset show that our method outperforms most of the state-of-the-art models, without external linguistic features.",
"title": ""
},
{
"docid": "pos:1840330_2",
"text": "We survey recent approaches to noise reduction in distant supervision learning for relation extraction. We group them according to the principles they are based on: at-least-one constraints, topic-based models, or pattern correlations. Besides describing them, we illustrate the fundamental differences and attempt to give an outlook to potentially fruitful further research. In addition, we identify related work in sentiment analysis which could profit from approaches to noise reduction.",
"title": ""
},
{
"docid": "pos:1840330_3",
"text": "In relation extraction, distant supervision seeks to extract relations between entities from text by using a knowledge base, such as Freebase, as a source of supervision. When a sentence and a knowledge base refer to the same entity pair, this approach heuristically labels the sentence with the corresponding relation in the knowledge base. However, this heuristic can fail with the result that some sentences are labeled wrongly. This noisy labeled data causes poor extraction performance. In this paper, we propose a method to reduce the number of wrong labels. We present a novel generative model that directly models the heuristic labeling process of distant supervision. The model predicts whether assigned labels are correct or wrong via its hidden variables. Our experimental results show that this model detected wrong labels with higher performance than baseline methods. In the experiment, we also found that our wrong label reduction boosted the performance of relation extraction.",
"title": ""
}
] | [
{
"docid": "neg:1840330_0",
"text": "The distinguishing feature of the Fog Computing (FC) paradigm is that FC spreads communication and computing resources over the wireless access network, so as to provide resource augmentation to resource and energy-limited wireless (possibly mobile) devices. Since FC would lead to substantial reductions in energy consumption and access latency, it will play a key role in the realization of the Fog of Everything (FoE) paradigm. The core challenge of the resulting FoE paradigm is tomaterialize the seamless convergence of three distinct disciplines, namely, broadband mobile communication, cloud computing, and Internet of Everything (IoE). In this paper, we present a new IoE architecture for FC in order to implement the resulting FoE technological platform. Then, we elaborate the related Quality of Service (QoS) requirements to be satisfied by the underlying FoE technological platform. Furthermore, in order to corroborate the conclusion that advancements in the envisioned architecture description, we present: (i) the proposed energy-aware algorithm adopt Fog data center; and, (ii) the obtained numerical performance, for a real-world case study that shows that our approach saves energy consumption impressively in theFog data Center compared with the existing methods and could be of practical interest in the incoming Fog of Everything (FoE) realm.",
"title": ""
},
{
"docid": "neg:1840330_1",
"text": "Massive graphs arise naturally in a lot of applications, especially in communication networks like the internet. The size of these graphs makes it very hard or even impossible to store set of edges in the main memory. Thus, random access to the edges can't be realized, which makes most o ine algorithms unusable. This essay investigates e cient algorithms that read the edges only in a xed sequential order. Since even basic graph problems often need at least linear space in the number of vetices to be solved, the storage space bounds are relaxed compared to the classic streaming model, such that the bound is O(n · polylog n). The essay describes algorithms for approximations of the unweighted and weighted matching problem and gives a o(log1− n) lower bound for approximations of the diameter. Finally, some results for further graph problems are discussed.",
"title": ""
},
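For the unweighted matching problem mentioned above, the textbook one-pass greedy algorithm (a 1/2-approximation using O(n) state) illustrates the semi-streaming setting; it stands in for, but is not necessarily identical to, the algorithms the essay surveys.

```python
def streaming_greedy_matching(edge_stream):
    """One pass over the edge stream, O(n) state: keep an edge iff both endpoints are free."""
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

edges = [(1, 2), (2, 3), (3, 4), (5, 6), (4, 5)]
print(streaming_greedy_matching(edges))   # [(1, 2), (3, 4), (5, 6)]
```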
{
"docid": "neg:1840330_2",
"text": "In recent times we tend to use a number of surveillance systems for monitoring the targeted area. This requires an enormous amount of storage space along with a lot of human power in order to implement and monitor the area under surveillance. This is supposed to be costly and not a reliable process. In this paper we propose an intelligent surveillance system that continuously monitors the targeted area and detects motion in each and every frame. If the system detects motion in the targeted area then a notification is automatically sent to the user by sms and the video starts getting recorded till the motion is stopped. Using this method the required memory space for storing the video is reduced since it doesn't store the entire video but stores the video only when a motion is detected. This is achieved by using real time video processing using open CV (computer vision / machine vision) technology and raspberry pi system.",
"title": ""
},
{
"docid": "neg:1840330_3",
"text": "Online action detection is a challenging problem: a system needs to decide what action is happening at the current frame, based on previous frames only. Fortunately in real-life, human actions are not independent from one another: there are strong (long-term) dependencies between them. An online action detection method should be able to capture these dependencies, to enable a more accurate early detection. At first sight, an LSTM seems very suitable for this problem. It is able to model both short-term and long-term patterns. It takes its input one frame at the time, updates its internal state and gives as output the current class probabilities. In practice, however, the detection results obtained with LSTMs are still quite low. In this work, we start from the hypothesis that it may be too difficult for an LSTM to learn both the interpretation of the input and the temporal patterns at the same time. We propose a two-stream feedback network, where one stream processes the input and the other models the temporal relations. We show improved detection accuracy on an artificial toy dataset and on the Breakfast Dataset [21] and the TVSeries Dataset [7], reallife datasets with inherent temporal dependencies between the actions.",
"title": ""
},
{
"docid": "neg:1840330_4",
"text": "This article provides a tutorial overview of cognitive architectures that can form a theoretical foundation for designing multimedia instruction. Cognitive architectures include a description of memory stores, memory codes, and cognitive operations. Architectures that are relevant to multimedia learning include Paivio’s dual coding theory, Baddeley’s working memory model, Engelkamp’s multimodal theory, Sweller’s cognitive load theory, Mayer’s multimedia learning theory, and Nathan’s ANIMATE theory. The discussion emphasizes the interplay between traditional research studies and instructional applications of this research for increasing recall, reducing interference, minimizing cognitive load, and enhancing understanding. Tentative conclusions are that (a) there is general agreement among the different architectures, which differ in focus; (b) learners’ integration of multiple codes is underspecified in the models; (c) animated instruction is not required when mental simulations are sufficient; (d) actions must be meaningful to be successful; and (e) multimodal instruction is superior to targeting modality-specific individual differences.",
"title": ""
},
{
"docid": "neg:1840330_5",
"text": "Many environmental variables that are important for the development of chironomid larvae (such as water temperature, oxygen availability, and food quantity) are related to water depth, and a statistically strong relationship between chironomid distribution and water depth is therefore expected. This study focuses on the distribution of fossil chironomids in seven shallow lakes and one deep lake from the Plymouth Aquifer (Massachusetts, USA) and aims to assess the influence of water depth on chironomid assemblages within a lake. Multiple samples were taken per lake in order to study the distribution of fossil chironomid head capsules within a lake. Within each lake, the chironomid assemblages are diverse and the changes that are seen in the assemblages are strongly related to changes in water depth. Several thresholds (i.e., where species turnover abruptly changes) are identified in the assemblages, and most lakes show abrupt changes at about 1–2 and 5–7 m water depth. In the deep lake, changes also occur at 9.6 and 15 m depth. The distribution of many individual taxa is significantly correlated to water depth, and we show that the identification of different taxa within the genus Tanytarsus is important because different morphotypes show different responses to water depth. We conclude that the chironomid fauna is sensitive to changes in lake level, indicating that fossil chironomid assemblages can be used as a tool for quantitative reconstruction of lake level changes.",
"title": ""
},
{
"docid": "neg:1840330_6",
"text": "Deep neural networks (DNNs) are widely used in data analytics, since they deliver state-of-the-art accuracies. Binarized neural networks (BNNs) are recently proposed optimized variant of DNNs. BNNs constraint network weight and/or neuron value to either +1 or −1, which is representable in 1 bit. This leads to dramatic algorithm efficiency improvement, due to reduction in the memory and computational demands. This paper evaluates the opportunity to further improve the execution efficiency of BNNs through hardware acceleration. We first proposed a BNN hardware accelerator design. Then, we implemented the proposed accelerator on Aria 10 FPGA as well as 14-nm ASIC, and compared them against optimized software on Xeon server CPU, Nvidia Titan X server GPU, and Nvidia TX1 mobile GPU. Our evaluation shows that FPGA provides superior efficiency over CPU and GPU. Even though CPU and GPU offer high peak theoretical performance, they are not as efficiently utilized since BNNs rely on binarized bit-level operations that are better suited for custom hardware. Finally, even though ASIC is still more efficient, FPGA can provide orders of magnitudes in efficiency improvements over software, without having to lock into a fixed ASIC solution.",
"title": ""
},
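The bit-level arithmetic that such accelerators exploit reduces a {-1,+1} dot product to an XNOR followed by a population count. The small pure-Python illustration below only demonstrates that identity; it makes no claim about the paper's actual FPGA/ASIC datapath.

```python
def binarize(vec):
    """Pack a {-1,+1} vector into a bit mask (1 bit per element)."""
    bits = 0
    for i, v in enumerate(vec):
        if v > 0:
            bits |= 1 << i
    return bits

def bnn_dot(a_bits, w_bits, n):
    """Dot product of two {-1,+1} vectors of length n via XNOR + popcount."""
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)
    return 2 * bin(xnor).count("1") - n       # equals sum(a_i * w_i)

a = [1, -1, 1, 1, -1, -1, 1, -1]
w = [1, 1, -1, 1, -1, 1, -1, -1]
assert bnn_dot(binarize(a), binarize(w), len(a)) == sum(x * y for x, y in zip(a, w))
print(bnn_dot(binarize(a), binarize(w), len(a)))   # 0 for this pair
```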
{
"docid": "neg:1840330_7",
"text": "In this paper, we analyze and evaluate word embeddings for representation of longer texts in the multi-label document classification scenario. The embeddings are used in three convolutional neural network topologies. The experiments are realized on the Czech ČTK and English Reuters-21578 standard corpora. We compare the results of word2vec static and trainable embeddings with randomly initialized word vectors. We conclude that initialization does not play an important role for classification. However, learning of word vectors is crucial to obtain good results.",
"title": ""
},
{
"docid": "neg:1840330_8",
"text": "Agent-based modeling of human social behavior is an increasingly important research area. A key factor in human social interaction is our beliefs about others, a theory of mind. Whether we believe a message depends not only on its content but also on our model of the communicator. How we act depends not only on the immediate effect but also on how we believe others will react. In this paper, we discuss PsychSim, an implemented multiagent-based simulation tool for modeling interactions and influence. While typical approaches to such modeling have used first-order logic, PsychSim agents have their own decision-theoretic model of the world, including beliefs about its environment and recursive models of other agents. Using these quantitative models of uncertainty and preferences, we have translated existing psychological theories into a decision-theoretic semantics that allow the agents to reason about degrees of believability in a novel way. We discuss PsychSim’s underlying architecture and describe its application to a school violence scenario for illustration.",
"title": ""
},
{
"docid": "neg:1840330_9",
"text": "Before we try to specify how to give a semantic analysis of discourse, we must define what semantic analysis is and what kinds of semantic analysis can be distinguished. Such a definition will be as complex as the number of semantic theories in the various disciplines involved in the study of language: linguistics and grammar, the philosophy of language, logic, cognitive psychology, and sociology, each with several competing semantic theories. These theories will be different according to their object of analysis, their aims, and their methods. Yet, they will also have some common properties that allow us to call them semantic theories. In this chapter I first enumerate more or less intuitively a number of these common properties, then select some of them for further theoretical analysis, and finally apply the theoretical notions in actual semantic analyses of some discourse fragments. In the most general sense, semantics is a component theory within a larger semiotic theory about meaningful, symbolic, behavior. Hence we have not only a semantics of natural language utterances or acts, but also of nonverbal or paraverbal behavior, such as gestures, pictures and films, logical systems or computer languages, sign languages of the deaf, and perhaps social interaction in general. In this chapter we consider only the semantics of natural-language utterances, that is, discourses, and their component elements, such as words, phrases, clauses, sentences, paragraphs, and other identifiable discourse units. Other semiotic aspects of verbal and nonverbal communication are treated elsewhere in this Handbook. Probably the most general concept used to denote the specific object",
"title": ""
},
{
"docid": "neg:1840330_10",
"text": "Energy harvesting based on tethered kites makes use of the advantage, that these airborne wind energy systems are able to exploit higher wind speeds at higher altitudes. The setup, considered in this paper, is based on the pumping cycle, which generates energy by winching out at high tether forces, driving an electrical generator while flying crosswind and winching in at a stationary neutral position, thus leaving a net amount of generated energy. The economic operation of such airborne wind energy plants demands for a reliable control system allowing for a complete autonomous operation of cycles. This task involves the flight control of the kite as well as the operation of a winch for the tether. The focus of this paper is put on the flight control, which implements an accurate direction control towards target points allowing for eight-down pattern flights. In addition, efficient winch control strategies are provided. The paper summarises a simple comprehensible model with equations of motion in order to motivate the approach of the control system design. After an extended overview on the control system, the flight controller parts are discussed in detail. Subsequently, the winch strategies based on an optimisation scheme are presented. In order to demonstrate the real world functionality of the presented algorithms, flight data from a fully automated pumping-cycle operation of a small-scale prototype setup based on a 30 m2 kite and a 50 kW electrical motor/generator is given.",
"title": ""
},
{
"docid": "neg:1840330_11",
"text": "In a 1998 speech before the California Science Center in Los Angeles, then US VicePresident Al Gore called for a global undertaking to build a multi-faceted computing system for education and research, which he termed “Digital Earth.” The vision was that of a system providing access to what is known about the planet and its inhabitants’ activities – currently and for any time in history – via responses to queries and exploratory tools. Furthermore, it would accommodate modeling extensions for predicting future conditions. Organized efforts towards realizing that vision have diminished significantly since 2001, but progress on key requisites has been made. As the 10 year anniversary of that influential speech approaches, we re-examine it from the perspective of a systematic software design process and find the envisioned system to be in many respects inclusive of concepts of distributed geolibraries and digital atlases. A preliminary definition for a particular digital earth system as: “a comprehensive, distributed geographic information and knowledge organization system,” is offered and discussed. We suggest that resumption of earlier design and focused research efforts can and should be undertaken, and may prove a worthwhile “Grand Challenge” for the GIScience community.",
"title": ""
},
{
"docid": "neg:1840330_12",
"text": "Ž . The technology acceptance model TAM proposes that ease of use and usefulness predict applications usage. The current research investigated TAM for work-related tasks with the World Wide Web as the application. One hundred and sixty-three subjects responded to an e-mail survey about a Web site they access often in their jobs. The results support TAM. They also Ž . Ž . demonstrate that 1 ease of understanding and ease of finding predict ease of use, and that 2 information quality predicts usefulness for revisited sites. In effect, the investigation applies TAM to help Web researchers, developers, and managers understand antecedents to users’ decisions to revisit sites relevant to their jobs. q 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840330_13",
"text": "The unprecedented challenges of creating Biosphere 2, the world's first laboratory for biospherics, the study of global ecology and long-term closed ecological system dynamics, led to breakthrough developments in many fields, and a deeper understanding of the opportunities and difficulties of material closure. This paper will review accomplishments and challenges, citing some of the key research findings and publications that have resulted from the experiments in Biosphere 2. Engineering accomplishments included development of a technique for variable volume to deal with pressure differences between the facility and outside environment, developing methods of atmospheric leak detection and sealing, while achieving new standards of closure, with an annual atmospheric leakrate of less than 10%, or less than 300 ppm per day. This degree of closure permitted detailed tracking of carbon dioxide, oxygen, and trace gases such as nitrous oxide and ethylene over the seasonal variability of two years. Full closure also necessitated developing new approaches and technologies for complete air, water, and wastewater recycle and reuse within the facility. The development of a soil-based highly productive agricultural system was a first in closed ecological systems, and much was learned about managing a wide variety of crops using non-chemical means of pest and disease control. Closed ecological systems have different temporal biogeochemical cycling and ranges of atmospheric components because of their smaller reservoirs of air, water and soil, and higher concentration of biomass, and Biosphere 2 provided detailed examination and modeling of these accelerated cycles over a period of closure which measured in years. Medical research inside Biosphere 2 included the effects on humans of lowered oxygen: the discovery that human productivity can be maintained with good health with lowered atmospheric oxygen levels could lead to major economies on the design of space stations and planetary/lunar settlements. The improved health resulting from the calorie-restricted but nutrient dense Biosphere 2 diet was the first such scientifically controlled experiment with humans. The success of Biosphere 2 in creating a diversity of terrestrial and marine environments, from rainforest to coral reef, allowed detailed studies with comprehensive measurements such that the dynamics of these complex biomic systems are now better understood. The coral reef ecosystem, the largest artificial reef ever built, catalyzed methods of study now being applied to planetary coral reef systems. Restoration ecology advanced through the creation and study of the dynamics of adaptation and self-organization of the biomes in Biosphere 2. The international interest that Biosphere 2 generated has given new impetus to the public recognition of the sciences of biospheres (biospherics), biomes and closed ecological life systems. The facility, although no longer a materially-closed ecological system, is being used as an educational facility by Columbia University as an introduction to the study of the biosphere and complex system ecology and for carbon dioxide impacts utilizing the complex ecosystems created in Biosphere '. 
The many lessons learned from Biosphere 2 are being used by its key team of creators in their design and operation of a laboratory-sized closed ecological system, the Laboratory Biosphere, in operation as of March 2002, and for the design of a Mars on Earth(TM) prototype life support system for manned missions to Mars and Mars surface habitats. Biosphere 2 is an important foundation for future advances in biospherics and closed ecological system research.",
"title": ""
},
{
"docid": "neg:1840330_14",
"text": "The growth of desktop 3-D printers is driving an interest in recycled 3-D printer filament to reduce costs of distributed production. Life cycle analysis studies were performed on the recycling of high density polyethylene into filament suitable for additive layer manufacturing with 3-D printers. The conventional centralized recycling system for high population density and low population density rural locations was compared to the proposed in home, distributed recycling system. This system would involve shredding and then producing filament with an open-source plastic extruder from postconsumer plastics and then printing the extruded filament into usable, value-added parts and products with 3-D printers such as the open-source self replicating rapid prototyper, or RepRap. The embodied energy and carbon dioxide emissions were calculated for high density polyethylene recycling using SimaPro 7.2 and the database EcoInvent v2.0. The results showed that distributed recycling uses less embodied energy than the best-case scenario used for centralized recycling. For centralized recycling in a low-density population case study involving substantial embodied energy use for transportation and collection these savings for distributed recycling were found to extend to over 80%. If the distributed process is applied to the U.S. high density polyethylene currently recycled, more than 100 million MJ of energy could be conserved per annum along with the concomitant significant reductions in greenhouse gas emissions. It is concluded that with the open-source 3-D printing network expanding rapidly the potential for widespread adoption of in-home recycling of post-consumer plastic represents a novel path to a future of distributed manufacturing appropriate for both the developed and developing world with lower environmental impacts than the current system.",
"title": ""
},
{
"docid": "neg:1840330_15",
"text": "We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD dual-core and Intel quad-core designs, the heterogeneous STI Cell, as well as the first scientific study of the highly multithreaded Sun Niagara2. We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural tradeoffs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.",
"title": ""
},
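For orientation, the kernel being optimized above, sparse matrix-vector multiply over a CSR-format matrix, is tiny in reference form; the multicore optimizations the paper studies (blocking, threading, SIMD, format tuning) are deliberately omitted from this sketch.

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x with A stored in compressed sparse row (CSR) format."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# A = [[10, 0, 0],
#      [ 0, 2, 3],
#      [ 0, 0, 4]]
values = np.array([10.0, 2.0, 3.0, 4.0])
col_idx = np.array([0, 1, 2, 2])
row_ptr = np.array([0, 1, 3, 4])
x = np.array([1.0, 1.0, 2.0])
print(spmv_csr(values, col_idx, row_ptr, x))   # [10.  8.  8.]
```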
{
"docid": "neg:1840330_16",
"text": "We introduce several probabilistic models for learning the lexicon of a semantic parser. Lexicon learning is the first step of training a semantic parser for a new application domain and the quality of the learned lexicon significantly affects both the accuracy and efficiency of the final semantic parser. Existing work on lexicon learning has focused on heuristic methods that lack convergence guarantees and require significant human input in the form of lexicon templates or annotated logical forms. In contrast, our probabilistic models are trained directly from question/answer pairs using EM and our simplest model has a concave objective that guarantees convergence to a global optimum. An experimental evaluation on a set of 4th grade science questions demonstrates that our models improve semantic parser accuracy (35-70% error reduction) and efficiency (4-25x more sentences per second) relative to prior work despite using less human input. Our models also obtain competitive results on GEO880 without any datasetspecific engineering.",
"title": ""
},
{
"docid": "neg:1840330_17",
"text": "Logistics demand forecasting is important for investment decision-making of infrastructure and strategy programming of the logistics industry. In this paper, a hybrid method which combines the Grey Model, artificial neural networks and other techniques in both learning and analyzing phases is proposed to improve the precision and reliability of forecasting. After establishing a learning model GNNM(1,8) for road logistics demand forecasting, we chose road freight volume as target value and other economic indicators, i.e. GDP, production value of primary industry, total industrial output value, outcomes of tertiary industry, retail sale of social consumer goods, disposable personal income, and total foreign trade value as the seven key influencing factors for logistics demand. Actual data sequences of the province of Zhejiang from years 1986 to 2008 were collected as training and test-proof samples. By comparing the forecasting results, it turns out that GNNM(1,8) is an appropriate forecasting method to yield higher accuracy and lower mean absolute percentage errors than other individual models for short-term logistics demand forecasting.",
"title": ""
},
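The grey-model component referenced above is easiest to see in its one-variable form. Below is a standard GM(1,1) forecasting sketch in Python; the paper's GNNM(1,8) couples a grey model over eight variables with a neural network, which this toy example does not attempt to reproduce, and the input series is invented.

```python
import numpy as np

def gm11_forecast(x, steps=3):
    """Classic GM(1,1) grey forecast; a one-variable relative of GNNM(1,8)."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                                   # accumulated generating operation (AGO)
    z = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]     # develop coefficient a, grey input b
    k = np.arange(1, len(x) + steps)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a    # fitted/forecast AGO series
    x_hat = np.diff(np.concatenate([[x[0]], x1_hat]))   # inverse AGO
    return x_hat[len(x) - 1:]                           # only the forecast horizon

# Invented freight-volume-like series, purely for illustration
print(gm11_forecast([112.0, 119.0, 127.0, 135.0, 144.0], steps=3))
```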
{
"docid": "neg:1840330_18",
"text": "Nanotechnology offers many potential benefits to cancer research through passive and active targeting, increased solubility/bioavailablility, and novel therapies. However, preclinical characterization of nanoparticles is complicated by the variety of materials, their unique surface properties, reactivity, and the task of tracking the individual components of multicomponent, multifunctional nanoparticle therapeutics in in vivo studies. There are also regulatory considerations and scale-up challenges that must be addressed. Despite these hurdles, cancer research has seen appreciable improvements in efficacy and quite a decrease in the toxicity of chemotherapeutics because of 'nanotech' formulations, and several engineered nanoparticle clinical trials are well underway. This article reviews some of the challenges and benefits of nanomedicine for cancer therapeutics and diagnostics.",
"title": ""
},
{
"docid": "neg:1840330_19",
"text": "Summary form only given. Most of current job scheduling systems for supercomputers and clusters provide batch queuing support. With the development of metacomputing and grid computing, users require resources managed by multiple local job schedulers. Advance reservations are becoming essential for job scheduling systems to be utilized within a large-scale computing environment with geographically distributed resources. COSY is a lightweight implementation of such a local job scheduler with support for both queue scheduling and advance reservations. COSY queue scheduling utilizes the FCFS algorithm with backfilling mechanisms and priority management. Advance reservations with COSY can provide effective QoS support for exact start time and latest completion time. Scheduling polices are defined to reject reservations with too short notice time so that there is no start time advantage to making a reservation over submitting to a queue. Further experimental results show that as a larger percentage of reservation requests are involved, a longer mandatory shortest notice time for advance reservations must be applied in order not to sacrifice queue scheduling efficiency.",
"title": ""
}
] |
1840331 | Congestion Avoidance with Incremental Filter Aggregation in Content-Based Routing Networks | [
{
"docid": "pos:1840331_0",
"text": "Workflow management systems are traditionally centralized, creating a single point of failure and a scalability bottleneck. In collaboration with Cybermation, Inc., we have developed a content-based publish/subscribe platform, called PADRES, which is a distributed middleware platform with features inspired by the requirements of workflow management and business process execution. These features constitute original additions to publish/subscribe systems and include an expressive subscription language, composite subscription processing, a rulebased matching and routing mechanism, historc, query-based data access, and the support for the decentralized execution of business process specified in XML. PADRES constitutes the basis for the next generation of enterprise management systems developed by Cybermation, Inc., including business process automation, monitoring, and execution applications.",
"title": ""
}
] | [
{
"docid": "neg:1840331_0",
"text": "Millimeter wave (mmWave) is a promising approach for the fifth generation cellular networks. It has a large available bandwidth and high gain antennas, which can offer interference isolation and overcome high frequency-dependent path loss. In this paper, we study the non-uniform heterogeneous mmWave network. Non-uniform heterogeneous networks are more realistic in practical scenarios than traditional independent homogeneous Poisson point process (PPP) models. We derive the signal-to-noise-plus-interference ratio (SINR) and rate coverage probabilities for a two-tier non-uniform millimeter-wave heterogeneous cellular network, where the macrocell base stations (MBSs) are deployed as a homogeneous PPP and the picocell base stations (PBSs) are modeled as a Poisson hole process (PHP), dependent on the MBSs. Using tools from stochastic geometry, we derive the analytical results for the SINR and rate coverage probabilities. The simulation results validate the analytical expressions. Furthermore, we find that there exists an optimum density of the PBS that achieves the best coverage probability and the change rule with different radii of the exclusion region. Finally, we show that as expected, mmWave outperforms microWave cellular network in terms of rate coverage probability for this system.",
"title": ""
},
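Coverage probabilities of the kind derived above are routinely cross-checked by Monte Carlo simulation of the point process. The following single-tier sketch (arbitrary parameter values, Rayleigh fading, strongest-BS association) only illustrates that validation step; it does not implement the paper's two-tier PPP/PHP mmWave model.

```python
import numpy as np

rng = np.random.default_rng(42)

def sinr_coverage(lam=1e-5, alpha=4.0, p_tx=1.0, noise=1e-13,
                  threshold_db=0.0, radius=5e3, trials=2000):
    """Estimate P[SINR > threshold] at the origin for one tier of PPP base stations."""
    thr = 10 ** (threshold_db / 10)
    covered = 0
    for _ in range(trials):
        n = rng.poisson(lam * np.pi * radius ** 2)            # number of BSs in the disc
        if n == 0:
            continue
        r = radius * np.sqrt(rng.random(n))                   # uniform locations -> radii
        rx = p_tx * r ** (-alpha) * rng.exponential(1.0, n)   # path loss + Rayleigh fading
        s = rx.max()                                          # associate with the strongest BS
        interference = rx.sum() - s
        if s / (interference + noise) > thr:
            covered += 1
    return covered / trials

print(sinr_coverage())
```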
{
"docid": "neg:1840331_1",
"text": "This paper presents a new register assignment heuristic for procedures in SSA Form, whose interference graphs are chordal; the heuristic is called optimistic chordal coloring (OCC). Previous register assignment heuristics eliminate copy instructions via coalescing, in other words, merging nodes in the interference graph. Node merging, however, can not preserve the chordal graph property, making it unappealing for SSA-based register allocation. OCC is based on graph coloring, but does not employ coalescing, and, consequently, preserves graph chordality, and does not increase its chromatic number; in this sense, OCC is conservative as well as optimistic. OCC is observed to eliminate at least as many dynamically executed copy instructions as iterated register coalescing (IRC) for a set of chordal interference graphs generated from several Mediabench and MiBench applications. In many cases, OCC and IRC were able to find optimal or near-optimal solutions for these graphs. OCC ran 1.89x faster than IRC, on average.",
"title": ""
},
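Chordal interference graphs can be colored optimally by greedy coloring along a maximum cardinality search order, which is the baseline such allocators build on. The sketch below shows only that baseline; OCC's handling of copies and spill concerns is not represented, and the example graph is arbitrary.

```python
def max_cardinality_search(adj):
    """Maximum cardinality search; its reverse is a perfect elimination ordering."""
    weight = {v: 0 for v in adj}
    order = []
    while weight:
        v = max(weight, key=weight.get)           # vertex with most already-ordered neighbors
        order.append(v)
        del weight[v]
        for u in adj[v]:
            if u in weight:
                weight[u] += 1
    return order

def greedy_color(adj, order):
    """Greedy coloring in MCS order is optimal on chordal graphs."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(len(adj)) if c not in used)
    return color

# Small chordal example: a triangle {a, b, c} with a pendant vertex d
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(greedy_color(adj, max_cardinality_search(adj)))   # uses 3 colors, the clique number
```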
{
"docid": "neg:1840331_2",
"text": "The amount of information in medical publications continues to increase at a tremendous rate. Systematic reviews help to process this growing body of information. They are fundamental tools for evidence-based medicine. In this paper, we show that automatic text classification can be useful in building systematic reviews for medical topics to speed up the reviewing process. We propose a per-question classification method that uses an ensemble of classifiers that exploit the particular protocol of a systematic review. We also show that when integrating the classifier in the human workflow of building a review the per-question method is superior to the global method. We test several evaluation measures on a real dataset.",
"title": ""
},
{
"docid": "neg:1840331_3",
"text": "The present article proposes and describes a new ZCS non-isolated bidirectional buck-boost DC-DC converter for energy storage applications in electric vehicles. Usually, the conventional converters are adapted with an auxiliary resonant cell to provide the zero current switching turn-on/turn-off condition for the main switching devices. The advantages of proposed converter has reduced switching losses, reduced component count and improved efficiency. The proposed converter operates either in boost or buck mode. This paper mainly deals with the operating principles, analysis and design simulations of the proposed converter in order to prove the better soft-switching capability, reduced switching losses and efficiency improvement than the conventional converter.",
"title": ""
},
{
"docid": "neg:1840331_4",
"text": "PURPOSE\nTo determine if noise damage in the organ of Corti is different in the low- and high-frequency regions of the cochlea.\n\n\nMATERIALS AND METHODS\nChinchillas were exposed for 2 to 432 days to a 0.5 (low-frequency) or 4 kHz (high-frequency) octave band of noise at 47 to 95 dB sound pressure level. Auditory thresholds were determined before, during, and after the noise exposure. The cochleas were examined microscopically as plastic-embedded flat preparations. Missing cells were counted, and the sequence of degeneration was determined as a function of recovery time (0-30 days).\n\n\nRESULTS\nWith high-frequency noise, primary damage began as small focal losses of outer hair cells in the 4-8 kHz region. With continued exposure, damage progressed to involve loss of an entire segment of the organ of Corti, along with adjacent myelinated nerve fibers. Much of the latter loss is secondary to the intermixing of cochlear fluids through the damaged reticular lamina. With low-frequency noise, primary damage appeared as outer hair cell loss scattered over a broad area in the apex. With continued exposure, additional apical outer hair cells degenerated, while supporting cells, inner hair cells, and nerve fibers remained intact. Continued exposure to low-frequency noise also resulted in focal lesions in the basal cochlea that were indistinguishable from those resulting from exposure to high-frequency noise.\n\n\nCONCLUSIONS\nThe patterns of cochlear damage and their relation to functional measures of hearing in noise-exposed chinchillas are similar to those seen in noise-exposed humans. Thus, the chinchilla is an excellent model for studying noise effects, with the long-term goal of identifying ways to limit noise-induced hearing loss in humans.",
"title": ""
},
{
"docid": "neg:1840331_5",
"text": "In recognizing the importance of educating aspiring scientists in the responsible conduct of research (RCR), the Office of Research Integrity (ORI) began sponsoring the creation of instructional resources to address this pressing need in 2002. The present guide on avoiding plagiarism and other inappropriate writing practices was created to help students, as well as professionals, identify and prevent such malpractices and to develop an awareness of ethical writing and authorship. This guide is one of the many products stemming from ORI’s effort to promote the RCR.",
"title": ""
},
{
"docid": "neg:1840331_6",
"text": "Dictionary methods for cross-language information retrieval give performance below that for mono-lingual retrieval. Failure to translate multi-term phrases has km shown to be one of the factors responsible for the errors associated with dictionary methods. First, we study the importance of phrasaI translation for this approach. Second, we explore the role of phrases in query expansion via local context analysis and local feedback and show how they can be used to significantly reduce the error associated with automatic dictionary translation.",
"title": ""
},
{
"docid": "neg:1840331_7",
"text": "We show how machine vision, learning, and planning can be combined to solve hierarchical consensus tasks. Hierarchical consensus tasks seek correct answers to a hierarchy of subtasks, where branching depends on answers at preceding levels of the hierarchy. We construct a set of hierarchical classification models that aggregate machine and human effort on different subtasks and use these inferences in planning. Optimal solution of hierarchical tasks is intractable due to the branching of task hierarchy and the long horizon of these tasks. We study Monte Carlo planning procedures that can exploit task structure to constrain the policy space for tractability. We evaluate the procedures on data collected from Galaxy Zoo II in allocating human effort and show that significant gains can be achieved.",
"title": ""
},
{
"docid": "neg:1840331_8",
"text": "We have been developing an exoskeleton robot (ExoRob) for assisting daily upper limb movements (i.e., shoulder, elbow and wrist). In this paper we have focused on the development of a 2DOF ExoRob to rehabilitate elbow joint flexion/extension and shoulder joint internal/external rotation, as a step toward the development of a complete (i.e., 3DOF) shoulder motion assisted exoskeleton robot. The proposed ExoRob is designed to be worn on the lateral side of the upper arm in order to provide naturalistic movements at the level of elbow (flexion/extension) and shoulder joint internal/external rotation. This paper also focuses on the modeling and control of the proposed ExoRob. A kinematic model of ExoRob has been developed based on modified Denavit-Hartenberg notations. In dynamic simulations of the proposed ExoRob, a novel nonlinear sliding mode control technique with exponential reaching law and computed torque control technique is employed, where trajectory tracking that corresponds to typical rehab (passive) exercises has been carried out to evaluate the effectiveness of the developed model and controller. Simulated results show that the controller is able to drive the ExoRob efficiently to track the desired trajectories, which in this case consisted in passive arm movements. Such movements are used in rehabilitation and could be performed very efficiently with the developed ExoRob and the controller. Experiments were carried out to validate the simulated results as well as to evaluate the performance of the controller.",
"title": ""
},
{
"docid": "neg:1840331_9",
"text": "We study the online market for peer-to-peer (P2P) lending, in which individuals bid on unsecured microloans sought by other individual borrowers. Using a large sample of consummated and failed listings from the largest online P2P lending marketplace Prosper.com, we test whether social networks lead to better lending outcomes, focusing on the distinction between the structural and relational aspects of networks. While the structural aspects have limited to no significance, the relational aspects are consistently significant predictors of lending outcomes, with a striking gradation based on the verifiability and visibility of a borrower’s social capital. Stronger and more verifiable relational network measures are associated with a higher likelihood of a loan being funded, a lower risk of default, and lower interest rates. We discuss the implications of our findings for financial disintermediation and the design of decentralized electronic lending markets. This version: October 2009 ∗Decision, Operations and Information Technologies Department, **Finance Department. All the authors are at Robert H. Smith School of Business, University of Maryland, College Park, MD 20742. Mingfeng Lin can be reached at mingfeng@rhsmith.umd.edu. Prabhala can be reached at prabhala@rhsmith.umd.edu. Viswanathan can be reached at sviswana@rhsmith.umd.edu. The authors thank Ethan Cohen-Cole, Sanjiv Das, Jerry Hoberg, Dalida Kadyrzhanova, Nikunj Kapadia, De Liu, Vojislav Maksimovic, Gordon Phillips, Kislaya Prasad, Galit Shmueli, Kelly Shue, and seminar participants at Carnegie Mellon University, University of Utah, the 2008 Summer Doctoral Program of the Oxford Internet Institute, the 2008 INFORMS Annual Conference, the Workshop on Information Systems and Economics (Paris), and Western Finance Association for their valuable comments and suggestions. Mingfeng Lin also thanks to the Ewing Marion Kauffman Foundation for the 2009 Dissertation Fellowship Award, and to the Economic Club of Washington D.C. (2008) for their generous financial support. We also thank Prosper.com for making the data for the study available. The contents of this publication are the sole responsibility of the authors. Judging Borrowers By The Company They Keep: Social Networks and Adverse Selection in Online Peer-to-Peer Lending",
"title": ""
},
{
"docid": "neg:1840331_10",
"text": "This paper presents LiteOS, a multi-threaded operating system that provides Unix-like abstractions for wireless sensor networks. Aiming to be an easy-to-use platform, LiteOS offers a number of novel features, including: (1) a hierarchical file system and a wireless shell interface for user interaction using UNIX-like commands; (2) kernel support for dynamic loading and native execution of multithreaded applications; and (3) online debugging, dynamic memory, and file system assisted communication stacks. LiteOS also supports software updates through a separation between the kernel and user applications, which are bridged through a suite of system calls. Besides the features that have been implemented, we also describe our perspective on LiteOS as an enabling platform. We evaluate the platform experimentally by measuring the performance of common tasks, and demonstrate its programmability through twenty-one example applications.",
"title": ""
},
{
"docid": "neg:1840331_11",
"text": "The dorsolateral prefrontal cortex (DLPFC) plays a crucial role in working memory. Notably, persistent activity in the DLPFC is often observed during the retention interval of delayed response tasks. The code carried by the persistent activity remains unclear, however. We critically evaluate how well recent findings from functional magnetic resonance imaging studies are compatible with current models of the role of the DLFPC in working memory. These new findings suggest that the DLPFC aids in the maintenance of information by directing attention to internal representations of sensory stimuli and motor plans that are stored in more posterior regions.",
"title": ""
},
{
"docid": "neg:1840331_12",
"text": "The topic of this thesis is fraud detection in mobile communications networks by means of user profiling and classification techniques. The goal is to first identify relevant user groups based on call data and then to assign a user to a relevant group. Fraud may be defined as a dishonest or illegal use of services, with the intention to avoid service charges. Fraud detection is an important application, since network operators lose a relevant portion of their revenue to fraud. Whereas the intentions of the mobile phone users cannot be observed, it is assumed that the intentions are reflected in the call data. The call data is subsequently used in describing behavioral patterns of users. Neural networks and probabilistic models are employed in learning these usage patterns from call data. These models are used either to detect abrupt changes in established usage patterns or to recognize typical usage patterns of fraud. The methods are shown to be effective in detecting fraudulent behavior by empirically testing the methods with data from real mobile communications networks. © All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of the author.",
"title": ""
},
{
"docid": "neg:1840331_13",
"text": "There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important to ensure algorithmic fairness, identify potential bias/problems in the training data, and to ensure that the algorithms perform as expected. However, explanations produced by these systems is neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we provide our definition of explainability and show how it can be used to classify existing literature. We discuss why current approaches to explanatory methods especially for deep neural networks are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.",
"title": ""
},
{
"docid": "neg:1840331_14",
"text": "Educational data mining concerns with developing methods for discovering knowledge from data that come from educational domain. In this paper we used educational data mining to improve graduate students’ performance, and overcome the problem of low grades of graduate students. In our case study we try to extract useful knowledge from graduate students data collected from the college of Science and Technology – Khanyounis. The data include fifteen years period [1993-2007]. After preprocessing the data, we applied data mining techniques to discover association, classification, clustering and outlier detection rules. In each of these four tasks, we present the extracted knowledge and describe its importance in educational domain.",
"title": ""
},
{
"docid": "neg:1840331_15",
"text": "Controlled power system separation, which separates the transmission system into islands in a controlled manner, is considered the final resort against a blackout under severe disturbances, e.g., cascading events. Three critical problems of controlled separation are where and when to separate and what to do after separation, which are rarely studied together. They are addressed in this paper by a proposed unified controlled separation scheme based on synchrophasors. The scheme decouples the three problems by partitioning them into sub-problems handled strategically in three time stages: the Offline Analysis stage determines elementary generator groups, optimizes potential separation points in between, and designs post-separation control strategies; the Online Monitoring stage predicts separation boundaries by modal analysis on synchrophasor data; the Real-time Control stage calculates a synchrophasor-based separation risk index for each boundary to predict the time to perform separation. The proposed scheme is demonstrated on a 179-bus power system by case studies.",
"title": ""
},
{
"docid": "neg:1840331_16",
"text": "OBJECTIVE\nTo discuss the role of proprioception in motor control and in activation of the dynamic restraints for functional joint stability.\n\n\nDATA SOURCES\nInformation was drawn from an extensive MEDLINE search of the scientific literature conducted in the areas of proprioception, motor control, neuromuscular control, and mechanisms of functional joint stability for the years 1970-1999.\n\n\nDATA SYNTHESIS\nProprioception is conveyed to all levels of the central nervous system. It serves fundamental roles for optimal motor control and sensorimotor control over the dynamic restraints.\n\n\nCONCLUSIONS/APPLICATIONS\nAlthough controversy remains over the precise contributions of specific mechanoreceptors, proprioception as a whole is an essential component to controlling activation of the dynamic restraints and motor control. Enhanced muscle stiffness, of which muscle spindles are a crucial element, is argued to be an important characteristic for dynamic joint stability. Articular mechanoreceptors are attributed instrumental influence over gamma motor neuron activation, and therefore, serve to indirectly influence muscle stiffness. In addition, articular mechanoreceptors appear to influence higher motor center control over the dynamic restraints. Further research conducted in these areas will continue to assist in providing a scientific basis to the selection and development of clinical procedures.",
"title": ""
},
{
"docid": "neg:1840331_17",
"text": "The Android packaging model offers ample opportunities for malware writers to piggyback malicious code in popular apps, which can then be easily spread to a large user base. Although recent research has produced approaches and tools to identify piggybacked apps, the literature lacks a comprehensive investigation into such phenomenon. We fill this gap by: 1) systematically building a large set of piggybacked and benign apps pairs, which we release to the community; 2) empirically studying the characteristics of malicious piggybacked apps in comparison with their benign counterparts; and 3) providing insights on piggybacking processes. Among several findings providing insights analysis techniques should build upon to improve the overall detection and classification accuracy of piggybacked apps, we show that piggybacking operations not only concern app code, but also extensively manipulates app resource files, largely contradicting common beliefs. We also find that piggybacking is done with little sophistication, in many cases automatically, and often via library code.",
"title": ""
},
{
"docid": "neg:1840331_18",
"text": "This study reports spore germination, early gametophyte development and change in the reproductive phase of Drynaria fortunei, a medicinal fern, in response to changes in pH and light spectra. Germination of D. fortunei spores occurred on a wide range of pH from 3.7 to 9.7. The highest germination (63.3%) occurred on ½ strength Murashige and Skoog basal medium supplemented with 2% sucrose at pH 7.7 under white light condition. Among the different light spectra tested, red, far-red, blue, and white light resulted in 71.3, 42.3, 52.7, and 71.0% spore germination, respectively. There were no morphological differences among gametophytes grown under white and blue light. Elongated or filamentous but multiseriate gametophytes developed under red light, whereas under far-red light gametophytes grew as uniseriate filaments consisting of mostly elongated cells. Different light spectra influenced development of antheridia and archegonia in the gametophytes. Gametophytes gave rise to new gametophytes and developed antheridia and archegonia after they were transferred to culture flasks. After these gametophytes were transferred to plastic tray cells with potting mix of tree fern trunk fiber mix (TFTF mix) and peatmoss the highest number of sporophytes was found. Sporophytes grown in pots developed rhizomes.",
"title": ""
},
{
"docid": "neg:1840331_19",
"text": "In this paper, we summarize the human emotion recognition using different set of electroencephalogram (EEG) channels using discrete wavelet transform. An audio-visual induction based protocol has been designed with more dynamic emotional content for inducing discrete emotions (disgust, happy, surprise, fear and neutral). EEG signals are collected using 64 electrodes from 20 subjects and are placed over the entire scalp using International 10-10 system. The raw EEG signals are preprocessed using Surface Laplacian (SL) filtering method and decomposed into three different frequency bands (alpha, beta and gamma) using Discrete Wavelet Transform (DWT). We have used “db4” wavelet function for deriving a set of conventional and modified energy based features from the EEG signals for classifying emotions. Two simple pattern classification methods, K Nearest Neighbor (KNN) and Linear Discriminant Analysis (LDA) methods are used and their performances are compared for emotional states classification. The experimental results indicate that, one of the proposed features (ALREE) gives the maximum average classification rate of 83.26% using KNN and 75.21% using LDA compared to those of conventional features. Finally, we present the average classification rate and subsets of emotions classification rate of these two different classifiers for justifying the performance of our emotion recognition system.",
"title": ""
}
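As a loose illustration of the feature-extraction step described above, the sketch below decomposes one EEG channel with a "db4" wavelet and computes simple relative band energies. The sampling rate, decomposition level, and the mapping of detail levels to alpha/beta/gamma bands are assumptions, not the paper's exact parameters.

```python
import numpy as np
import pywt

def relative_band_energies(signal, wavelet="db4", level=4):
    """Relative energy of each detail band from a multilevel DWT of one channel."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)        # [cA_L, cD_L, ..., cD_1]
    energies = np.array([np.sum(c ** 2) for c in coeffs[1:]])  # detail-band energies only
    return energies / energies.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal(1024)     # stand-in for one preprocessed EEG channel
    print(relative_band_energies(eeg))
```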
] |
1840332 | CAAD: Computer Architecture for Autonomous Driving | [
{
"docid": "pos:1840332_0",
"text": "In order to achieve autonomous operation of a vehicle in urban situations with unpredictable traffic, several realtime systems must interoperate, including environment perception, localization, planning, and control. In addition, a robust vehicle platform with appropriate sensors, computational hardware, networking, and software infrastructure is essential.",
"title": ""
},
{
"docid": "pos:1840332_1",
"text": "In this paper, we study the challenging problem of tracking the trajectory of a moving object in a video with possibly very complex background. In contrast to most existing trackers which only learn the appearance of the tracked object online, we take a different approach, inspired by recent advances in deep learning architectures, by putting more emphasis on the (unsupervised) feature learning problem. Specifically, by using auxiliary natural images, we train a stacked denoising autoencoder offline to learn generic image features that are more robust against variations. This is then followed by knowledge transfer from offline training to the online tracking process. Online tracking involves a classification neural network which is constructed from the encoder part of the trained autoencoder as a feature extractor and an additional classification layer. Both the feature extractor and the classifier can be further tuned to adapt to appearance changes of the moving object. Comparison with the state-of-the-art trackers on some challenging benchmark video sequences shows that our deep learning tracker is more accurate while maintaining low computational cost with real-time performance when our MATLAB implementation of the tracker is used with a modest graphics processing unit (GPU).",
"title": ""
}
] | [
{
"docid": "neg:1840332_0",
"text": "A linear differential equation with rational function coefficients has a Bessel type solution when it is solvable in terms of <i>B</i><sub><i>v</i></sub>(<i>f</i>), <i>B</i><sub><i>v</i>+1</sub>(<i>f</i>). For second order equations, with rational function coefficients, <i>f</i> must be a rational function or the square root of a rational function. An algorithm was given by Debeerst, van Hoeij, and Koepf, that can compute Bessel type solutions if and only if <i>f</i> is a rational function. In this paper we extend this work to the square root case, resulting in a complete algorithm to find all Bessel type solutions.",
"title": ""
},
{
"docid": "neg:1840332_1",
"text": "The geographical properties of words have recently begun to be exploited for geolocating documents based solely on their text, often in the context of social media and online content. One common approach for geolocating texts is rooted in information retrieval. Given training documents labeled with latitude/longitude coordinates, a grid is overlaid on the Earth and pseudo-documents constructed by concatenating the documents within a given grid cell; then a location for a test document is chosen based on the most similar pseudo-document. Uniform grids are normally used, but they are sensitive to the dispersion of documents over the earth. We define an alternative grid construction using k-d trees that more robustly adapts to data, especially with larger training sets. We also provide a better way of choosing the locations for pseudo-documents. We evaluate these strategies on existing Wikipedia and Twitter corpora, as well as a new, larger Twitter corpus. The adaptive grid achieves competitive results with a uniform grid on small training sets and outperforms it on the large Twitter corpus. The two grid constructions can also be combined to produce consistently strong results across all training sets.",
"title": ""
},
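The adaptive grid described above can be pictured with a small recursive split of geotagged documents on median coordinates. The sketch below is a hypothetical, simplified k-d-tree bucketing (coordinates and bucket size are made up); in the paper's setting, the documents falling into each leaf cell would be concatenated into a pseudo-document.

```python
def kd_cells(points, bucket_size=2, depth=0):
    """Recursively split (lat, lon) points on the median; return leaf buckets."""
    if len(points) <= bucket_size:
        return [points]
    axis = depth % 2                          # alternate latitude / longitude splits
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return (kd_cells(pts[:mid], bucket_size, depth + 1) +
            kd_cells(pts[mid:], bucket_size, depth + 1))

if __name__ == "__main__":
    docs = [(48.85, 2.35), (51.51, -0.13), (40.71, -74.01),
            (34.05, -118.24), (35.68, 139.69), (52.52, 13.40)]   # toy document locations
    for cell in kd_cells(docs):
        print(cell)
```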
{
"docid": "neg:1840332_2",
"text": "In wireless sensor networks (WSNs), long lifetime requirement of different applications and limited energy storage capability of sensor nodes has led us to find out new horizons for reducing power consumption upon nodes. To increase sensor node's lifetime, circuit and protocols have to be energy efficient so that they can make a priori reactions by estimating and predicting energy consumption. The goal of this study is to present and discuss several strategies such as power-aware protocols, cross-layer optimization, and harvesting technologies used to alleviate power consumption constraint in WSNs.",
"title": ""
},
{
"docid": "neg:1840332_3",
"text": "Plants are widely used in many indigenous systems of medicine for therapeutic purposes and are increasingly becoming popular in modern society as alternatives to synthetic medicines. Bioactive principles are derived from the products of plant primary metabolites, which are associated with the process of photosynthesis. The present review highlighted the chemical diversity and medicinal potentials of bioactive principles as well inherent toxicity concerns associated with the use of these plant products, which are of relevance to the clinician, pharmacist or toxicologist. Plant materials are composed of vast array of bioactive principles of which their isolation, identification and characterization for analytical evaluation requires expertise with cutting edge analytical protocols and instrumentations. Bioactive principles are responsible for the therapeutic activities of medicinal plants and provide unlimited opportunities for new drug leads because of their unmatched availability and chemical diversity. For the most part, the beneficial or toxic outcomes of standardized plant extracts depend on the chemical peculiarities of the containing bioactive principles.",
"title": ""
},
{
"docid": "neg:1840332_4",
"text": "The security of today's Web rests in part on the set of X.509 certificate authorities trusted by each user's browser. Users generally do not themselves configure their browser's root store but instead rely upon decisions made by the suppliers of either the browsers or the devices upon which they run. In this work we explore the nature and implications of these trust decisions for Android users. Drawing upon datasets collected by Netalyzr for Android and ICSI's Certificate Notary, we characterize the certificate root store population present in mobile devices in the wild. Motivated by concerns that bloated root stores increase the attack surface of mobile users, we report on the interplay of certificate sets deployed by the device manufacturers, mobile operators, and the Android OS. We identify certificates installed exclusively by apps on rooted devices, thus breaking the audited and supervised root store model, and also discover use of TLS interception via HTTPS proxies employed by a market research company.",
"title": ""
},
{
"docid": "neg:1840332_5",
"text": "Due to the unavailable GPS signals in indoor environments, indoor localization has become an increasingly heated research topic in recent years. Researchers in robotics community have tried many approaches, but this is still an unsolved problem considering the balance between performance and cost. The widely deployed low-cost WiFi infrastructure provides a great opportunity for indoor localization. In this paper, we develop a system for WiFi signal strength-based indoor localization and implement two approaches. The first is improved KNN algorithm-based fingerprint matching method, and the other is the Gaussian Process Regression (GPR) with Bayes Filter approach. We conduct experiments to compare the improved KNN algorithm with the classical KNN algorithm and evaluate the localization performance of the GPR with Bayes Filter approach. The experiment results show that the improved KNN algorithm can bring enhancement for the fingerprint matching method compared with the classical KNN algorithm. In addition, the GPR with Bayes Filter approach can provide about 2m localization accuracy for our test environment.",
"title": ""
},
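The KNN fingerprint-matching step described above can be sketched in a few lines. The radio map, access-point count, and K below are invented; a real system would also handle missing APs and the Gaussian Process Regression with Bayes Filter variant the passage mentions.

```python
import numpy as np

def knn_localize(radio_map, positions, rss_query, k=3):
    """radio_map: (n_points, n_aps) RSS fingerprints; positions: (n_points, 2) coordinates."""
    dists = np.linalg.norm(radio_map - rss_query, axis=1)   # Euclidean distance in signal space
    idx = np.argsort(dists)[:k]                             # K closest fingerprints
    weights = 1.0 / (dists[idx] + 1e-6)                     # inverse-distance weighting
    weights /= weights.sum()
    return weights @ positions[idx]                         # weighted centroid estimate

if __name__ == "__main__":
    radio_map = np.array([[-40, -70, -60], [-45, -65, -62], [-70, -40, -55]])  # toy fingerprints
    positions = np.array([[0.0, 0.0], [1.0, 0.0], [4.0, 3.0]])
    print(knn_localize(radio_map, positions, np.array([-42, -68, -61]), k=2))
```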
{
"docid": "neg:1840332_6",
"text": "The ability to update firmware is a feature that is found in nearly all modern embedded systems. We demonstrate how this feature can be exploited to allow attackers to inject malicious firmware modifications into vulnerable embedded devices. We discuss techniques for exploiting such vulnerable functionality and the implementation of a proof of concept printer malware capable of network reconnaissance, data exfiltration and propagation to general purpose computers and other embedded device types. We present a case study of the HP-RFU (Remote Firmware Update) LaserJet printer firmware modification vulnerability, which allows arbitrary injection of malware into the printer’s firmware via standard printed documents. We show vulnerable population data gathered by continuously tracking all publicly accessible printers discovered through an exhaustive scan of IPv4 space. To show that firmware update signing is not the panacea of embedded defense, we present an analysis of known vulnerabilities found in third-party libraries in 373 LaserJet firmware images. Prior research has shown that the design flaws and vulnerabilities presented in this paper are found in other modern embedded systems. Thus, the exploitation techniques presented in this paper can be generalized to compromise other embedded systems. Keywords-Embedded system exploitation; Firmware modification attack; Embedded system rootkit; HP-RFU vulnerability.",
"title": ""
},
{
"docid": "neg:1840332_7",
"text": "We describe a novel approach to modeling idiosyncra tic prosodic behavior for automatic speaker recognition. The approach computes various duration , pitch, and energy features for each estimated syl lable in speech recognition output, quantizes the featur s, forms N-grams of the quantized values, and mode ls normalized counts for each feature N-gram using sup port vector machines (SVMs). We refer to these features as “SNERF-grams” (N-grams of Syllable-base d Nonuniform Extraction Region Features). Evaluation of SNERF-gram performance is conducted o n two-party spontaneous English conversational telephone data from the Fisher corpus, using one co versation side in both training and testing. Resul ts show that SNERF-grams provide significant performance ga ins when combined with a state-of-the-art baseline system, as well as with two highly successful longrange feature systems that capture word usage and lexically constrained duration patterns. Further ex periments examine the relative contributions of fea tures by quantization resolution, N-gram length, and feature type. Results show that the optimal number of bins depends on both feature type and N-gram length, but is roughly in the range of 5 to 10 bins. We find t hat longer N-grams are better than shorter ones, and th at pitch features are most useful, followed by dura tion and energy features. The most important pitch features are those capturing pitch level, whereas the most important energy features reflect patterns of risin g a d falling. For duration features, nucleus dura tion is more important for speaker recognition than are dur ations from the onset or coda of a syllable. Overal l, we find that SVM modeling of prosodic feature sequence s yields valuable information for automatic speaker recognition. It also offers rich new opportunities for exploring how speakers differ from each other i n voluntary but habitual ways.",
"title": ""
},
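The quantize-then-N-gram pipeline described above can be illustrated with a toy example. Bin edges, N, and the per-syllable feature values below are invented; the resulting normalized counts are the kind of representation an SVM could consume.

```python
from collections import Counter

def quantize(values, edges):
    """Map each value to the number of bin edges it exceeds (a small integer label)."""
    return [sum(v > e for e in edges) for v in values]

def ngram_counts(labels, n=2):
    """Normalized N-gram counts over a sequence of quantized labels."""
    grams = [tuple(labels[i:i + n]) for i in range(len(labels) - n + 1)]
    counts = Counter(grams)
    total = sum(counts.values()) or 1
    return {g: c / total for g, c in counts.items()}

if __name__ == "__main__":
    syllable_durations = [0.12, 0.31, 0.08, 0.27, 0.45]     # seconds, made up
    bins = quantize(syllable_durations, edges=[0.1, 0.2, 0.3, 0.4])
    print(bins)                 # [1, 3, 0, 2, 4]
    print(ngram_counts(bins))   # bigram frequencies that would feed the SVM
```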
{
"docid": "neg:1840332_8",
"text": "This research demo describes the implementation of a mobile AR-supported educational course application, AR Circuit, which is designed to promote the effectiveness of remote collaborative learning for physics. The application employs the TCP/IP protocol enabling multiplayer functionality in a mobile AR environment. One phone acts as the server and the other acts as the client. The server phone will capture the video frames, process the video frame, and send the current frame and the markers transformation matrices to the client phone.",
"title": ""
},
{
"docid": "neg:1840332_9",
"text": "Rapid changes in mobile cloud computing tremendously affect the telecommunication, education and healthcare industries and also business perspectives. Nowadays, advanced information and communication technology enhanced healthcare sector to improved medical services at reduced cost. However, issues related to security, privacy, quality of services and mobility and viability need to be solved before mobile cloud computing can be adopted in the healthcare industry. Mobile healthcare (mHealthcare) is one of the latest technologies in the healthcare industry which enable the industry players to collaborate each other’s especially in sharing the patience’s medical reports and histories. MHealthcare offer real-time monitoring and provide rapid diagnosis of health condition. User’s context such as location, identities and etc which are collected by active sensor is important element in MHealthcare. This paper conducts a study pertaining to mobile cloud healthcare, mobile healthcare and comparisons between the variety of applications and architecture developed/proposed by researchers.",
"title": ""
},
{
"docid": "neg:1840332_10",
"text": "The authors present several versions of a general model, titled the E-Z Reader model, of eye movement control in reading. The major goal of the modeling is to relate cognitive processing (specifically aspects of lexical access) to eye movements in reading. The earliest and simplest versions of the model (E-Z Readers 1 and 2) merely attempt to explain the total time spent on a word before moving forward (the gaze duration) and the probability of fixating a word; later versions (E-Z Readers 3-5) also attempt to explain the durations of individual fixations on individual words and the number of fixations on individual words. The final version (E-Z Reader 5) appears to be psychologically plausible and gives a good account of many phenomena in reading. It is also a good tool for analyzing eye movement data in reading. Limitations of the model and directions for future research are also discussed.",
"title": ""
},
{
"docid": "neg:1840332_11",
"text": "The Wi-Fi fingerprinting (WF) technique normally suffers from the RSS (Received Signal Strength) variance problem caused by environmental changes that are inherent in both the training and localization phases. Several calibration algorithms have been proposed but they only focus on the hardware variance problem. Moreover, smartphones were not evaluated and these are now widely used in WF systems. In this paper, we analyze various aspect of the RSS variance problem when using smartphones for WF: device type, device placement, user direction, and environmental changes over time. To overcome the RSS variance problem, we also propose a smartphone-based, indoor pedestrian-tracking system. The scheme uses the location where the maximum RSS is observed, which is preserved even though RSS varies significantly. We experimentally validate that the proposed system is robust to the RSS variance problem.",
"title": ""
},
{
"docid": "neg:1840332_12",
"text": "We use single-agent and multi-agent Reinforcement Learning (RL) for learning dialogue policies in a resource allocation negotiation scenario. Two agents learn concurrently by interacting with each other without any need for simulated users (SUs) to train against or corpora to learn from. In particular, we compare the Qlearning, Policy Hill-Climbing (PHC) and Win or Learn Fast Policy Hill-Climbing (PHC-WoLF) algorithms, varying the scenario complexity (state space size), the number of training episodes, the learning rate, and the exploration rate. Our results show that generally Q-learning fails to converge whereas PHC and PHC-WoLF always converge and perform similarly. We also show that very high gradually decreasing exploration rates are required for convergence. We conclude that multiagent RL of dialogue policies is a promising alternative to using single-agent RL and SUs or learning directly from corpora.",
"title": ""
},
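For readers unfamiliar with the baseline algorithm compared above, the sketch below shows a tabular Q-learning update with epsilon-greedy exploration. The states, actions, and reward function are placeholders; in the paper's setting each negotiating agent would learn its own table while interacting with the other agent.

```python
import random
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """One tabular Q-learning update toward r + gamma * max_a' Q(s', a')."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def epsilon_greedy(Q, s, actions, eps=0.2):
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

if __name__ == "__main__":
    actions = ["offer", "accept", "reject"]      # toy negotiation moves
    Q = defaultdict(float)
    s = "start"
    for _ in range(100):
        a = epsilon_greedy(Q, s, actions)
        r = 1.0 if a == "accept" else 0.0        # purely illustrative reward
        q_update(Q, s, a, r, "start", actions)
    print(max(actions, key=lambda a: Q[("start", a)]))
```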
{
"docid": "neg:1840332_13",
"text": "Finding the sparse solution of an underdetermined system of linear equations (the so called sparse recovery problem) has been extensively studied in the last decade because of its applications in many different areas. So, there are now many sparse recovery algorithms (and program codes) available. However, most of these algorithms have been developed for real-valued systems. This paper discusses an approach for using available real-valued algorithms (or program codes) to solve complex-valued problems, too. The basic idea is to convert the complex-valued problem to an equivalent real-valued problem and solve this new real-valued problem using any real-valued sparse recovery algorithm. Theoretical guarantees for the success of this approach will be discussed, too. On the other hand, a widely used sparse recovery idea is finding the minimum ℓ1 norm solution. For real-valued systems, this idea requires to solve a linear programming (LP) problem, but for complex-valued systems it needs to solve a second-order cone programming (SOCP) problem, which demands more computational load. However, based on the approach of this paper, the complex case can also be solved by linear programming, although the theoretical guarantee for finding the sparse solution is more limited.",
"title": ""
},
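The complex-to-real reformulation described above is easy to show concretely: a complex system A x = b is mapped to an equivalent real system so that a real-valued sparse recovery / L1 solver can be reused. The sketch below only builds and sanity-checks the equivalent system; the solver call itself is left out.

```python
import numpy as np

def complexify_to_real(A, b):
    """Return (A_r, b_r): A_r is (2m, 2n), b_r has length 2m, equivalent to A x = b."""
    A_r = np.block([[A.real, -A.imag],
                    [A.imag,  A.real]])
    b_r = np.concatenate([b.real, b.imag])
    return A_r, b_r

def split_solution(x_r):
    """Recombine the stacked real solution into a complex vector."""
    n = x_r.size // 2
    return x_r[:n] + 1j * x_r[n:]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
    x = np.zeros(8, dtype=complex); x[2] = 1 + 2j           # sparse ground truth
    b = A @ x
    A_r, b_r = complexify_to_real(A, b)
    x_r = np.concatenate([x.real, x.imag])
    print(np.allclose(A_r @ x_r, b_r))                      # True: systems are equivalent
```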
{
"docid": "neg:1840332_14",
"text": "To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by only considering the statistics of an individual layer or two consecutive layers (e.g., prune one layer to minimize the reconstruction error of the next layer), ignoring the effect of error propagation in deep networks. In contrast, we argue that for a pruned network to retain its predictive power, it is essential to prune neurons in the entire neuron network jointly based on a unified goal: minimizing the reconstruction error of important responses in the \"final response layer\" (FRL), which is the second-to-last layer before classification. Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, formulate network pruning as a binary integer optimization problem, and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network. The CNN is pruned by removing neurons with least importance, and it is then fine-tuned to recover its predictive power. NISP is evaluated on several datasets with multiple CNN models and demonstrated to achieve significant acceleration and compression with negligible accuracy loss.",
"title": ""
},
{
"docid": "neg:1840332_15",
"text": "In simultaneous electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) studies, average reference (AR), and digitally linked mastoid (LM) are popular re-referencing techniques in event-related potential (ERP) analyses. However, they may introduce their own physiological signals and alter the EEG/ERP outcome. A reference electrode standardization technique (REST) that calculated a reference point at infinity was proposed to solve this problem. To confirm the advantage of REST in ERP analyses of synchronous EEG-fMRI studies, we compared the reference effect of AR, LM, and REST on task-related ERP results of a working memory task during an fMRI scan. As we hypothesized, we found that the adopted reference did not change the topography map of ERP components (N1 and P300 in the present study), but it did alter the task-related effect on ERP components. LM decreased or eliminated the visual working memory (VWM) load effect on P300, and the AR distorted the distribution of VWM location-related effect at left posterior electrodes as shown in the statistical parametric scalp mapping (SPSM) of N1. ERP cortical source estimates, which are independent of the EEG reference choice, were used as the golden standard to infer the relative utility of different references on the ERP task-related effect. By comparison, REST reference provided a more integrated and reasonable result. These results were further confirmed by the results of fMRI activations and a corresponding EEG-only study. Thus, we recommend the REST, especially with a realistic head model, as the optimal reference method for ERP data analysis in simultaneous EEG-fMRI studies.",
"title": ""
},
{
"docid": "neg:1840332_16",
"text": "Studies using Nomura et al.’s “Negative Attitude toward Robots Scale” (NARS) [1] as an attitudinal measure have featured robots that were perceived to be autonomous, independent agents. State of the art telepresence robots require an explicit human-in-the-loop to drive the robot around. In this paper, we investigate if NARS can be used with telepresence robots. To this end, we conducted three studies in which people watched videos of telepresence robots (n=70), operated telepresence robots (n=38), and interacted with telepresence robots (n=12). Overall, the results from our three studies indicated that NARS may be applied to telepresence robots, and culture, gender, and prior robot experience can be influential factors on the NARS score.",
"title": ""
},
{
"docid": "neg:1840332_17",
"text": "Cluster ensembles generate a large number of different clustering solutions and combine them into a more robust and accurate consensus clustering. On forming the ensembles, the literature has suggested that higher diversity among ensemble members produces higher performance gain. In contrast, some studies also indicated that medium diversity leads to the best performing ensembles. Such contradicting observations suggest that different data, with varying characteristics, may require different treatments. We empirically investigate this issue by examining the behavior of cluster ensembles on benchmark data sets. This leads to a novel framework that selects ensemble members for each data set based on its own characteristics. Our framework first generates a diverse set of solutions and combines them into a consensus partition P*. Based on the diversity between the ensemble members and P*, a subset of ensemble members is selected and combined to obtain the final output. We evaluate the proposed method on benchmark data sets and the results show that the proposed method can significantly improve the clustering performance, often by a substantial margin. In some cases, we were able to produce final solutions that significantly outperform even the best ensemble members.",
"title": ""
},
{
"docid": "neg:1840332_18",
"text": "This paper describes first results using the Unified Medical Language System (UMLS) for distantly supervised relation extraction. UMLS is a large knowledge base which contains information about millions of medical concepts and relations between them. Our approach is evaluated using existing relation extraction data sets that contain relations that are similar to some of those in UMLS.",
"title": ""
},
{
"docid": "neg:1840332_19",
"text": "Localization is an essential and important research issue in wireless sensor networks (WSNs). Most localization schemes focus on static sensor networks. However, mobile sensors are required in some applications such that the sensed area can be enlarged. As such, a localization scheme designed for mobile sensor networks is necessary. In this paper, we propose a localization scheme to improve the localization accuracy of previous work. In this proposed scheme, the normal nodes without location information can estimate their own locations by gathering the positions of location-aware nodes (anchor nodes) and the one-hop normal nodes whose locations are estimated from the anchor nodes. In addition, we propose a scheme that predicts the moving direction of sensor nodes to increase localization accuracy. Simulation results show that the localization error in our proposed scheme is lower than the previous schemes in various mobility models and moving speeds.",
"title": ""
}
] |
1840333 | Deformable Pose Traversal Convolution for 3D Action and Gesture Recognition | [
{
"docid": "pos:1840333_0",
"text": "Human action recognition from well-segmented 3D skeleton data has been intensively studied and attracting an increasing attention. Online action detection goes one step further and is more challenging, which identifies the action type and localizes the action positions on the fly from the untrimmed stream. In this paper, we study the problem of online action detection from the streaming skeleton data. We propose a multi-task end-to-end Joint Classification-Regression Recurrent Neural Network to better explore the action type and temporal localization information. By employing a joint classification and regression optimization objective, this network is capable of automatically localizing the start and end points of actions more accurately. Specifically, by leveraging the merits of the deep Long Short-Term Memory (LSTM) subnetwork, the proposed model automatically captures the complex long-range temporal dynamics, which naturally avoids the typical sliding window design and thus ensures high computational efficiency. Furthermore, the subtask of regression optimization provides the ability to forecast the action prior to its occurrence. To evaluate our proposed model, we build a large streaming video dataset with annotations. Experimental results on our dataset and the public G3D dataset both demonstrate very promising performance of our scheme.",
"title": ""
},
{
"docid": "pos:1840333_1",
"text": "Current state-of-the-art approaches to skeleton-based action recognition are mostly based on recurrent neural networks (RNN). In this paper, we propose a novel convolutional neural networks (CNN) based framework for both action classification and detection. Raw skeleton coordinates as well as skeleton motion are fed directly into CNN for label prediction. A novel skeleton transformer module is designed to rearrange and select important skeleton joints automatically. With a simple 7-layer network, we obtain 89.3% accuracy on validation set of the NTU RGB+D dataset. For action detection in untrimmed videos, we develop a window proposal network to extract temporal segment proposals, which are further classified within the same network. On the recent PKU-MMD dataset, we achieve 93.7% mAP, surpassing the baseline by a large margin.",
"title": ""
}
] | [
{
"docid": "neg:1840333_0",
"text": "This article presents the state of the art in passive devices for enhancing limb movement in people with neuromuscular disabilities. Both upper- and lower-limb projects and devices are described. Special emphasis is placed on a passive functional upper-limb orthosis called the Wilmington Robotic Exoskeleton (WREX). The development and testing of the WREX with children with limited arm strength are described. The exoskeleton has two links and 4 degrees of freedom. It uses linear elastic elements that balance the effects of gravity in three dimensions. The experiences of five children with arthrogryposis who used the WREX are described.",
"title": ""
},
{
"docid": "neg:1840333_1",
"text": "In Taobao, the largest e-commerce platform in China, billions of items are provided and typically displayed with their images.For better user experience and business effectiveness, Click Through Rate (CTR) prediction in online advertising system exploits abundant user historical behaviors to identify whether a user is interested in a candidate ad. Enhancing behavior representations with user behavior images will help understand user's visual preference and improve the accuracy of CTR prediction greatly. So we propose to model user preference jointly with user behavior ID features and behavior images. However, training with user behavior images brings tens to hundreds of images in one sample, giving rise to a great challenge in both communication and computation. To handle these challenges, we propose a novel and efficient distributed machine learning paradigm called Advanced Model Server (AMS). With the well-known Parameter Server (PS) framework, each server node handles a separate part of parameters and updates them independently. AMS goes beyond this and is designed to be capable of learning a unified image descriptor model shared by all server nodes which embeds large images into low dimensional high level features before transmitting images to worker nodes. AMS thus dramatically reduces the communication load and enables the arduous joint training process. Based on AMS, the methods of effectively combining the images and ID features are carefully studied, and then we propose a Deep Image CTR Model. Our approach is shown to achieve significant improvements in both online and offline evaluations, and has been deployed in Taobao display advertising system serving the main traffic.",
"title": ""
},
{
"docid": "neg:1840333_2",
"text": "A system that allows museums to build and manage Virtual and Augmented Reality exhibitions based on 3D models of artifacts is presented. Dynamic content creation based on pre-designed visualization templates allows content designers to create virtual exhibitions very efficiently. Virtual Reality exhibitions can be presented both inside museums, e.g. on touch-screen displays installed inside galleries and, at the same time, on the Internet. Additionally, the presentation based on Augmented Reality technologies allows museum visitors to interact with the content in an intuitive and exciting manner.",
"title": ""
},
{
"docid": "neg:1840333_3",
"text": "BACKGROUND\nMany studies have demonstrated that honey has antibacterial activity in vitro, and a small number of clinical case studies have shown that application of honey to severely infected cutaneous wounds is capable of clearing infection from the wound and improving tissue healing. Research has also indicated that honey may possess anti-inflammatory activity and stimulate immune responses within a wound. The overall effect is to reduce infection and to enhance wound healing in burns, ulcers, and other cutaneous wounds. The objective of the study was to find out the results of topical wound dressings in diabetic wounds with natural honey.\n\n\nMETHODS\nThe study was conducted at department of Orthopaedics, Unit-1, Liaquat University of Medical and Health Sciences, Jamshoro from July 2006 to June 2007. Study design was experimental. The inclusion criteria were patients of either gender with any age group having diabetic foot Wagner type I, II, III and II. The exclusion criteria were patients not willing for studies and who needed urgent amputation due to deteriorating illness. Initially all wounds were washed thoroughly and necrotic tissues removed and dressings with honey were applied and continued up to healing of wounds.\n\n\nRESULTS\nTotal number of patients was 12 (14 feet). There were 8 males (66.67%) and 4 females (33.33%), 2 cases (16.67%) were presented with bilateral diabetic feet. The age range was 35 to 65 years (46 +/- 9.07 years). Amputations of big toe in 3 patients (25%), second and third toe ray in 2 patients (16.67%) and of fourth and fifth toes at the level of metatarsophalengeal joints were done in 3 patients (25%). One patient (8.33%) had below knee amputation.\n\n\nCONCLUSION\nIn our study we observed excellent results in treating diabetic wounds with dressings soaked with natural honey. The disability of diabetic foot patients was minimized by decreasing the rate of leg or foot amputations and thus enhancing the quality and productivity of individual life.",
"title": ""
},
{
"docid": "neg:1840333_4",
"text": "Recently, with the development of artificial intelligence technologies and the popularity of mobile devices, walking detection and step counting have gained much attention since they play an important role in the fields of equipment positioning, saving energy, behavior recognition, etc. In this paper, a novel algorithm is proposed to simultaneously detect walking motion and count steps through unconstrained smartphones in the sense that the smartphone placement is not only arbitrary but also alterable. On account of the periodicity of the walking motion and sensitivity of gyroscopes, the proposed algorithm extracts the frequency domain features from three-dimensional (3D) angular velocities of a smartphone through FFT (fast Fourier transform) and identifies whether its holder is walking or not irrespective of its placement. Furthermore, the corresponding step frequency is recursively updated to evaluate the step count in real time. Extensive experiments are conducted by involving eight subjects and different walking scenarios in a realistic environment. It is shown that the proposed method achieves the precision of 93.76 % and recall of 93.65 % for walking detection, and its overall performance is significantly better than other well-known methods. Moreover, the accuracy of step counting by the proposed method is 95.74 % , and is better than both of the several well-known counterparts and commercial products.",
"title": ""
},
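A minimal sketch of the frequency-domain idea in the passage above: take the magnitude of the 3-D gyroscope samples, apply an FFT, and pick the dominant frequency in a plausible walking band. The sampling rate, window length, and band limits are assumptions, not the paper's exact parameters.

```python
import numpy as np

def step_frequency(gyro_xyz, fs=50.0, band=(0.5, 3.0)):
    """gyro_xyz: (n_samples, 3) angular velocities; returns dominant frequency in Hz."""
    mag = np.linalg.norm(gyro_xyz, axis=1)
    mag = mag - mag.mean()                       # remove the DC component before the FFT
    spec = np.abs(np.fft.rfft(mag))
    freqs = np.fft.rfftfreq(mag.size, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(spec[mask])]

if __name__ == "__main__":
    t = np.arange(0, 10, 1 / 50.0)                          # 10 s of synthetic data at 50 Hz
    walk = np.stack([np.sin(2 * np.pi * 1.8 * t)] * 3, axis=1)  # ~1.8 steps per second
    print(step_frequency(walk))                             # ≈ 1.8
```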
{
"docid": "neg:1840333_5",
"text": "Three experiments supported the hypothesis that people are more willing to express attitudes that could be viewed as prejudiced when their past behavior has established their credentials as nonprejudiced persons. In Study 1, participants given the opportunity to disagree with blatantly sexist statements were later more willing to favor a man for a stereotypically male job. In Study 2, participants who first had the opportunity to select a member of a stereotyped group (a woman or an African American) for a category-neutral job were more likely to reject a member of that group for a job stereotypically suited for majority members. In Study 3, participants who had established credentials as nonprejudiced persons revealed a greater willingness to express a politically incorrect opinion even when the audience was unaware of their credentials. The general conditions under which people feel licensed to act on illicit motives are discussed.",
"title": ""
},
{
"docid": "neg:1840333_6",
"text": "In this paper, we present a dual-band antenna for Long Term Evolution (LTE) handsets. The proposed antenna is composed of a meandered monopole operating in the 700 MHz band and a parasitic element which radiates in the 2.5–2.7 GHz band. Two identical antennas are then closely positioned on the same 120×50 mm2 ground plane (Printed Circuit Board) which represents a modern-size PDA-mobile phone. To enhance the port-to-port isolation of the antennas, a neutralization technique is implemented between them. Scattering parameters, radiations patterns and total efficiencies are presented to illustrate the performance of the antenna-system.",
"title": ""
},
{
"docid": "neg:1840333_7",
"text": "The positive effects of social popularity (i.e., information based on other consumers’ behaviors) and deal scarcity (i.e., information provided by product vendors) on consumers’ consumption behaviors are well recognized. However, few studies have investigated their potential joint and interaction effects and how such effects may differ at different timing of a shopping process. This study examines the individual and interaction effects of social popularity and deal scarcity as well as how such effects change as consumers’ shopping goals become more concrete. The results of a laboratory experiment show that in the initial shopping stage when consumers do not have specific shopping goals, social popularity and deal scarcity information weaken each other’s effects; whereas in the later shopping stage when consumers have constructed concrete shopping goals, these two information cues reinforce each other’s effects. Implications on theory and practice are discussed.",
"title": ""
},
{
"docid": "neg:1840333_8",
"text": "This study investigates the roles of cohesion and coherence in evaluations of essay quality. Cohesion generally has a facilitative effect on text comprehension and is assumed to be related to essay coherence. By contrast, recent studies of essay writing have demonstrated that computational indices of cohesion are not predictive of evaluations of writing quality. This study investigates expert ratings of individual text features, including coherence, in order to examine their relation to evaluations of holistic essay quality. The results suggest that coherence is an important attribute of overall essay quality, but that expert raters evaluate coherence based on the absence of cohesive cues in the essays rather than their presence. This finding has important implications for text understanding and the role of coherence in writing quality.",
"title": ""
},
{
"docid": "neg:1840333_9",
"text": "We present a new approach to scalable training of deep learning machines by incremental block training with intra-block parallel optimization to leverage data parallelism and blockwise model-update filtering to stabilize learning process. By using an implementation on a distributed GPU cluster with an MPI-based HPC machine learning framework to coordinate parallel job scheduling and collective communication, we have trained successfully deep bidirectional long short-term memory (LSTM) recurrent neural networks (RNNs) and fully-connected feed-forward deep neural networks (DNNs) for large vocabulary continuous speech recognition on two benchmark tasks, namely 309-hour Switchboard-I task and 1,860-hour \"Switch-board+Fisher\" task. We achieve almost linear speedup up to 16 GPU cards on LSTM task and 64 GPU cards on DNN task, with either no degradation or improved recognition accuracy in comparison with that of running a traditional mini-batch based stochastic gradient descent training on a single GPU.",
"title": ""
},
{
"docid": "neg:1840333_10",
"text": "Object detection has made great progress in the past few years along with the development of deep learning. However, most current object detection methods are resource hungry, which hinders their wide deployment to many resource restricted usages such as usages on always-on devices, battery-powered low-end devices, etc. This paper considers the resource and accuracy trade-off for resource-restricted usages during designing the whole object detection framework. Based on the deeply supervised object detection (DSOD) framework, we propose Tiny-DSOD dedicating to resource-restricted usages. Tiny-DSOD introduces two innovative and ultra-efficient architecture blocks: depthwise dense block (DDB) based backbone and depthwise feature-pyramid-network (D-FPN) based front-end. We conduct extensive experiments on three famous benchmarks (PASCAL VOC 2007, KITTI, and COCO), and compare Tiny-DSOD to the state-of-the-art ultra-efficient object detection solutions such as Tiny-YOLO, MobileNet-SSD (v1 & v2), SqueezeDet, Pelee, etc. Results show that Tiny-DSOD outperforms these solutions in all the three metrics (parameter-size, FLOPs, accuracy) in each comparison. For instance, Tiny-DSOD achieves 72.1% mAP with only 0.95M parameters and 1.06B FLOPs, which is by far the state-of-the-arts result with such a low resource requirement.∗",
"title": ""
},
{
"docid": "neg:1840333_11",
"text": "In this paper, a wireless power transfer system with magnetically coupled resonators is studied. The idea to use metamaterials to enhance the coupling coefficient and the transfer efficiency is proposed and analyzed. With numerical calculations of a system with and without metamaterials, we show that the transfer efficiency can be improved with metamaterials.",
"title": ""
},
{
"docid": "neg:1840333_12",
"text": "Deep neural networks have been shown to be very successful at learning feature hierarchies in supervised learning tasks. Generative models, on the other hand, have benefited less from hierarchical models with multiple layers of latent variables. In this paper, we prove that hierarchical latent variable models do not take advantage of the hierarchical structure when trained with some existing variational methods, and provide some limitations on the kind of features existing models can learn. Finally we propose an alternative architecture that does not suffer from these limitations. Our model is able to learn highly interpretable and disentangled hierarchical features on several natural image datasets with no taskspecific regularization.",
"title": ""
},
{
"docid": "neg:1840333_13",
"text": "Accurately drawing 3D objects is difficult for untrained individuals, as it requires an understanding of perspective and its effects on geometry and proportions. Step-by-step tutorials break the complex task of sketching an entire object down into easy-to-follow steps that even a novice can follow. However, creating such tutorials requires expert knowledge and is time-consuming. As a result, the availability of tutorials for a given object or viewpoint is limited. How2Sketch (H2S) addresses this problem by automatically generating easy-to-follow tutorials for arbitrary 3D objects. Given a segmented 3D model and a camera viewpoint, H2S computes a sequence of steps for constructing a drawing scaffold comprised of geometric primitives, which helps the user draw the final contours in correct perspective and proportion. To make the drawing scaffold easy to construct, the algorithm solves for an ordering among the scaffolding primitives and explicitly makes small geometric modifications to the size and location of the object parts to simplify relative positioning. Technically, we formulate this scaffold construction as a single selection problem that simultaneously solves for the ordering and geometric changes of the primitives. We generate different tutorials on man-made objects using our method and evaluate how easily the tutorials can be followed with a user study.",
"title": ""
},
{
"docid": "neg:1840333_14",
"text": "— A new versatile Hydraulically-powered Quadruped robot (HyQ) has been developed to serve as a platform to study not only highly dynamic motions such as running and jumping, but also careful navigation over very rough terrain. HyQ stands 1 meter tall, weighs roughly 90kg and features 12 torque-controlled joints powered by a combination of hydraulic and electric actuators. The hydraulic actuation permits the robot to perform powerful and dynamic motions that are hard to achieve with more traditional electrically actuated robots. This paper describes design and specifications of the robot and presents details on the hardware of the quadruped platform, such as the mechanical design of the four articulated legs and of the torso frame, and the configuration of the hydraulic power system. Results from the first walking experiments are presented along with test studies using a previously built prototype leg. 1 INTRODUCTION The development of mobile robotic platforms is an important and active area of research. Within this domain, the major focus has been to develop wheeled or tracked systems that cope very effectively with flat and well-structured solid surfaces (e.g. laboratories and roads). In recent years, there has been considerable success with robotic vehicles even for off-road conditions [1]. However, wheeled robots still have major limitations and difficulties in navigating uneven and rough terrain. These limitations and the capabilities of legged animals encouraged researchers for the past decades to focus on the construction of biologically inspired legged machines. These robots have the potential to outperform the more traditional designs with wheels and tracks in terms of mobility and versatility. The vast majority of the existing legged robots have been, and continue to be, actuated by electric motors with high gear-ratio reduction drives, which are popular because of their size, price, ease of use and accuracy of control. However, electric motors produce small torques relative to their size and weight, thereby making reduction drives with high ratios essential to convert velocity into torque. Unfortunately, this approach results in systems with reduced speed capability and limited passive back-driveability and therefore not very suitable for highly dynamic motions and interactions with unforeseen terrain variance. Significant examples of such legged robots are: the biped series of HRP robots [2], Toyota humanoid robot [3], and Honda's Asimo [4]; and the quadruped robot series of Hirose et al. [5], Sony's AIBO [6] and Little Dog [7]. In combination with high position gain control and …",
"title": ""
},
{
"docid": "neg:1840333_15",
"text": "YES is a simplified stroke-based method for sorting Chinese characters. It is free from stroke counting and grouping, and thus much faster and more accurate than the traditional method. This paper presents a collation element table built in YES for a large joint Chinese character set covering (a) all 20,902 characters of Unicode CJK Unified Ideographs, (b) all 11,408 characters in the Complete List of Chinese Characters Used by the Media in 2013, (c) all 13,000 plus characters in the latest versions of Xinhua Dictionary(v11) and Contemporary Chinese Dictionary(v6). Of the 20,902 Chinese characters in Unicode, 97.23% have one-to-one relationship with their stroke order codes in YES, comparing with 90.69% of the traditional method. Enhanced with the secondary and tertiary sorting levels of stroke layout and Unicode value, there is a guarantee of one-to-one relationship between the characters and collation elements. The collation element table has been successfully applied to sorting CC-CEDICT, a Chinese-English dictionary of over 112,000 word entries.",
"title": ""
},
{
"docid": "neg:1840333_16",
"text": "This paper presents the design procedure of monolithic microwave integrated circuit (MMIC) high-power amplifiers (HPAs) as well as implementation of high-efficiency and compact-size HPAs in a 0.25- μm AlGaAs-InGaAs pHEMT technology. Presented design techniques used to extend bandwidth, improve efficiency, and reduce chip area of the HPAs are described in detail. The first HPA delivers 5 W of output power with 40% power-added efficiency (PAE) in the frequency band of 8.5-12.5 GHz, while providing 20 dB of small-signal gain. The second HPA delivers 8 W of output power with 35% PAE in the frequency band of 7.5-12 GHz, while maintaining a small-signal gain of 17.5 dB. The 8-W HPA chip area is 8.8 mm2, which leads to the maximum power/area ratio of 1.14 W/mm2. These are the lowest area and highest power/area ratio reported in GaAs HPAs operating within the same frequency band.",
"title": ""
},
{
"docid": "neg:1840333_17",
"text": "This paper addresses the problem of vegetation detection from laser measurements. The ability to detect vegetation is important for robots operating outdoors, since it enables a robot to navigate more efficiently and safely in such environments. In this paper, we propose a novel approach for detecting low, grass-like vegetation using laser remission values. In our algorithm, the laser remission is modeled as a function of distance, incidence angle, and material. We classify surface terrain based on 3D scans of the surroundings of the robot. The model is learned in a self-supervised way using vibration-based terrain classification. In all real world experiments we carried out, our approach yields a classification accuracy of over 99%. We furthermore illustrate how the learned classifier can improve the autonomous navigation capabilities of mobile robots.",
"title": ""
},
{
"docid": "neg:1840333_18",
"text": "Neural Networks are prevalent in todays NLP research. Despite their success for different tasks, training time is relatively long. We use Hogwild! to counteract this phenomenon and show that it is a suitable method to speed up training Neural Networks of different architectures and complexity. For POS tagging and translation we report considerable speedups of training, especially for the latter. We show that Hogwild! can be an important tool for training complex NLP architectures.",
"title": ""
},
{
"docid": "neg:1840333_19",
"text": "The low power wide area network (LPWAN) technologies, which is now embracing a booming era with the development in the Internet of Things (IoT), may offer a brand new solution for current smart grid communications due to their excellent features of low power, long range, and high capacity. The mission-critical smart grid communications require secure and reliable connections between the utilities and the devices with high quality of service (QoS). This is difficult to achieve for unlicensed LPWAN technologies due to the crowded license-free band. Narrowband IoT (NB-IoT), as a licensed LPWAN technology, is developed based on the existing long-term evolution specifications and facilities. Thus, it is able to provide cellular-level QoS, and henceforth can be viewed as a promising candidate for smart grid communications. In this paper, we introduce NB-IoT to the smart grid and compare it with the existing representative communication technologies in the context of smart grid communications in terms of data rate, latency, range, etc. The overall requirements of communications in the smart grid from both quantitative and qualitative perspectives are comprehensively investigated and each of them is carefully examined for NB-IoT. We further explore the representative applications in the smart grid and analyze the corresponding feasibility of NB-IoT. Moreover, the performance of NB-IoT in typical scenarios of the smart grid communication environments, such as urban and rural areas, is carefully evaluated via Monte Carlo simulations.",
"title": ""
}
] |
1840334 | Modeling data entry rates for ASR and alternative input methods | [
{
"docid": "pos:1840334_0",
"text": "A study was conducted to evaluate user performance andsatisfaction in completion of a set of text creation tasks usingthree commercially available continuous speech recognition systems.The study also compared user performance on similar tasks usingkeyboard input. One part of the study (Initial Use) involved 24users who enrolled, received training and carried out practicetasks, and then completed a set of transcription and compositiontasks in a single session. In a parallel effort (Extended Use),four researchers used speech recognition to carry out real worktasks over 10 sessions with each of the three speech recognitionsoftware products. This paper presents results from the Initial Usephase of the study along with some preliminary results from theExtended Use phase. We present details of the kinds of usabilityand system design problems likely in current systems and severalcommon patterns of error correction that we found.",
"title": ""
}
] | [
{
"docid": "neg:1840334_0",
"text": "Emotional processes are important to survive. The Darwinian adaptive concept of stress refers to natural selection since evolved individuals have acquired effective strategies to adapt to the environment and to unavoidable changes. If demands are abrupt and intense, there might be insufficient time to successful responses. Usually, stress produces a cognitive or perceptual evaluation (emotional memory) which motivates to make a plan, to take a decision and to perform an action to face success‐ fully the demand. Between several kinds of stresses, there are psychosocial and emotional stresses with cultural, social and political influences. The cultural changes have modified the way in which individuals socially interact. Deficits in familiar relationships and social isolation alter physical and mental health in young students, producing reduction of their capacities of facing stressors in school. Adolescence is characterized by significant physiological, anatomical, and psychological changes in boys and girls, who become vulnerable to psychiatric disorders. In particular for young adult students, anxiety and depression symptoms could interfere in their academic performance. In this chapter, we reviewed approaches to the study of anxiety and depression symptoms related with the academic performance in adolescent and graduate students. Results from available published studies in academic journals are reviewed to discuss the importance to detect information about academic performance, which leads to discover in many cases the very commonly subdiagnosed psychiatric disorders in adolescents, that is, anxiety and depression. With the reviewed evidence of how anxiety and depression in young adult students may alter their main activity in life (studying and academic performance), we © 2015 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. discussed data in order to show a way in which professionals involved in schools could support students and stablish a routine of intervention in any case.",
"title": ""
},
{
"docid": "neg:1840334_1",
"text": "In semi-structured case-oriented business processes, the sequence of process steps is determined by case workers based on available document content associated with a case. Transitions between process execution steps are therefore case specific and depend on independent judgment of case workers. In this paper, we propose an instance-specific probabilistic process model (PPM) whose transition probabilities are customized to the semi-structured business process instance it represents. An instance-specific PPM serves as a powerful representation to predict the likelihood of different outcomes. We also show that certain instance-specific PPMs can be transformed into a Markov chain under some non-restrictive assumptions. For instance-specific PPMs that contain parallel execution of tasks, we provide an algorithm to map them to an extended space Markov chain. This way existing Markov techniques can be leveraged to make predictions about the likelihood of executing future tasks. Predictions provided by our technique could generate early alerts for case workers about the likelihood of important or undesired outcomes in an executing case instance. We have implemented and validated our approach on a simulated automobile insurance claims handling semi-structured business process. Results indicate that an instance-specific PPM provides more accurate predictions than other methods such as conditional probability. We also show that as more document data become available, the prediction accuracy of an instance-specific PPM increases.",
"title": ""
},
{
"docid": "neg:1840334_2",
"text": "Antibody response to the influenza immunization was investigated in 83 1st-semester healthy university freshmen. Elevated levels of loneliness throughout the semester and small social networks were independently associated with poorer antibody response to 1 component of the vaccine. Those with both high levels of loneliness and a small social network had the lowest antibody response. Loneliness was also associated with greater psychological stress and negative affect, less positive affect, poorer sleep efficiency and quality, and elevations in circulating levels of cortisol. However, only the stress data were consistent with mediation of the loneliness-antibody response relation. None of these variables were associated with social network size, and hence none were potential mediators of the relation between network size and immunization response.",
"title": ""
},
{
"docid": "neg:1840334_3",
"text": "Virtualized Cloud platforms have become increasingly common and the number of online services hosted on these platforms is also increasing rapidly. A key problem faced by providers in managing these services is detecting the performance anomalies and adjusting resources accordingly. As online services generate a very large amount of monitored data in the form of time series, it becomes very difficult to process this complex data by traditional approaches. In this work, we present a novel distributed parallel approach for performance anomaly detection. We build upon Holt-Winters forecasting for automatic aberrant behavior detection in time series. First, we extend the technique to work with MapReduce paradigm. Next, we correlate the anomalous metrics with the target Service Level Objective (SLO) in order to locate the suspicious metrics. We implemented and evaluated our approach on a production Cloud encompassing IaaS and PaaS service models. Experimental results confirm that our approach is efficient and effective in capturing the metrics causing performance anomalies in large time series datasets.",
"title": ""
},
{
"docid": "neg:1840334_4",
"text": "The subjective sense of future time plays an essential role in human motivation. Gradually, time left becomes a better predictor than chronological age for a range of cognitive, emotional, and motivational variables. Socioemotional selectivity theory maintains that constraints on time horizons shift motivational priorities in such a way that the regulation of emotional states becomes more important than other types of goals. This motivational shift occurs with age but also appears in other contexts (for example, geographical relocations, illnesses, and war) that limit subjective future time.",
"title": ""
},
{
"docid": "neg:1840334_5",
"text": "With rapid development of face recognition and detection techniques, the face has been frequently used as a biometric to find illegitimate access. It relates to a security issues of system directly, and hence, the face spoofing detection is an important issue. However, correctly classifying spoofing or genuine faces is challenging due to diverse environment conditions such as brightness and color of a face skin. Therefore we propose a novel approach to robustly find the spoofing faces using the highlight removal effect, which is based on the reflection information. Because spoofing face image is recaptured by a camera, it has additional light information. It means that spoofing image could have much more highlighted areas and abnormal reflection information. By extracting these differences, we are able to generate features for robust face spoofing detection. In addition, the spoofing face image and genuine face image have distinct textures because of surface material of medium. The skin and spoofing medium are expected to have different texture, and some genuine image characteristics are distorted such as color distribution. We achieve state-of-the-art performance by concatenating these features. It significantly outperforms especially for the error rate.",
"title": ""
},
{
"docid": "neg:1840334_6",
"text": "Cross-site scripting (also referred to as XSS) is a vulnerability that allows an attacker to send malicious code (usually in the form of JavaScript) to another user. XSS is one of the top 10 vulnerabilities on Web application. While a traditional cross-site scripting vulnerability exploits server-side codes, DOM-based XSS is a type of vulnerability which affects the script code being executed in the clients browser. DOM-based XSS vulnerabilities are much harder to be detected than classic XSS vulnerabilities because they reside on the script codes from Web sites. An automated scanner needs to be able to execute the script code without errors and to monitor the execution of this code to detect such vulnerabilities. In this paper, we introduce a distributed scanning tool for crawling modern Web applications on a large scale and detecting, validating DOMbased XSS vulnerabilities. Very few Web vulnerability scanners can really accomplish this.",
"title": ""
},
{
"docid": "neg:1840334_7",
"text": "In recent years, there has been a growing intensity of competition in virtually all areas of business in both markets upstream for raw materials such as components, supplies, capital and technology and markets downstream for consumer goods and services. This paper examines the relationships among generic strategy, competitive advantage, and organizational performance. Firstly, the nature of generic strategies, competitive advantage, and organizational performance is examined. Secondly, the relationship between generic strategies and competitive advantage is analyzed. Finally, the implications of generic strategies, organizational performance, performance measures and competitive advantage are studied. This study focuses on: (i) the relationship of generic strategy and organisational performance in Australian manufacturing companies participating in the “Best Practice Program in Australia”, (ii) the relationship between generic strategies and competitive advantage, and (iii) the relationship among generic strategies, competitive advantage and organisational performance. 1999 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840334_8",
"text": "The reported power analysis attacks on hardware implementations of the MICKEY family of streams ciphers require a large number of power traces. The primary motivation of our work is to break an implementation of the cipher when only a limited number of power traces can be acquired by an adversary. In this paper, we propose a novel approach to mount a Template attack (TA) on MICKEY-128 2.0 stream cipher using Particle Swarm Optimization (PSO) generated initialization vectors (IVs). In addition, we report the results of power analysis against a MICKEY-128 2.0 implementation on a SASEBO-GII board to demonstrate our proposed attack strategy. The captured power traces were analyzed using Least Squares Support Vector Machine (LS-SVM) learning algorithm based binary classifiers to segregate the power traces into the respective Hamming distance (HD) classes. The outcomes of the experiments reveal that our proposed power analysis attack strategy requires a much lesser number of IVs compared to a standard Correlation Power Analysis (CPA) attack on MICKEY-128 2.0 during the key loading phase of the cipher.",
"title": ""
},
{
"docid": "neg:1840334_9",
"text": "This article discusses how to avoid biased questions in survey instruments, how to motivate people to complete instruments and how to evaluate instruments. In the context of survey evaluation, we discuss how to assess survey reliability i.e. how reproducible a survey's data is and survey validity i.e. how well a survey instrument measures what it sets out to measure.",
"title": ""
},
{
"docid": "neg:1840334_10",
"text": "The timing of the origin of arthropods in relation to the Cambrian explosion is still controversial, as are the timing of other arthropod macroevolutionary events such as the colonization of land and the evolution of flight. Here we assess the power of a phylogenomic approach to shed light on these major events in the evolutionary history of life on earth. Analyzing a large phylogenomic dataset (122 taxa, 62 genes) with a Bayesian-relaxed molecular clock, we simultaneously reconstructed the phylogenetic relationships and the absolute times of divergences among the arthropods. Simulations were used to test whether our analysis could distinguish between alternative Cambrian explosion scenarios with increasing levels of autocorrelated rate variation. Our analyses support previous phylogenomic hypotheses and simulations indicate a Precambrian origin of the arthropods. Our results provide insights into the 3 independent colonizations of land by arthropods and suggest that evolution of insect wings happened much earlier than the fossil record indicates, with flight evolving during a period of increasing oxygen levels and impressively large forests. These and other findings provide a foundation for macroevolutionary and comparative genomic study of Arthropoda.",
"title": ""
},
{
"docid": "neg:1840334_11",
"text": "The self-powering, long-lasting, and functional features of embedded wireless microsensors appeal to an ever-expanding application space in monitoring, control, and diagnosis for military, commercial, industrial, space, and biomedical applications. Extended operational life, however, is difficult to achieve when power-intensive functions like telemetry draw whatever little energy is available from energy-storage microdevices like thin-film lithium-ion batteries and/or microscale fuel cells. Harvesting ambient energy overcomes this deficit by continually replenishing the energy reservoir and indefinitely extending system lifetime. In this paper, a prototyped circuit that precharges, detects, and synchronizes to a variable voltage-constrained capacitor verifies experimentally that harvesting energy electrostatically from vibrations is possible. Experimental results show that, on average (excluding gate-drive and control losses), the system harvests 9.7 nJ/cycle by investing 1.7 nJ/cycle, yielding a net energy gain of approximately 8 nJ/cycle at an average of 1.6 ¿W (in typical applications) for every 200 pF variation. Projecting and including reasonable gate-drive and controller losses reduces the net energy gain to 6.9 nJ/cycle at 1.38 ¿W.",
"title": ""
},
{
"docid": "neg:1840334_12",
"text": "In this paper, a Y-Δ hybrid connection for a high-voltage induction motor is described. Low winding harmonic content is achieved by careful consideration of the interaction between the Y- and Δ-connected three-phase winding sets so that the magnetomotive force (MMF) in the air gap is close to sinusoid. Essentially, the two winding sets operate in a six-phase mode. This paper goes on to verify that the fundamental distribution coefficient for the stator MMF is enhanced compared to a standard three-phase winding set. The design method for converting a conventional double-layer lap winding in a high-voltage induction motor into a Y-Δ hybrid lap winding is described using standard winding theory as often applied to small- and medium-sized motors. The main parameters addressed when designing the winding are the conductor wire gauge, coil turns, and parallel winding branches in the Y and Δ connections. A winding design scheme for a 1250-kW 6-kV induction motor is put forward and experimentally validated; the results show that the efficiency can be raised effectively without increasing the cost.",
"title": ""
},
{
"docid": "neg:1840334_13",
"text": "Metabolomics is the comprehensive study of small molecule metabolites in biological systems. By assaying and analyzing thousands of metabolites in biological samples, it provides a whole picture of metabolic status and biochemical events happening within an organism and has become an increasingly powerful tool in the disease research. In metabolomics, it is common to deal with large amounts of data generated by nuclear magnetic resonance (NMR) and/or mass spectrometry (MS). Moreover, based on different goals and designs of studies, it may be necessary to use a variety of data analysis methods or a combination of them in order to obtain an accurate and comprehensive result. In this review, we intend to provide an overview of computational and statistical methods that are commonly applied to analyze metabolomics data. The review is divided into five sections. The first two sections will introduce the background and the databases and resources available for metabolomics research. The third section will briefly describe the principles of the two main experimental methods that produce metabolomics data: MS and NMR, followed by the fourth section that describes the preprocessing of the data from these two approaches. In the fifth and the most important section, we will review four main types of analysis that can be performed on metabolomics data with examples in metabolomics. These are unsupervised learning methods, supervised learning methods, pathway analysis methods and analysis of time course metabolomics data. We conclude by providing a table summarizing the principles and tools that we discussed in this review.",
"title": ""
},
{
"docid": "neg:1840334_14",
"text": "A digital PLL employing an adaptive tracking technique and a novel frequency acquisition scheme achieves a wide tracking range and fast frequency acquisition. The test chip fabricated in a 0.13 mum CMOS process operates from 0.6 GHz to 2 GHz and achieves better than plusmn3200 ppm frequency tracking range when the reference clock is modulated with a 1 MHz sine wave.",
"title": ""
},
{
"docid": "neg:1840334_15",
"text": "STUDY OBJECTIVE\nTo (1) examine the prevalence of abnormal genital findings in a large cohort of female children presenting with concerns of sexual abuse; and (2) explore how children use language when describing genital contact and genital anatomy.\n\n\nDESIGN\nIn this prospective study we documented medical histories and genital findings in all children who met inclusion criteria. Findings were categorized as normal, indeterminate, and diagnostic of trauma. Logistic regression analysis was used to determine the effects of key covariates on predicting diagnostic findings. Children older than 4 years of age were asked questions related to genital anatomy to assess their use of language.\n\n\nSETTING\nA regional, university-affiliated sexual abuse clinic.\n\n\nPARTICIPANTS\nFemale children (N = 1500) aged from birth to 17 years (inclusive) who received an anogenital examination with digital images.\n\n\nINTERVENTIONS AND MAIN OUTCOME MEASURES\nPhysical exam findings, medical history, and the child's use of language were recorded.\n\n\nRESULTS\nPhysical findings were determined in 99% (n = 1491) of patients. Diagnostic findings were present in 7% (99 of 1491). After adjusting for age, acuity, and type of sexual contact reported by the adult, the estimated odds of diagnostic findings were 12.5 times higher for children reporting genital penetration compared with those who reported only contact (95% confidence interval, 3.46-45.34). Finally, children used the word \"inside\" to describe contact other than penetration of the vaginal canal (ie, labial penetration).\n\n\nCONCLUSION\nA history of penetration by the child was the primary predictor of diagnostic findings. Interpretation of children's use of \"inside\" might explain the low prevalence of diagnostic findings and warrants further study.",
"title": ""
},
{
"docid": "neg:1840334_16",
"text": "The sampling rate of the sensors in wireless sensor networks (WSNs) determines the rate of its energy consumption since most of the energy is used in sampling and transmission. To save the energy in WSNs and thus prolong the network lifetime, we present a novel approach based on the compressive sensing (CS) framework to monitor 1-D environmental information in WSNs. The proposed technique is based on CS theory to minimize the number of samples taken by sensor nodes. An innovative feature of our approach is a new random sampling scheme that considers the causality of sampling, hardware limitations and the trade-off between the randomization scheme and computational complexity. In addition, a sampling rate indicator (SRI) feedback scheme is proposed to enable the sensor to adjust its sampling rate to maintain an acceptable reconstruction performance while minimizing the number of samples. A significant reduction in the number of samples required to achieve acceptable reconstruction error is demonstrated using real data gathered by a WSN located in the Hessle Anchorage of the Humber Bridge.",
"title": ""
},
{
"docid": "neg:1840334_17",
"text": "Indirect field oriented control for induction machine requires the knowledge of rotor time constant to estimate the rotor flux linkages. Here an online method for estimating the rotor time constant and stator resistance is presented. The problem is formulated as a nonlinear least-squares problem and a procedure is presented that guarantees the minimum is found in a finite number of steps. Experimental results are presented. Two different approaches to implementing the algorithm online are discussed. Simulations are also presented to show how the algorithm works online",
"title": ""
},
{
"docid": "neg:1840334_18",
"text": "Attack graph is a tool to analyze multi-stage, multi-host attack scenarios in a network. It is a complete graph where each attack scenario is depicted by an attack path which is essentially a series of exploits. Each exploit in the series satisfies the pre-conditions for subsequent exploits and makes a casual relationship among them. One of the intrinsic problem with the generation of such a full attack graph is its scalability. In this work, an approach based on planner has been proposed for time-efficient scalable representation of the attack graphs. A planner is a special purpose search algorithm from artificial intelligence domain, used for finding out solutions within a large state space without suffering state space explosion. A case study has also been presented and the proposed methodology is found to be efficient than some of the earlier reported works.",
"title": ""
}
] |
1840335 | Perceived learning environment and students ’ emotional experiences : A multilevel analysis of mathematics classrooms * | [
{
"docid": "pos:1840335_0",
"text": "We assessed math anxiety in 6ththrough 12th-grade children (N = 564) as part of a comprehensive longitudinal investigation of children's beliefs, attitudes, and values concerning mathematics. Confirmatory factor analyses provided evidence for two components of math anxiety, a negative affective reactions component and a cognitive component. The affective component of math anxiety related more strongly and negatively than did the worry component to children's ability perceptions, performance perceptions, and math performance. The worry component related more strongly and positively than did the affective component to the importance that children attach to math and their reported actual effort in math. Girls reported stronger negative affective reactions to math than did boys. Ninth-grade students reported experiencing the most worry about math and sixth graders the least.",
"title": ""
},
{
"docid": "pos:1840335_1",
"text": "AN INDIVIDUAL CORRELATION is a correlation in which the statistical object or thing described is indivisible. The correlation between color and illiteracy for persons in the United States, shown later in Table I, is an individual correlation, because the kind of thing described is an indivisible unit, a person. In an individual correlation the variables are descriptive properties of individuals, such as height, income, eye color, or race, and not descriptive statistical constants such as rates or means. In an ecological correlation the statistical object is a group of persons. The correlation between the percentage of the population which is Negro and the percentage of the population which is illiterate for the 48 states, shown later as Figure 2, is an ecological correlation. The thing described is the population of a state, and not a single individual. The variables are percentages, descriptive properties of groups, and not descriptive properties of individuals. Ecological correlations are used in an impressive number of quantitative sociological studies, some of which by now have attained the status of classics: Cowles’ ‘‘Statistical Study of Climate in Relation to Pulmonary Tuberculosis’’; Gosnell’s ‘‘Analysis of the 1932 Presidential Vote in Chicago,’’ Factorial and Correlational Analysis of the 1934 Vote in Chicago,’’ and the more elaborate factor analysis in Machine Politics; Ogburn’s ‘‘How women vote,’’ ‘‘Measurement of the Factors in the Presidential Election of 1928,’’ ‘‘Factors in the Variation of Crime Among Cities,’’ and Groves and Ogburn’s correlation analyses in American Marriage and Family Relationships; Ross’ study of school attendance in Texas; Shaw’s Delinquency Areas study of the correlates of delinquency, as well as The more recent analyses in Juvenile Delinquency in Urban Areas; Thompson’s ‘‘Some Factors Influencing the Ratios of Children to Women in American Cities, 1930’’; Whelpton’s study of the correlates of birth rates, in ‘‘Geographic and Economic Differentials in Fertility;’’ and White’s ‘‘The Relation of Felonies to Environmental Factors in Indianapolis.’’ Although these studies and scores like them depend upon ecological correlations, it is not because their authors are interested in correlations between the properties of areas as such. Even out-and-out ecologists, in studying delinquency, for example, rely primarily upon data describing individuals, not areas. In each study which uses ecological correlations, the obvious purpose is to discover something about the behavior of individuals. Ecological correlations are used simply because correlations between the properties of individuals are not available. In each instance, however, the substitution is made tacitly rather than explicitly. The purpose of this paper is to clarify the ecological correlation problem by stating, mathematically, the exact relation between ecological and individual correlations, and by showing the bearing of that relation upon the practice of using ecological correlations as substitutes for individual correlations.",
"title": ""
},
{
"docid": "pos:1840335_2",
"text": "A theory of motivation and emotion is proposed in which causal ascriptions play a key role. It is first documented that in achievement-related contexts there are a few dominant causal perceptions. The perceived causes of success and failure share three common properties: locus, stability, and controllability, with intentionality and globality as other possible causal structures. The perceived stability of causes influences changes in expectancy of success; all three dimensions of causality affect a variety of common emotional experiences, including anger, gratitude, guilt, hopelessness, pity, pride, and shame. Expectancy and affect, in turn, are presumed to guide motivated behavior. The theory therefore relates the structure of thinking to the dynamics of feeling and action. Analysis of a created motivational episode involving achievement strivings is offered, and numerous empirical observations are examined from this theoretical position. The strength of the empirical evidence, the capability of this theory to address prevalent human emotions, and the potential generality of the conception are stressed.",
"title": ""
}
] | [
{
"docid": "neg:1840335_0",
"text": "While the terminology has changed over time, the basic concept of the Digital Twin model has remained fairly stable from its inception in 2001. It is based on the idea that a digital informational construct about a physical system could be created as an entity on its own. This digital information would be a “twin” of the information that was embedded within the physical system itself and be linked with that physical system through the entire lifecycle of the system.",
"title": ""
},
{
"docid": "neg:1840335_1",
"text": "Language resources that systematically organize paraphrases for binary relations are of great value for various NLP tasks and have recently been advanced in projects like PATTY, WiseNet and DEFIE. This paper presents a new method for building such a resource and the resource itself, called POLY. Starting with a very large collection of multilingual sentences parsed into triples of phrases, our method clusters relational phrases using probabilistic measures. We judiciously leverage fine-grained semantic typing of relational arguments for identifying synonymous phrases. The evaluation of POLY shows significant improvements in precision and recall over the prior works on PATTY and DEFIE. An extrinsic use case demonstrates the benefits of POLY for question answering.",
"title": ""
},
{
"docid": "neg:1840335_2",
"text": "The possibility that wind turbine noise (WTN) affects human health remains controversial. The current analysis presents results related to WTN annoyance reported by randomly selected participants (606 males, 632 females), aged 18-79, living between 0.25 and 11.22 km from wind turbines. WTN levels reached 46 dB, and for each 5 dB increase in WTN levels, the odds of reporting to be either very or extremely (i.e., highly) annoyed increased by 2.60 [95% confidence interval: (1.92, 3.58), p < 0.0001]. Multiple regression models had R(2)'s up to 58%, with approximately 9% attributed to WTN level. Variables associated with WTN annoyance included, but were not limited to, other wind turbine-related annoyances, personal benefit, noise sensitivity, physical safety concerns, property ownership, and province. Annoyance was related to several reported measures of health and well-being, although these associations were statistically weak (R(2 )< 9%), independent of WTN levels, and not retained in multiple regression models. The role of community tolerance level as a complement and/or an alternative to multiple regression in predicting the prevalence of WTN annoyance is also provided. The analysis suggests that communities are between 11 and 26 dB less tolerant of WTN than of other transportation noise sources.",
"title": ""
},
{
"docid": "neg:1840335_3",
"text": "Phosphorus is one of the most abundant elements preserved in earth, and it comprises a fraction of ∼0.1% of the earth crust. In general, phosphorus has several allotropes, and the two most commonly seen allotropes, i.e. white and red phosphorus, are widely used in explosives and safety matches. In addition, black phosphorus, though rarely mentioned, is a layered semiconductor and has great potential in optical and electronic applications. Remarkably, this layered material can be reduced to one single atomic layer in the vertical direction owing to the van der Waals structure, and is known as phosphorene, in which the physical properties can be tremendously different from its bulk counterpart. In this review article, we trace back to the research history on black phosphorus of over 100 years from the synthesis to material properties, and extend the topic from black phosphorus to phosphorene. The physical and transport properties are highlighted for further applications in electronic and optoelectronics devices.",
"title": ""
},
{
"docid": "neg:1840335_4",
"text": "41 Abstract— This project deals with a design and motion planning algorithm of a caterpillar-based pipeline robot that can be used for inspection of 80–100-mm pipelines in an indoor pipeline environment. The robot system consists of a Robot body, a control system, a CMOS camera, an accelerometer, a temperature sensor, a ZigBee module. The robot module will be designed with the help of CAD tool. The control system consists of Atmega16 micro controller and Atmel studio IDE. The robot system uses a differential drive to steer the robot and spring loaded four-bar mechanisms to assure that the robot expands to have grip of the pipe walls. Unique features of this robot are the caterpillar wheel, the four-bar mechanism supports the well grip of wall, a simple and easy user interface.",
"title": ""
},
{
"docid": "neg:1840335_5",
"text": "Registered nurses were queried about their knowledge and attitudes regarding pain management. Results suggest knowledge of pain management principles and interventions is insufficient.",
"title": ""
},
{
"docid": "neg:1840335_6",
"text": "Most of the lane marking detection algorithms reported in the literature are suitable for highway scenarios. This paper presents a novel clustered particle filter based approach to lane detection, which is suitable for urban streets in normal traffic conditions. Furthermore, a quality measure for the detection is calculated as a measure of reliability. The core of this approach is the usage of weak models, i.e. the avoidance of strong assumptions about the road geometry. Experiments were carried out in Sydney urban areas with a vehicle mounted laser range scanner and a ccd camera. Through experimentations, we have shown that a clustered particle filter can be used to efficiently extract lane markings.",
"title": ""
},
{
"docid": "neg:1840335_7",
"text": "Guilt proneness is a personality trait indicative of a predisposition to experience negative feelings about personal wrongdoing, even when the wrongdoing is private. It is characterized by the anticipation of feeling bad about committing transgressions rather than by guilty feelings in a particular moment or generalized guilty feelings that occur without an eliciting event. Our research has revealed that guilt proneness is an important character trait because knowing a person’s level of guilt proneness helps us to predict the likelihood that they will behave unethically. For example, online studies of adults across the U.S. have shown that people who score high in guilt proneness (compared to low scorers) make fewer unethical business decisions, commit fewer delinquent behaviors, and behave more honestly when they make economic decisions. In the workplace, guilt-prone employees are less likely to engage in counterproductive behaviors that harm their organization.",
"title": ""
},
{
"docid": "neg:1840335_8",
"text": "Advanced Driver Assistance Systems (ADAS) based on video camera tends to be generalized in today's automotive. However, if most of these systems perform nicely in good weather conditions, they perform very poorly under adverse weather particularly under rain. We present a novel approach that aims at detecting raindrops on a car windshield using only images from an in-vehicle camera. Based on the photometric properties of raindrops, the algorithm relies on image processing technics to highlight raindrops. Its results can be further used for image restoration and vision enhancement and hence it is a valuable tool for ADAS.",
"title": ""
},
{
"docid": "neg:1840335_9",
"text": "In this paper a novel method is introduced based on the use of an unsupervised version of kernel least mean square (KLMS) algorithm for solving ordinary differential equations (ODEs). The algorithm is unsupervised because here no desired signal needs to be determined by user and the output of the model is generated by iterating the algorithm progressively. However, there are several new implementation, fast convergence and also little error. Furthermore, it is also a KLMS with obvious characteristics. In this paper the ability of KLMS is used to estimate the answer of ODE. First a trial solution of ODE is written as a sum of two parts, the first part satisfies the initial condition and the second part is trained using the KLMS algorithm so as the trial solution solves the ODE. The accuracy of the method is illustrated by solving several problems. Also the sensitivity of the convergence is analyzed by changing the step size parameters and kernel functions. Finally, the proposed method is compared with neuro-fuzzy [21] approach. Crown Copyright & 2011 Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840335_10",
"text": "We address the problem of multi-class classification in the case where the number of classes is very large. We propose a double sampling strategy on top of a multi-class to binary reduction strategy, which transforms the original multi-class problem into a binary classification problem over pairs of examples. The aim of the sampling strategy is to overcome the curse of long-tailed class distributions exhibited in majority of large-scale multi-class classification problems and to reduce the number of pairs of examples in the expanded data. We show that this strategy does not alter the consistency of the empirical risk minimization principle defined over the double sample reduction. Experiments are carried out on DMOZ and Wikipedia collections with 10,000 to 100,000 classes where we show the efficiency of the proposed approach in terms of training and prediction time, memory consumption, and predictive performance with respect to state-of-the-art approaches.",
"title": ""
},
{
"docid": "neg:1840335_11",
"text": "Packet capture is an essential function for many network applications. However, packet drop is a major problem with packet capture in high-speed networks. This paper presents WireCAP, a novel packet capture engine for commodity network interface cards (NICs) in high-speed networks. WireCAP provides lossless zero-copy packet capture and delivery services by exploiting multi-queue NICs and multicore architectures. WireCAP introduces two new mechanisms-the ring-buffer-pool mechanism and the buddy-group-based offloading mechanism-to address the packet drop problem of packet capture in high-speed network. WireCAP is efficient. It also facilitates the design and operation of a user-space packet-processing application. Experiments have demonstrated that WireCAP achieves better packet capture performance when compared to existing packet capture engines.\n In addition, WireCAP implements a packet transmit function that allows captured packets to be forwarded, potentially after the packets are modified or inspected in flight. Therefore, WireCAP can be used to support middlebox-type applications. Thus, at a high level, WireCAP provides a new packet I/O framework for commodity NICs in high-speed networks.",
"title": ""
},
{
"docid": "neg:1840335_12",
"text": "Micro aerial vehicles (MAVs) are an excellent platform for autonomous exploration. Most MAVs rely mainly on cameras for buliding a map of the 3D environment. Therefore, vision-based MAVs require an efficient exploration algorithm to select viewpoints that provide informative measurements. In this paper, we propose an exploration approach that selects in real time the next-best-view that maximizes the expected information gain of new measurements. In addition, we take into account the cost of reaching a new viewpoint in terms of distance and predictability of the flight path for a human observer. Finally, our approach selects a path that reduces the risk of crashes when the expected battery life comes to an end, while still maximizing the information gain in the process. We implemented and thoroughly tested our approach and the experiments show that it offers an improved performance compared to other state-of-the-art algorithms in terms of precision of the reconstruction, execution time, and smoothness of the path.",
"title": ""
},
{
"docid": "neg:1840335_13",
"text": "The resolution of a synthetic aperture radar (SAR) image, in range and azimuth, is determined by the transmitted bandwidth and the synthetic aperture length, respectively. Various superresolution techniques for improving resolution have been proposed, and we have proposed an algorithm that we call polarimetric bandwidth extrapolation (PBWE). To apply PBWE to a radar image, one needs to first apply PBWE in the range direction and then in the azimuth direction, or vice versa . In this paper, PBWE is further extended to the 2-D case. This extended case (2D-PBWE) utilizes a 2-D polarimetric linear prediction model and expands the spatial frequency bandwidth in range and azimuth directions simultaneously. The performance of the 2D-PBWE is shown through a simulated radar image and a real polarimetric SAR image",
"title": ""
},
{
"docid": "neg:1840335_14",
"text": "Ubiquitous networks support the roaming service for mobile communication devices. The mobile user can use the services in the foreign network with the help of the home network. Mutual authentication plays an important role in the roaming services, and researchers put their interests on the authentication schemes. Recently, in 2016, Gope and Hwang found that mutual authentication scheme of He et al. for global mobility networks had security disadvantages such as vulnerability to forgery attacks, unfair key agreement, and destitution of user anonymity. Then, they presented an improved scheme. However, we find that the scheme cannot resist the off-line guessing attack and the de-synchronization attack. Also, it lacks strong forward security. Moreover, the session key is known to HA in that scheme. To get over the weaknesses, we propose a new two-factor authentication scheme for global mobility networks. We use formal proof with random oracle model, formal verification with the tool Proverif, and informal analysis to demonstrate the security of the proposed scheme. Compared with some very recent schemes, our scheme is more applicable. Copyright © 2016 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "neg:1840335_15",
"text": "We describe a new class of learning models called memory networks. Memory networks reason with inference components combined with a long-term memory component; they learn how to use these jointly. The long-term memory can be read and written to, with the goal of using it for prediction. We investigate these models in the context of question answering (QA) where the long-term memory effectively acts as a (dynamic) knowledge base, and the output is a textual response. We evaluate them on a large-scale QA task, and a smaller, but more complex, toy task generated from a simulated world. In the latter, we show the reasoning power of such models by chaining multiple supporting sentences to answer questions that require understanding the intension of verbs.",
"title": ""
},
{
"docid": "neg:1840335_16",
"text": "The digital divide refers to the separation between those who have access to digital information and communications technology (ICT) and those who do not. Many believe that universal access to ICT would bring about a global community of interaction, commerce, and learning resulting in higher standards of living and improved social welfare. However, the digital divide threatens this outcome, leading many public policy makers to debate the best way to bridge the divide. Much of the research on the digital divide focuses on first order effects regarding who has access to the technology, but some work addresses the second order effects of inequality in the ability to use the technology among those who do have access. In this paper, we examine both first and second order effects of the digital divide at three levels of analysis the individual level, the organizational level, and the global level. At each level, we survey the existing research noting the theoretical perspective taken in the work, the research methodology employed, and the key results that were obtained. We then suggest a series of research questions at each level of analysis to guide researchers seeking to further examine the digital divide and how it impacts citizens, managers, and economies.",
"title": ""
},
{
"docid": "neg:1840335_17",
"text": "VerSum allows lightweight clients to outsource expensive computations over large and frequently changing data structures, such as the Bitcoin or Namecoin blockchains, or a Certificate Transparency log. VerSum clients ensure that the output is correct by comparing the outputs from multiple servers. VerSum assumes that at least one server is honest, and crucially, when servers disagree, VerSum uses an efficient conflict resolution protocol to determine which server(s) made a mistake and thus obtain the correct output.\n VerSum's contribution lies in achieving low server-side overhead for both incremental re-computation and conflict resolution, using three key ideas: (1) representing the computation as a functional program, which allows memoization of previous results; (2) recording the evaluation trace of the functional program in a carefully designed computation history to help clients determine which server made a mistake; and (3) introducing a new authenticated data structure for sequences, called SeqHash, that makes it efficient for servers to construct summaries of computation histories in the presence of incremental re-computation. Experimental results with an implementation of VerSum show that VerSum can be used for a variety of computations, that it can support many clients, and that it can easily keep up with Bitcoin's rate of new blocks with transactions.",
"title": ""
},
{
"docid": "neg:1840335_18",
"text": "Currently in the US, over 97% of food waste is estimated to be buried in landfills. There is nonetheless interest in strategies to divert this waste from landfills as evidenced by a number of programs and policies at the local and state levels, including collection programs for source separated organic wastes (SSO). The objective of this study was to characterize the state-of-the-practice of food waste treatment alternatives in the US and Canada. Site visits were conducted to aerobic composting and two anaerobic digestion facilities, in addition to meetings with officials that are responsible for program implementation and financing. The technology to produce useful products from either aerobic or anaerobic treatment of SSO is in place. However, there are a number of implementation issues that must be addressed, principally project economics and feedstock purity. Project economics varied by region based on landfill disposal fees. Feedstock purity can be obtained by enforcement of contaminant standards and/or manual or mechanical sorting of the feedstock prior to and after treatment. Future SSO diversion will be governed by economics and policy incentives, including landfill organics bans and climate change mitigation policies.",
"title": ""
},
{
"docid": "neg:1840335_19",
"text": "With the proliferation of the internet and increased global access to online media, cybercrime is also occurring at an increasing rate. Currently, both personal users and companies are vulnerable to cybercrime. A number of tools including firewalls and Intrusion Detection Systems (IDS) can be used as defense mechanisms. A firewall acts as a checkpoint which allows packets to pass through according to predetermined conditions. In extreme cases, it may even disconnect all network traffic. An IDS, on the other hand, automates the monitoring process in computer networks. The streaming nature of data in computer networks poses a significant challenge in building IDS. In this paper, a method is proposed to overcome this problem by performing online classification on datasets. In doing so, an incremental naive Bayesian classifier is employed. Furthermore, active learning enables solving the problem using a small set of labeled data points which are often very expensive to acquire. The proposed method includes two groups of actions i.e. offline and online. The former involves data preprocessing while the latter introduces the NADAL online method. The proposed method is compared to the incremental naive Bayesian classifier using the NSL-KDD standard dataset. There are three advantages with the proposed method: (1) overcoming the streaming data challenge; (2) reducing the high cost associated with instance labeling; and (3) improved accuracy and Kappa compared to the incremental naive Bayesian approach. Thus, the method is well-suited to IDS applications.",
"title": ""
}
] |
1840336 | BilBOWA: Fast Bilingual Distributed Representations without Word Alignments | [
{
"docid": "pos:1840336_0",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "pos:1840336_1",
"text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize. com/projects/wordreprs/",
"title": ""
},
{
"docid": "pos:1840336_2",
"text": "We present a simple log-linear reparameterization of IBM Model 2 that overcomes problems arising from Model 1’s strong assumptions and Model 2’s overparameterization. Efficient inference, likelihood evaluation, and parameter estimation algorithms are provided. Training the model is consistently ten times faster than Model 4. On three large-scale translation tasks, systems built using our alignment model outperform IBM Model 4. An open-source implementation of the alignment model described in this paper is available from http://github.com/clab/fast align .",
"title": ""
}
] | [
{
"docid": "neg:1840336_0",
"text": "This article introduces the Swiss Army Menu (SAM), a radial menu that enables a very large number of functions on a single small tactile screen. The design of SAM relies on four different kinds of items, support for navigating in hierarchies of items and a control based on small thumb movements. SAM can thus offer a set of functions so large that it would typically have required a number of widgets that could not have been displayed in a single viewport at the same time.",
"title": ""
},
{
"docid": "neg:1840336_1",
"text": "There is an ever-increasing interest in the development of automatic medical diagnosis systems due to the advancement in computing technology and also to improve the service by medical community. The knowledge about health and disease is required for reliable and accurate medical diagnosis. Diabetic Retinopathy (DR) is one of the most common causes of blindness and it can be prevented if detected and treated early. DR has different signs and the most distinctive are microaneurysm and haemorrhage which are dark lesions and hard exudates and cotton wool spots which are bright lesions. Location and structure of blood vessels and optic disk play important role in accurate detection and classification of dark and bright lesions for early detection of DR. In this article, we propose a computer aided system for the early detection of DR. The article presents algorithms for retinal image preprocessing, blood vessel enhancement and segmentation and optic disk localization and detection which eventually lead to detection of different DR lesions using proposed hybrid fuzzy classifier. The developed methods are tested on four different publicly available databases. The presented methods are compared with recently published methods and the results show that presented methods outperform all others.",
"title": ""
},
{
"docid": "neg:1840336_2",
"text": "Our object oriented programming approach have great ability to improve the programming behavior for modern system and software engineering but it does not give the proper interaction of real world .In real world , programming required powerful interlinking among properties and characteristics towards the various objects. Basically this approach of programming gives the better presentation of object with real world and provide the better relationship among the objects. I have explained the new concept of my neuro object oriented approach .This approach contains many new features like originty , new concept of inheritance , new concept of encapsulation , object relation with dimensions , originty relation with dimensions and time , category of NOOPA like high order thinking object and low order thinking object , differentiation model for achieving the various requirements from the user and a rotational model .",
"title": ""
},
{
"docid": "neg:1840336_3",
"text": "Product recommender systems are often deployed by e-commerce websites to improve user experience and increase sales. However, recommendation is limited by the product information hosted in those e-commerce sites and is only triggered when users are performing e-commerce activities. In this paper, we develop a novel product recommender system called METIS, a MErchanT Intelligence recommender System, which detects users' purchase intents from their microblogs in near real-time and makes product recommendation based on matching the users' demographic information extracted from their public profiles with product demographics learned from microblogs and online reviews. METIS distinguishes itself from traditional product recommender systems in the following aspects: 1) METIS was developed based on a microblogging service platform. As such, it is not limited by the information available in any specific e-commerce website. In addition, METIS is able to track users' purchase intents in near real-time and make recommendations accordingly. 2) In METIS, product recommendation is framed as a learning to rank problem. Users' characteristics extracted from their public profiles in microblogs and products' demographics learned from both online product reviews and microblogs are fed into learning to rank algorithms for product recommendation. We have evaluated our system in a large dataset crawled from Sina Weibo. The experimental results have verified the feasibility and effectiveness of our system. We have also made a demo version of our system publicly available and have implemented a live system which allows registered users to receive recommendations in real time.",
"title": ""
},
{
"docid": "neg:1840336_4",
"text": "The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with resounding success of deep learning in various applications, has brought the interest in generalizing deep learning models to non-Euclidean domains. In this paper, we introduce a new spectral domain convolutional architecture for deep learning on graphs. The core ingredient of our model is a new class of parametric rational complex functions (Cayley polynomials) allowing to efficiently compute spectral filters on graphs that specialize on frequency bands of interest. Our model generates rich spectral filters that are localized in space, scales linearly with the size of the input data for sparsely-connected graphs, and can handle different constructions of Laplacian operators. Extensive experimental results show the superior performance of our approach on spectral image classification, community detection, vertex classification and matrix completion tasks.",
"title": ""
},
{
"docid": "neg:1840336_5",
"text": "Drawing on semi-structured interviews and cognitive mapping with 14 craftspeople, this paper analyzes the socio-technical arrangements of people and tools in the context of workspaces and productivity. Using actor-network theory and the concept of companionability, both of which emphasize the role of human and non-human actants in the socio-technical fabrics of everyday life, I analyze the relationships between people, productivity and technology through the following themes: embodiment, provenance, insecurity, flow and companionability. The discussion section develops these themes further through comparison with rhetoric surrounding the Internet of Things (IoT). By putting the experiences of craftspeople in conversation with IoT rhetoric, I suggest several policy interventions for understanding connectivity and inter-device operability as material, flexible and respectful of human agency.",
"title": ""
},
{
"docid": "neg:1840336_6",
"text": "Despite significant progress in object categorization, in recent years, a number of important challenges remain, mainly, ability to learn from limited labeled data and ability to recognize object classes within large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited sized class vocabularies and typically requires separation between supervised and unsupervised classes, allowing former to inform the latter but not vice versa. We propose the notion of semi-supervised vocabulary-informed learning to alleviate the above mentioned challenges and address problems of supervised, zero-shot and open set recognition using a unified framework. Specifically, we propose a maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms, ensuring that labeled samples are projected closest to their correct prototypes, in the embedding space, than to others. We show that resulting model shows improvements in supervised, zero-shot, and large open set recognition, with up to 310K class vocabulary on AwA and ImageNet datasets.",
"title": ""
},
{
"docid": "neg:1840336_7",
"text": "Two experiments found that when asked to perform the physically exerting tasks of clapping and shouting, people exhibit a sizable decrease in individual effort when performing in groups as compared to when they perform alone. This decrease, which we call social loafing, is in addition to losses due to faulty coordination of group efforts. Social loafing is discussed in terms of its experimental generality and theoretical importance. The widespread occurrence, the negative consequences for society, and some conditions that can minimize social loafing are also explored.",
"title": ""
},
{
"docid": "neg:1840336_8",
"text": "When learning to program, frustrating experiences contribute to negative learning outcomes and poor retention in the field. Defining a common framework that explains why these experiences occur can lead to better interventions and learning mechanisms. To begin constructing such a framework, we asked 45 software developers about the severity of their frustration and to recall their most recent frustrating programming experience. As a result, 67% considered their frustration to be severe. Further, we distilled the reported experiences into 11 categories, which include issues with mapping behaviors to code and broken programming tools. Finally, we discuss future directions for defining our framework and designing future interventions.",
"title": ""
},
{
"docid": "neg:1840336_9",
"text": "Automated story generation is the problem of automatically selecting a sequence of events, actions, or words that can be told as a story. We seek to develop a system that can generate stories by learning everything it needs to know from textual story corpora. To date, recurrent neural networks that learn language models at character, word, or sentence levels have had little success generating coherent stories. We explore the question of event representations that provide a midlevel of abstraction between words and sentences in order to retain the semantic information of the original data while minimizing event sparsity. We present a technique for preprocessing textual story data into event sequences. We then present a technique for automated story generation whereby we decompose the problem into the generation of successive events (event2event) and the generation of natural language sentences from events (event2sentence). We give empirical results comparing different event representations and their effects on event successor generation and the translation of events to natural language.",
"title": ""
},
{
"docid": "neg:1840336_10",
"text": "Unity am e Deelopm nt w ith C# Alan Thorn In Pro Unity Game Development with C#, Alan Thorn, author of Learn Unity for 2D` Game Development and experienced game developer, takes you through the complete C# workflow for developing a cross-platform first person shooter in Unity. C# is the most popular programming language for experienced Unity developers, helping them get the most out of what Unity offers. If you’re already using C# with Unity and you want to take the next step in becoming an experienced, professional-level game developer, this is the book you need. Whether you are a student, an indie developer, or a seasoned game dev professional, you’ll find helpful C# examples of how to build intelligent enemies, create event systems and GUIs, develop save-game states, and lots more. You’ll understand and apply powerful programming concepts such as singleton classes, component based design, resolution independence, delegates, and event driven programming.",
"title": ""
},
{
"docid": "neg:1840336_11",
"text": "Human brain imaging studies have shown that greater amygdala activation to emotional relative to neutral events leads to enhanced episodic memory. Other studies have shown that fearful faces also elicit greater amygdala activation relative to neutral faces. To the extent that amygdala recruitment is sufficient to enhance recollection, these separate lines of evidence predict that recognition memory should be greater for fearful relative to neutral faces. Experiment 1 demonstrated enhanced memory for emotionally negative relative to neutral scenes; however, fearful faces were not subject to enhanced recognition across a variety of delays (15 min to 2 wk). Experiment 2 demonstrated that enhanced delayed recognition for emotional scenes was associated with increased sympathetic autonomic arousal, indexed by the galvanic skin response, relative to fearful faces. These results suggest that while amygdala activation may be necessary, it alone is insufficient to enhance episodic memory formation. It is proposed that a sufficient level of systemic arousal is required to alter memory consolidation resulting in enhanced recollection of emotional events.",
"title": ""
},
{
"docid": "neg:1840336_12",
"text": "Face recognition is a widely used technology with numerous large-scale applications, such as surveillance, social media and law enforcement. There has been tremendous progress in face recognition accuracy over the past few decades, much of which can be attributed to deep learning based approaches during the last five years. Indeed, automated face recognition systems are now believed to surpass human performance in some scenarios. Despite this progress, a crucial question still remains unanswered: given a face representation, how many identities can it resolve? In other words, what is the capacity of the face representation? A scientific basis for estimating the capacity of a given face representation will not only benefit the evaluation and comparison of different face representation methods, but will also establish an upper bound on the scalability of an automatic face recognition system. We cast the face capacity estimation problem under the information theoretic framework of capacity of a Gaussian noise channel. By explicitly accounting for two sources of representational noise: epistemic (model) uncertainty and aleatoric (data) variability, our approach is able to estimate the capacity of any given face representation. To demonstrate the efficacy of our approach, we estimate the capacity of a 128-dimensional deep neural network based face representation, FaceNet [1], and that of the classical Eigenfaces [2] representation of the same dimensionality. Our numerical experiments on unconstrained faces indicate that, (a) our capacity estimation model yields a capacity upper bound of 5.8×108 for FaceNet and 1×100 for Eigenface representation at a false acceptance rate (FAR) of 1%, (b) the capacity of the face representation reduces drastically as you lower the desired FAR (for FaceNet representation; the capacity at FAR of 0.1% and 0.001% is 2.4×106 and 7.0×102, respectively), and (c) the empirical performance of the FaceNet representation is significantly below the theoretical limit.",
"title": ""
},
{
"docid": "neg:1840336_13",
"text": "Nowadays, computer systems are presented in almost all types of human activity and they support any kind of industry as well. Most of these systems are distributed where the communication between nodes is based on computer networks of any kind. Connectivity between system components is the key issue when designing distributed systems, especially systems of industrial informatics. The industrial area requires a wide range of computer communication means, particularly time-constrained and safety-enhancing ones. From fieldbus and industrial Ethernet technologies through wireless and internet-working solutions to standardization issues, there are many aspects of computer networks uses and many interesting research domains. Lots of them are quite sophisticated or even unique. The main goal of this paper is to present the survey of the latest trends in the communication domain of industrial distributed systems and to emphasize important questions as dependability, and standardization. Finally, the general assessment and estimation of the future development is provided. The presentation is based on the abstract description of dataflow within a system.",
"title": ""
},
{
"docid": "neg:1840336_14",
"text": "Measuring similarity between two data objects is a more challenging problem for data mining and knowledge discovery tasks. The traditional clustering algorithms have been mainly stressed on numerical data, the implicit property of which can be exploited to define distance function between the data points to define similarity measure. The problem of similarity becomes more complex when the data is categorical which do not have a natural ordering of values or can be called as non geometrical attributes. Clustering on relational data sets when majority of its attributes are of categorical types makes interesting facts. No earlier work has been done on clustering categorical attributes of relational data set types making use of the property of functional dependency as parameter to measure similarity. This paper is an extension of earlier work on clustering relational data sets where domains are unique and similarity is context based and introduces a new notion of similarity based on dependency of an attribute on other attributes prevalent in the relational data set. This paper also gives a brief overview of popular similarity measures of categorical attributes. This novel similarity measure can be used to apply on tuples and their respective values. The important property of categorical domain is that they have smaller number of attribute values. The similarity measure of relational data sets then can be applied to the smaller data sets for efficient results.",
"title": ""
},
{
"docid": "neg:1840336_15",
"text": "In-memory key/value store (KV-store) is a key building block for many systems like databases and large websites. Two key requirements for such systems are efficiency and availability, which demand a KV-store to continuously handle millions of requests per second. A common approach to availability is using replication, such as primary-backup (PBR), which, however, requires M+1 times memory to tolerate M failures. This renders scarce memory unable to handle useful user jobs.\n This article makes the first case of building highly available in-memory KV-store by integrating erasure coding to achieve memory efficiency, while not notably degrading performance. A main challenge is that an in-memory KV-store has much scattered metadata. A single KV put may cause excessive coding operations and parity updates due to excessive small updates to metadata. Our approach, namely Cocytus, addresses this challenge by using a hybrid scheme that leverages PBR for small-sized and scattered data (e.g., metadata and key), while only applying erasure coding to relatively large data (e.g., value). To mitigate well-known issues like lengthy recovery of erasure coding, Cocytus uses an online recovery scheme by leveraging the replicated metadata information to continuously serve KV requests. To further demonstrate the usefulness of Cocytus, we have built a transaction layer by using Cocytus as a fast and reliable storage layer to store database records and transaction logs. We have integrated the design of Cocytus to Memcached and extend it to support in-memory transactions. Evaluation using YCSB with different KV configurations shows that Cocytus incurs low overhead for latency and throughput, can tolerate node failures with fast online recovery, while saving 33% to 46% memory compared to PBR when tolerating two failures. A further evaluation using the SmallBank OLTP benchmark shows that in-memory transactions can run atop Cocytus with high throughput, low latency, and low abort rate and recover fast from consecutive failures.",
"title": ""
},
{
"docid": "neg:1840336_16",
"text": "One of the challenges in computer vision is how to learn an accurate classifier for a new domain by using labeled images from an old domain under the condition that there is no available labeled images in the new domain. Domain adaptation is an outstanding solution that tackles this challenge by employing available source-labeled datasets, even with significant difference in distribution and properties. However, most prior methods only reduce the difference in subspace marginal or conditional distributions across domains while completely ignoring the source data label dependence information in a subspace. In this paper, we put forward a novel domain adaptation approach, referred to as Enhanced Subspace Distribution Matching. Specifically, it aims to jointly match the marginal and conditional distributions in a kernel principal dimensionality reduction procedure while maximizing the source label dependence in a subspace, thus raising the subspace distribution matching degree. Extensive experiments verify that it can significantly outperform several state-of-the-art methods for cross-domain image classification problems.",
"title": ""
},
{
"docid": "neg:1840336_17",
"text": "This paper presents how to generate questions from given passages using neural networks, where large scale QA pairs are automatically crawled and processed from Community-QA website, and used as training data. The contribution of the paper is 2-fold: First, two types of question generation approaches are proposed, one is a retrieval-based method using convolution neural network (CNN), the other is a generation-based method using recurrent neural network (RNN); Second, we show how to leverage the generated questions to improve existing question answering systems. We evaluate our question generation method for the answer sentence selection task on three benchmark datasets, including SQuAD, MS MARCO, and WikiQA. Experimental results show that, by using generated questions as an extra signal, significant QA improvement can be achieved.",
"title": ""
},
{
"docid": "neg:1840336_18",
"text": "To truly understand the visual world our models should be able not only to recognize images but also generate them. To this end, there has been exciting recent progress on generating images from natural language descriptions. These methods give stunning results on limited domains such as descriptions of birds or flowers, but struggle to faithfully reproduce complex sentences with many objects and relationships. To overcome this limitation we propose a method for generating images from scene graphs, enabling explicitly reasoning about objects and their relationships. Our model uses graph convolution to process input graphs, computes a scene layout by predicting bounding boxes and segmentation masks for objects, and converts the layout to an image with a cascaded refinement network. The network is trained adversarially against a pair of discriminators to ensure realistic outputs. We validate our approach on Visual Genome and COCO-Stuff, where qualitative results, ablations, and user studies demonstrate our method's ability to generate complex images with multiple objects.",
"title": ""
},
{
"docid": "neg:1840336_19",
"text": "– The recent work on cross-country regressions can be compared to looking at “a black cat in a dark room”. Whether or not all this work has accomplished anything on the substantive economic issues is a moot question. But the search for “a black cat ” has led to some progress on the econometric front. The purpose of this paper is to comment on this progress. We discuss the problems with the use of cross-country panel data in the context of two problems: The analysis of economic growth and that of the purchasing power parity (PPP) theory. A propos de l’emploi des méthodes de panel sur des données inter-pays RÉSUMÉ. – Les travaux récents utilisant des régressions inter-pays peuvent être comparés à la recherche d'« un chat noir dans une pièce sans lumière ». La question de savoir si ces travaux ont apporté quelque chose de significatif à la connaissance économique est assez controversée. Mais la recherche du « chat noir » a conduit à quelques progrès en économétrie. L'objet de cet article est de discuter de ces progrès. Les problèmes posés par l'utilisation de panels de pays sont discutés dans deux contextes : celui de la croissance économique et de la convergence d'une part ; celui de la théorie de la parité des pouvoirs d'achat d'autre part. * G.S. MADDALA: Department of Economics, The Ohio State University. I would like to thank M. NERLOVE, P. SEVESTRE and an anonymous referee for helpful comments. Responsability for the omissions and any errors is my own. ANNALES D’ÉCONOMIE ET DE STATISTIQUE. – N° 55-56 – 1999 « The Gods love the obscure and hate the obvious » BRIHADARANYAKA UPANISHAD",
"title": ""
}
] |
1840337 | Recorded Behavior as a Valuable Resource for Diagnostics in Mobile Phone Addiction: Evidence from Psychoinformatics | [
{
"docid": "pos:1840337_0",
"text": "OBJECTIVE\nThe aim of this study was to develop a self-diagnostic scale that could distinguish smartphone addicts based on the Korean self-diagnostic program for Internet addiction (K-scale) and the smartphone's own features. In addition, the reliability and validity of the smartphone addiction scale (SAS) was demonstrated.\n\n\nMETHODS\nA total of 197 participants were selected from Nov. 2011 to Jan. 2012 to accomplish a set of questionnaires, including SAS, K-scale, modified Kimberly Young Internet addiction test (Y-scale), visual analogue scale (VAS), and substance dependence and abuse diagnosis of DSM-IV. There were 64 males and 133 females, with ages ranging from 18 to 53 years (M = 26.06; SD = 5.96). Factor analysis, internal-consistency test, t-test, ANOVA, and correlation analysis were conducted to verify the reliability and validity of SAS.\n\n\nRESULTS\nBased on the factor analysis results, the subscale \"disturbance of reality testing\" was removed, and six factors were left. The internal consistency and concurrent validity of SAS were verified (Cronbach's alpha = 0.967). SAS and its subscales were significantly correlated with K-scale and Y-scale. The VAS of each factor also showed a significant correlation with each subscale. In addition, differences were found in the job (p<0.05), education (p<0.05), and self-reported smartphone addiction scores (p<0.001) in SAS.\n\n\nCONCLUSIONS\nThis study developed the first scale of the smartphone addiction aspect of the diagnostic manual. This scale was proven to be relatively reliable and valid.",
"title": ""
},
{
"docid": "pos:1840337_1",
"text": "For the first time in history, it is possible to study human behavior on great scale and in fine detail simultaneously. Online services and ubiquitous computational devices, such as smartphones and modern cars, record our everyday activity. The resulting Big Data offers unprecedented opportunities for tracking and analyzing behavior. This paper hypothesizes the applicability and impact of Big Data technologies in the context of psychometrics both for research and clinical applications. It first outlines the state of the art, including the severe shortcomings with respect to quality and quantity of the resulting data. It then presents a technological vision, comprised of (i) numerous data sources such as mobile devices and sensors, (ii) a central data store, and (iii) an analytical platform, employing techniques from data mining and machine learning. To further illustrate the dramatic benefits of the proposed methodologies, the paper then outlines two current projects, logging and analyzing smartphone usage. One such study attempts to thereby quantify severity of major depression dynamically; the other investigates (mobile) Internet Addiction. Finally, the paper addresses some of the ethical issues inherent to Big Data technologies. In summary, the proposed approach is about to induce the single biggest methodological shift since the beginning of psychology or psychiatry. The resulting range of applications will dramatically shape the daily routines of researches and medical practitioners alike. Indeed, transferring techniques from computer science to psychiatry and psychology is about to establish Psycho-Informatics, an entire research direction of its own.",
"title": ""
}
] | [
{
"docid": "neg:1840337_0",
"text": "A boosted convolutional neural network (BCNN) system is proposed to enhance the pedestrian detection performance in this work. Being inspired by the classic boosting idea, we develop a weighted loss function that emphasizes challenging samples in training a convolutional neural network (CNN). Two types of samples are considered challenging: 1) samples with detection scores falling in the decision boundary, and 2) temporally associated samples with inconsistent scores. A weighting scheme is designed for each of them. Finally, we train a boosted fusion layer to benefit from the integration of these two weighting schemes. We use the Fast-RCNN as the baseline, and test the corresponding BCNN on the Caltech pedestrian dataset in the experiment, and show a significant performance gain of the BCNN over its baseline.",
"title": ""
},
{
"docid": "neg:1840337_1",
"text": "Mobile online social networks (OSNs) are emerging as the popular mainstream platform for information and content sharing among people. In order to provide Quality of Experience (QoE) support for mobile OSN services, in this paper we propose a socially-driven learning-based framework, namely Spice, for media content prefetching to reduce the access delay and enhance mobile user's satisfaction. Through a large-scale data-driven analysis over real-life mobile Twitter traces from over 17,000 users during a period of five months, we reveal that the social friendship has a great impact on user's media content click behavior. To capture this effect, we conduct social friendship clustering over the set of user's friends, and then develop a cluster-based Latent Bias Model for socially-driven learning-based prefetching prediction. We then propose a usage-adaptive prefetching scheduling scheme by taking into account that different users may possess heterogeneous patterns in the mobile OSN app usage. We comprehensively evaluate the performance of Spice framework using trace-driven emulations on smartphones. Evaluation results corroborate that the Spice can achieve superior performance, with an average 67.2% access delay reduction at the low cost of cellular data and energy consumption. Furthermore, by enabling users to offload their machine learning procedures to a cloud server, our design can achieve speed-up of a factor of 1000 over the local data training execution on smartphones.",
"title": ""
},
{
"docid": "neg:1840337_2",
"text": "Received: 2013-04-15 Accepted: 2013-05-13 Accepted after one revision by Prof. Dr. Sinz. Published online: 2013-06-14 This article is also available in German in print and via http://www. wirtschaftsinformatik.de: Blohm I, Leimeister JM (2013) Gamification. Gestaltung IT-basierter Zusatzdienstleistungen zur Motivationsunterstützung und Verhaltensänderung. WIRTSCHAFTSINFORMATIK. doi: 10.1007/s11576-013-0368-0.",
"title": ""
},
{
"docid": "neg:1840337_3",
"text": "The agricultural productivity of India is gradually declining due to destruction of crops by various natural calamities and the crop rotation process being affected by irregular climate patterns. Also, the interest and efforts put by farmers lessen as they grow old which forces them to sell their agricultural lands, which automatically affects the production of agricultural crops and dairy products. This paper mainly focuses on the ways by which we can protect the crops during an unavoidable natural disaster and implement technology induced smart agro-environment, which can help the farmer manage large fields with less effort. Three common issues faced during agricultural practice are shearing furrows in case of excess rain or flood, manual watering of plants and security against animal grazing. This paper provides a solution for these problems by helping farmer monitor and control various activities through his mobile via GSM and DTMF technology in which data is transmitted from various sensors placed in the agricultural field to the controller and the status of the agricultural parameters are notified to the farmer using which he can take decisions accordingly. The main advantage of this system is that it is semi-automated i.e. the decision is made by the farmer instead of fully automated decision that results in precision agriculture. It also overcomes the existing traditional practices that require high money investment, energy, labour and time.",
"title": ""
},
{
"docid": "neg:1840337_4",
"text": "OBJECTIVE\nThis study aims to compare how national guidelines approach the management of obesity in reproductive age women.\n\n\nSTUDY DESIGN\nWe conducted a search for national guidelines in the English language on the topic of obesity surrounding the time of a pregnancy. We identified six primary source documents and several secondary source documents from five countries. Each document was then reviewed to identify: (1) statements acknowledging increased health risks related to obesity and reproductive outcomes, (2) recommendations for the management of obesity before, during, or after pregnancy.\n\n\nRESULTS\nAll guidelines cited an increased risk for miscarriage, birth defects, gestational diabetes, hypertension, fetal growth abnormalities, cesarean sections, difficulty with anesthesia, postpartum hemorrhage, and obesity in offspring. Counseling on the risks of obesity and weight loss before pregnancy were universal recommendations. There were substantial differences in the recommendations pertaining to gestational weight gain goals, nutrient and vitamin supplements, screening for gestational diabetes, and thromboprophylaxis among the guidelines.\n\n\nCONCLUSION\nStronger evidence from randomized trials is needed to devise consistent recommendations for obese reproductive age women. This research may also assist clinicians in overcoming one of the many obstacles they encounter when providing care to obese women.",
"title": ""
},
{
"docid": "neg:1840337_5",
"text": "Equivalent time oscilloscopes are widely used as an alternative to real-time oscilloscopes when high timing resolution is needed. For their correct operation, they need the trigger signal to be accurately aligned to the incoming data, which is achieved by the use of a clock and data recovery circuit (CDR). In this paper, a new multilevel bang-bang phase detector (BBPD) for CDRs is presented; the proposed phase detection scheme disregards samples taken close to the data transitions for the calculation of the phase difference between the inputs, thus eliminating metastability, one of the main issues hindering the performance of BBPDs.",
"title": ""
},
{
"docid": "neg:1840337_6",
"text": "GF (Grammatical Framework) is a grammar formalism based on the distinction between abstract and concrete syntax. An abstract syntax is a free algebra of trees, and a concrete syntax is a mapping from trees to nested records of strings and features. These mappings are naturally defined as functions in a functional programming language; the GF language provides the customary functional programming constructs such as algebraic data types, pattern matching, and higher-order functions, which enable productive grammar writing and linguistic generalizations. Given the seemingly transformational power of the GF language, its computational properties are not obvious. However, all grammars written in GF can be compiled into a simple and austere core language, Canonical GF (CGF). CGF is well suited for implementing parsing and generation with grammars, as well as for proving properties of GF. This paper gives a concise description of both the core and the source language, the algorithm used in compiling GF to CGF, and some back-end optimizations on CGF.",
"title": ""
},
{
"docid": "neg:1840337_7",
"text": "This paper presents a Phantom Go program. It is based on a MonteCarlo approach. The program plays Phantom Go at an intermediate level.",
"title": ""
},
{
"docid": "neg:1840337_8",
"text": "Cell division in eukaryotes requires extensive architectural changes of the nuclear envelope (NE) to ensure that segregated DNA is finally enclosed in a single cell nucleus in each daughter cell. Higher eukaryotic cells have evolved 'open' mitosis, the most extreme mechanism to solve the problem of nuclear division, in which the NE is initially completely disassembled and then reassembled in coordination with DNA segregation. Recent progress in the field has now started to uncover mechanistic and molecular details that underlie the changes in NE reorganization during open mitosis. These studies reveal a tight interplay between NE components and the mitotic machinery.",
"title": ""
},
{
"docid": "neg:1840337_9",
"text": "We explore the use of convolutional neural networks for the semantic classification of remote sensing scenes. Two recently proposed architectures, CaffeNet and GoogLeNet, are adopted, with three different learning modalities. Besides conventional training from scratch, we resort to pre-trained networks that are only fine-tuned on the target data, so as to avoid overfitting problems and reduce design time. Experiments on two remote sensing datasets, with markedly different characteristics, testify on the effectiveness and wide applicability of the proposed solution, which guarantees a significant performance improvement over all state-of-the-art references.",
"title": ""
},
{
"docid": "neg:1840337_10",
"text": "The evolutionary origin of the eukaryotic cell represents an enigmatic, yet largely incomplete, puzzle. Several mutually incompatible scenarios have been proposed to explain how the eukaryotic domain of life could have emerged. To date, convincing evidence for these scenarios in the form of intermediate stages of the proposed eukaryogenesis trajectories is lacking, presenting the emergence of the complex features of the eukaryotic cell as an evolutionary deus ex machina. However, recent advances in the field of phylogenomics have started to lend support for a model that places a cellular fusion event at the basis of the origin of eukaryotes (symbiogenesis), involving the merger of an as yet unknown archaeal lineage that most probably belongs to the recently proposed 'TACK superphylum' (comprising Thaumarchaeota, Aigarchaeota, Crenarchaeota and Korarchaeota) with an alphaproteobacterium (the protomitochondrion). Interestingly, an increasing number of so-called ESPs (eukaryotic signature proteins) is being discovered in recently sequenced archaeal genomes, indicating that the archaeal ancestor of the eukaryotic cell might have been more eukaryotic in nature than presumed previously, and might, for example, have comprised primitive phagocytotic capabilities. In the present paper, we review the evolutionary transition from archaeon to eukaryote, and propose a new model for the emergence of the eukaryotic cell, the 'PhAT (phagocytosing archaeon theory)', which explains the emergence of the cellular and genomic features of eukaryotes in the light of a transiently complex phagocytosing archaeon.",
"title": ""
},
{
"docid": "neg:1840337_11",
"text": "RNA turnover is an integral part of cellular RNA homeostasis and gene expression regulation. Whereas the cytoplasmic control of protein-coding mRNA is often the focus of study, we discuss here the less appreciated role of nuclear RNA decay systems in controlling RNA polymerase II (RNAPII)-derived transcripts. Historically, nuclear RNA degradation was found to be essential for the functionalization of transcripts through their proper maturation. Later, it was discovered to also be an important caretaker of nuclear hygiene by removing aberrant and unwanted transcripts. Recent years have now seen a set of new protein complexes handling a variety of new substrates, revealing functions beyond RNA processing and the decay of non-functional transcripts. This includes an active contribution of nuclear RNA metabolism to the overall cellular control of RNA levels, with mechanistic implications during cellular transitions. RNA is controlled at various stages of transcription and processing to achieve appropriate gene regulation. Whereas much research has focused on the cytoplasmic control of RNA levels, this Review discusses our emerging appreciation of the importance of nuclear RNA regulation, including the molecular machinery involved in nuclear RNA decay, how functional RNAs bypass degradation and roles for nuclear RNA decay in physiology and disease.",
"title": ""
},
{
"docid": "neg:1840337_12",
"text": "PURPOSE AND DESIGN\nSnack and Relax® (S&R), a program providing healthy snacks and holistic relaxation modalities to hospital employees, was evaluated for immediate impact. A cross-sectional survey was then conducted to assess the professional quality of life (ProQOL) in registered nurses (RNs); compare S&R participants/nonparticipants on compassion satisfaction (CS), burnout, and secondary traumatic stress (STS); and identify situations in which RNs experienced compassion fatigue or burnout and the strategies used to address these situations.\n\n\nMETHOD\nPre- and post vital signs and self-reported stress were obtained from S&R attendees (N = 210). RNs completed the ProQOL Scale measuring CS, burnout, and STS (N = 158).\n\n\nFINDINGS\nSignificant decreases in self-reported stress, respirations, and heart rate were found immediately after S&R. Low CS was noted in 28.5% of participants, 25.3% had high burnout, and 23.4% had high STS. S&R participants and nonparticipants did not differ on any of the ProQOL scales. Situations in which participants experienced compassion fatigue/burnout were categorized as patient-related, work-related, and personal/family-related. Strategies to address these situations were holistic and stress reducing.\n\n\nCONCLUSION\nProviding holistic interventions such as S&R for nurses in the workplace may alleviate immediate feelings of stress and provide a moment of relaxation in the workday.",
"title": ""
},
{
"docid": "neg:1840337_13",
"text": "Although touch is one of the most neglected modalities of communication, several lines of research bear on the important communicative functions served by the modality. The authors highlighted the importance of touch by reviewing and synthesizing the literatures pertaining to the communicative functions served by touch among humans, nonhuman primates, and rats. In humans, the authors focused on the role that touch plays in emotional communication, attachment, bonding, compliance, power, intimacy, hedonics, and liking. In nonhuman primates, the authors examined the relations among touch and status, stress, reconciliation, sexual relations, and attachment. In rats, the authors focused on the role that touch plays in emotion, learning and memory, novelty seeking, stress, and attachment. The authors also highlighted the potential phylogenetic and ontogenetic continuities and discussed suggestions for future research.",
"title": ""
},
{
"docid": "neg:1840337_14",
"text": "The pervasiveness of cell phones and mobile social media applications is generating vast amounts of geolocalized user-generated content. Since the addition of geotagging information, Twitter has become a valuable source for the study of human dynamics. Its analysis is shedding new light not only on understanding human behavior but also on modeling the way people live and interact in their urban environments. In this paper, we evaluate the use of geolocated tweets as a complementary source of information for urban planning applications. Our contributions are focussed in two urban planing areas: (1) a technique to automatically determine land uses in a specific urban area based on tweeting patterns, and (2) a technique to automatically identify urban points of interest as places with high activity of tweets. We apply our techniques in Manhattan (NYC) using 49 days of geolocated tweets and validate them using land use and landmark information provided by various NYC departments. Our results indicate that geolocated tweets are a powerful and dynamic data source to characterize urban environments.",
"title": ""
},
{
"docid": "neg:1840337_15",
"text": "With the rise of social media and advancements in AI technology, human-bot interaction will soon be commonplace. In this paper we explore human-bot interaction in STACK OVERFLOW, a question and answer website for developers. For this purpose, we built a bot emulating an ordinary user answering questions concerning the resolution of git error messages. In a first run this bot impersonated a human, while in a second run the same bot revealed its machine identity. Despite being functionally identical, the two bot variants elicited quite different reactions.",
"title": ""
},
{
"docid": "neg:1840337_16",
"text": "The accomplishments to date on the development of automatic vehicle control (AVC) technology in the Program on Advanced Technology for the Highway (PATH) at the University of California, Berkeley, are summarized. The basic prqfiiples and assumptions underlying the PATH work are identified, ‘followed by explanations of the work on automating vehicle lateral (steering) and longitudinal (spacing and speed) control. For both lateral and longitudinal control, the modeling of plant dynamics is described first, followed by development of the additional subsystems needed (communications, reference/sensor systems) and the derivation of the control laws. Plans for testing on vehicles in both near and long term are then discussed.",
"title": ""
},
{
"docid": "neg:1840337_17",
"text": "In a variety of Network-based Intrusion Detection System (NIDS) applications, one desires to detect groups of unknown attack (e.g., botnet) packet-flows, with a group potentially manifesting its atypicality (relative to a known reference “normal”/null model) on a low-dimensional subset of the full measured set of features used by the IDS. What makes this anomaly detection problem quite challenging is that it is a priori unknown which (possibly sparse) subset of features jointly characterizes a particular application, especially one that has not been seen before, which thus represents an unknown behavioral class (zero-day threat). Moreover, nowadays botnets have become evasive, evolving their behavior to avoid signature-based IDSes. In this work, we apply a novel active learning (AL) framework for botnet detection, facilitating detection of unknown botnets (assuming no ground truth examples of same). We propose a new anomaly-based feature set that captures the informative features and exploits the sequence of packet directions in a given flow. Experiments on real world network traffic data, including several common Zeus botnet instances, demonstrate the advantage of our proposed features and AL system.",
"title": ""
},
{
"docid": "neg:1840337_18",
"text": "The paper analyses potentials, challenges and problems of the rural tourism from the point of view of its impact on sustainable rural development. It explores alternative sources of income for rural people by means of tourism and investigates effects of the rural tourism on agricultural production in local rural communities. The aim is to identify the existing and potential tourist attractions within the rural areas in Southern Russia and to provide solutions to be introduced in particular rural settlements in order to make them attractive for tourists. The paper includes the elaboration and testing of a methodology for evaluating the rural tourism potentials using the case of rural settlements of Stavropol Krai, Russia. The paper concludes with a ranking of the selected rural settlements according to their rural tourist capacity and substantiation of the tourism models to be implemented to ensure a sustainable development of the considered rural areas.",
"title": ""
}
] |
1840338 | A Systematic Review of the Use of Blockchain in Healthcare | [
{
"docid": "pos:1840338_0",
"text": "Blockchains as a technology emerged to facilitate money exchange transactions and eliminate the need for a trusted third party to notarize and verify such transactions as well as protect data security and privacy. New structures of Blockchains have been designed to accommodate the need for this technology in other fields such as e-health, tourism and energy. This paper is concerned with the use of Blockchains in managing and sharing electronic health and medical records to allow patients, hospitals, clinics, and other medical stakeholder to share data amongst themselves, and increase interoperability. The selection of the Blockchains used architecture depends on the entities participating in the constructed chain network. Although the use of Blockchains may reduce redundancy and provide caregivers with consistent records about their patients, it still comes with few challenges which could infringe patients' privacy, or potentially compromise the whole network of stakeholders. In this paper, we investigate different Blockchains structures, look at existing challenges and provide possible solutions. We focus on challenges that may expose patients' privacy and the resiliency of Blockchains to possible attacks.",
"title": ""
},
{
"docid": "pos:1840338_1",
"text": "Permissionless blockchain-based cryptocurrencies commonly use proof-of-work (PoW) or proof-of-stake (PoS) to ensure their security, e.g. to prevent double spending attacks. However, both approaches have disadvantages: PoW leads to massive amounts of wasted electricity and re-centralization, whereas major stakeholders in PoS might be able to create a monopoly. In this work, we propose proof-of-personhood (PoP), a mechanism that binds physical entities to virtual identities in a way that enables accountability while preserving anonymity. Afterwards we introduce PoPCoin, a new cryptocurrency, whose consensus mechanism leverages PoP to eliminate the dis-advantages of PoW and PoS while ensuring security. PoPCoin leads to a continuously fair and democratic wealth creation process which paves the way for an experimental basic income infrastructure.",
"title": ""
}
] | [
{
"docid": "neg:1840338_0",
"text": "The next major step in the evolution of LTE targets the rapidly increasing demand for mobile broadband services and traffic volumes. One of the key technologies is a new carrier type, referred to in this article as a Lean Carrier, an LTE carrier with minimized control channel overhead and cell-specific reference signals. The Lean Carrier can enhance spectral efficiency, increase spectrum flexibility, and reduce energy consumption. This article provides an overview of the motivations and main use cases of the Lean Carrier. Technical challenges are highlighted, and design options are discussed; finally, a performance evaluation quantifies the benefits of the Lean Carrier.",
"title": ""
},
{
"docid": "neg:1840338_1",
"text": "We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. While its image counterpart, the image-to-image translation problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature. Without modeling temporal dynamics, directly applying existing image synthesis approaches to an input video often results in temporally incoherent videos of low visual quality. In this paper, we propose a video-to-video synthesis approach under the generative adversarial learning framework. Through carefully-designed generators and discriminators, coupled with a spatio-temporal adversarial objective, we achieve high-resolution, photorealistic, temporally coherent video results on a diverse set of input formats including segmentation masks, sketches, and poses. Experiments on multiple benchmarks show the advantage of our method compared to strong baselines. In particular, our model is capable of synthesizing 2K resolution videos of street scenes up to 30 seconds long, which significantly advances the state-of-the-art of video synthesis. Finally, we apply our method to future video prediction, outperforming several competing systems. Code, models, and more results are available at our website.",
"title": ""
},
{
"docid": "neg:1840338_2",
"text": "Linear variable differential transformer (LVDT) sensors are widely used in hydraulic and pneumatic mechatronic systems for measuring physical quantities like displacement, force or pressure. The LVDT sensor consists of two magnetic coupled coils with a common core and this sensor converts the displacement of core into reluctance variation of magnetic circuit. LVDT sensors combines good accuracy (0.1 % error) with low cost, but they require relative complex electronics. Standard electronics for LVDT sensor conditioning is analog $the coupled coils constitute an inductive half-bridge supplied with 5 kHz sinus excitation from a quadrate oscillator. The output phase span is amplified and synchronous demodulated. This analog technology works well but has its drawbacks - hard to adjust, many components and packages, no connection to computer systems. To eliminate all these disadvantages, our team from \"Politehnica\" University of Bucharest has developed a LVDT signal conditioner using system on chip microcontroller MSP430F149 from Texas Instruments. This device integrates all peripherals required for LVDT signal conditioning (pulse width modulation modules, analog to digital converter, timers, enough memory resources and processing power) and offers also excellent low-power options. Resulting electronic module is a one-chip solution made entirely in SMD technology and its small dimensions allow its integration into sensor's body. Present paper focuses on specific issues of this digital solution for LVDT conditioning and compares it with classic analog solution from different points of view: error curve, power consumption, communication options, dimensions and production cost. Microcontroller software (firmware) and digital signal conditioning techniques for LVDT are also analyzed. Use of system on chip devices for signal conditioning allows realization of low cost compact transducers with same or better performances than their analog counterparts, but with extra options like serial communication channels, self-calibration, local storage of measured values and fault detection",
"title": ""
},
{
"docid": "neg:1840338_3",
"text": "The purpose of this paper is twofold. First, we give a survey of the known methods of constructing lattices in complex hyperbolic space. Secondly, we discuss some of the lattices constructed by Deligne and Mostow and by Thurston in detail. In particular, we give a unified treatment of the constructions of fundamental domains and we relate this to other properties of these lattices.",
"title": ""
},
{
"docid": "neg:1840338_4",
"text": "The development of structural health monitoring (SHM) technology has evolved for over fifteen years in Hong Kong since the implementation of the “Wind And Structural Health Monitoring System (WASHMS)” on the suspension Tsing Ma Bridge in 1997. Five cable-supported bridges in Hong Kong, namely the Tsing Ma (suspension) Bridge, the Kap Shui Mun (cable-stayed) Bridge, the Ting Kau (cable-stayed) Bridge, the Western Corridor (cable-stayed) Bridge, and the Stonecutters (cable-stayed) Bridge, have been instrumented with sophisticated long-term SHM systems. These SHM systems mainly focus on the tracing of structural behavior and condition of the long-span bridges over their lifetime. Recently, a structural health monitoring and maintenance management system (SHM&MMS) has been designed and will be implemented on twenty-one sea-crossing viaduct bridges with a total length of 9,283 km in the Hong Kong Link Road (HKLR) of the Hong Kong – Zhuhai – Macao Bridge of which the construction commenced in mid-2012. The SHM&MMS gives more emphasis on durability monitoring of the reinforced concrete viaduct bridges in marine environment and integration of the SHM system and bridge maintenance management system. It is targeted to realize the transition from traditional corrective and preventive maintenance to condition-based maintenance (CBM) of in-service bridges. The CBM uses real-time and continuous monitoring data and monitoring-derived information on the condition of bridges (including structural performance and deterioration mechanisms) to identify when the actual maintenance is necessary and how cost-effective maintenance can be conducted. This paper outlines how to incorporate SHM technology into bridge maintenance strategy to realize CBM management of bridges.",
"title": ""
},
{
"docid": "neg:1840338_5",
"text": "Many problems in AI are simplified by clever representations of sensory or symbolic input. How to discover such representations automatically, from large amounts of unlabeled data, remains a fundamental challenge. The goal of statistical methods for dimensionality reduction is to detect and discover low dimensional structure in high dimensional data. In this paper, we review a recently proposed algorithm— maximum variance unfolding—for learning faithful low dimensional representations of high dimensional data. The algorithm relies on modern tools in convex optimization that are proving increasingly useful in many areas of machine learning.",
"title": ""
},
{
"docid": "neg:1840338_6",
"text": "Although social skills group interventions for children with autism are common in outpatient clinic settings, little research has been conducted to determine the efficacy of such treatments. This study examined the effectiveness of an outpatient clinic-based social skills group intervention with four high-functioning elementary-aged children with autism. The group was designed to teach specific social skills, including greeting, conversation, and play skills in a brief therapy format (eight sessions total). At the end of each skills-training session, children with autism were observed in play sessions with typical peers. Typical peers received peer education about ways to interact with children with autism. Results indicate that a social skills group implemented in an outpatient clinic setting was effective in improving greeting and play skills, with less clear improvements noted in conversation skills. In addition, children with autism reported increased feelings of social support from classmates at school following participation in the group. However, parent report data of greeting, conversation, and play skills outside of the clinic setting indicated significant improvements in only greeting skills. Thus, although the clinic-based intervention led to improvements in social skills, fewer changes were noted in the generalization to nonclinic settings.",
"title": ""
},
{
"docid": "neg:1840338_7",
"text": "We extend classic review mining work by building a binary classifier that predicts whether a review of a documentary film was written by an expert or a layman with 90.70% accuracy (F1 score), and compare the characteristics of the predicted classes. A variety of standard lexical and syntactic features was used for this supervised learning task. Our results suggest that experts write comparatively lengthier and more detailed reviews that feature more complex grammar and a higher diversity in their vocabulary. Layman reviews are more subjective and contextualized in peoples’ everyday lives. Our error analysis shows that laymen are about twice as likely to be mistaken as experts than vice versa. We argue that the type of author might be a useful new feature for improving the accuracy of predicting the rating, helpfulness and authenticity of reviews. Finally, the outcomes of this work might help researchers and practitioners in the field of impact assessment to gain a more fine-grained understanding of the perception of different types of media consumers and reviewers of a topic, genre or information product.",
"title": ""
},
{
"docid": "neg:1840338_8",
"text": "Cancer and other chronic diseases have constituted (and will do so at an increasing pace) a significant portion of healthcare costs in the United States in recent years. Although prior research has shown that diagnostic and treatment recommendations might be altered based on the severity of comorbidities, chronic diseases are still being investigated in isolation from one another in most cases. To illustrate the significance of concurrent chronic diseases in the course of treatment, this study uses SEER’s cancer data to create two comorbid data sets: one for breast and female genital cancers and another for prostate and urinal cancers. Several popular machine learning techniques are then applied to the resultant data sets to build predictive models. Comparison of the results shows that having more information about comorbid conditions of patients can improve models’ predictive power, which in turn, can help practitioners make better diagnostic and treatment decisions. Therefore, proper identification, recording, and use of patients’ comorbidity status can potentially lower treatment costs and ease the healthcare related economic challenges.",
"title": ""
},
{
"docid": "neg:1840338_9",
"text": "Community structure is one of the key properties of real-world complex networks. It plays a crucial role in their behaviors and topology. While an important work has been done on the issue of community detection, very little attention has been devoted to the analysis of the community structure. In this paper, we present an extensive investigation of the overlapping community network deduced from a large-scale co-authorship network. The nodes of the overlapping community network rep-resent the functional communities of the co-authorship network, and the links account for the fact that communities share some nodes in the co-authorship network. The comparative evaluation of the topological properties of these two networks shows that they share similar topological properties. These results are very interesting. Indeed, the network of communities seems to be a good representative of the original co-authorship network. With its smaller size, it may be more practical in order to realize various analyses that cannot be performed easily in large-scale real-world networks.",
"title": ""
},
{
"docid": "neg:1840338_10",
"text": "Convolutional Neural Network (CNN) is one of the most effective neural network model for many classification tasks, such as voice recognition, computer vision and biological information processing. Unfortunately, Computation of CNN is both memory-intensive and computation-intensive, which brings a huge challenge to the design of the hardware accelerators. A large number of hardware accelerators for CNN inference are designed by the industry and the academia. Most of the engines are based on 32-bit floating point matrix multiplication, where the data precision is over-provisioned for inference job and the hardware cost are too high. In this paper, a 8-bit fixed-point LeNet inference engine (Laius) is designed and implemented on FPGA. In order to reduce the consumption of FPGA resource, we proposed a methodology to find the optimal bit-length for weight and bias in LeNet, which results in using 8-bit fixed point for most of the computation and using 16-bit fixed point for other computation. The PE (Processing Element) design is proposed. Pipelining and PE tiling technique is use to improve the performance of the inference engine. By theoretical analysis, we came to the conclusion that DSP resource in FPGA is the most critical resource, it should be carefully used during the design process. We implement the inference engine on Xilinx 485t FPGA. Experiment result shows that the designed LeNet inference engine can achieve 44.9 Gops throughput with 8-bit fixed-point operation after pipelining. Moreover, with only 1% loss of accuracy, the 8-bit fixed-point engine largely reduce 31.43% in latency, 87.01% in LUT consumption, 66.50% in BRAM consumption, 65.11% in DSP consumption and 47.95% reduction in power compared to a 32-bit fixed-point inference engine with the same structure.",
"title": ""
},
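The preceding passage describes searching for the shortest fixed-point bit-length that preserves LeNet accuracy. A minimal sketch of that general idea is given below; it is not the Laius implementation, and the rounding mode, symmetric range, and mean-squared-error selection criterion are illustrative assumptions rather than details taken from the passage.

```python
# Illustrative sketch (not the Laius implementation): quantize trained weights to a
# signed fixed-point grid and pick the fractional bit-length with the lowest error.
import numpy as np

def to_fixed_point(x, total_bits=8, frac_bits=6):
    """Round x onto a signed fixed-point grid with `frac_bits` fractional bits."""
    scale = 2 ** frac_bits
    lo = -(2 ** (total_bits - 1))        # most negative representable code
    hi = 2 ** (total_bits - 1) - 1       # most positive representable code
    codes = np.clip(np.round(x * scale), lo, hi)
    return codes / scale                 # back to real values on the grid

def best_frac_bits(weights, total_bits=8):
    """Fractional bit-length minimizing mean squared rounding error (assumed criterion)."""
    return min(range(total_bits),
               key=lambda f: np.mean((weights - to_fixed_point(weights, total_bits, f)) ** 2))

if __name__ == "__main__":
    w = np.random.randn(1000) * 0.5      # stand-in for a layer's trained weights
    print("chosen fractional bits:", best_frac_bits(w, total_bits=8))
```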
{
"docid": "neg:1840338_11",
"text": "Business Intelligence (BI) deals with integrated approaches to management support. Currently, there are constraints to BI adoption and a new era of analytic data management for business intelligence these constraints are the integrated infrastructures that are subject to BI have become complex, costly, and inflexible, the effort required consolidating and cleansing enterprise data and Performance impact on existing infrastructure / inadequate IT infrastructure. So, in this paper Cloud computing will be used as a possible remedy for these issues. We will represent a new environment atmosphere for the business intelligence to make the ability to shorten BI implementation windows, reduced cost for BI programs compared with traditional on-premise BI software, Ability to add environments for testing, proof-of-concepts and upgrades, offer users the potential for faster deployments and increased flexibility. Also, Cloud computing enables organizations to analyze terabytes of data faster and more economically than ever before. Business intelligence (BI) in the cloud can be like a big puzzle. Users can jump in and put together small pieces of the puzzle but until the whole thing is complete the user will lack an overall view of the big picture. In this paper reading each section will fill in a piece of the puzzle.",
"title": ""
},
{
"docid": "neg:1840338_12",
"text": "A number of sensor applications in recent years collect data which can be directly associated with human interactions. Some examples of such applications include GPS applications on mobile devices, accelerometers, or location sensors designed to track human and vehicular traffic. Such data lends itself to a variety of rich applications in which one can use the sensor data in order to model the underlying relationships and interactions. This requires the development of trajectory mining techniques, which can mine the GPS data for interesting social patterns. It also leads to a number of challenges, since such data may often be private, and it is important to be able to perform the mining process without violating the privacy of the users. Given the open nature of the information contributed by users in social sensing applications, this also leads to issues of trust in making inferences from the underlying data. In this chapter, we provide a broad survey of the work in this important and rapidly emerging field. We also discuss the key problems which arise in the context of this important field and the corresponding",
"title": ""
},
{
"docid": "neg:1840338_13",
"text": "BACKGROUND AND OBJECTIVES\nBecause skin cancer affects millions of people worldwide, computational methods for the segmentation of pigmented skin lesions in images have been developed in order to assist dermatologists in their diagnosis. This paper aims to present a review of the current methods, and outline a comparative analysis with regards to several of the fundamental steps of image processing, such as image acquisition, pre-processing and segmentation.\n\n\nMETHODS\nTechniques that have been proposed to achieve these tasks were identified and reviewed. As to the image segmentation task, the techniques were classified according to their principle.\n\n\nRESULTS\nThe techniques employed in each step are explained, and their strengths and weaknesses are identified. In addition, several of the reviewed techniques are applied to macroscopic and dermoscopy images in order to exemplify their results.\n\n\nCONCLUSIONS\nThe image segmentation of skin lesions has been addressed successfully in many studies; however, there is a demand for new methodologies in order to improve the efficiency.",
"title": ""
},
{
"docid": "neg:1840338_14",
"text": "Botnet is most widespread and occurs commonly in today's cyber attacks, resulting in serious threats to our network assets and organization's properties. Botnets are collections of compromised computers (Bots) which are remotely controlled by its originator (BotMaster) under a common Commond-and-Control (C & C) infrastructure. They are used to distribute commands to the Bots for malicious activities such as distributed denial-of-service (DDoS) attacks, sending large amount of SPAM and other nefarious purposes. Understanding the Botnet C & C channels is a critical component to precisely identify, detect, and mitigate the Botnets threats. Therefore, in this paper we provide a classification of Botnets C & C channels and evaluate well-known protocols (e.g. IRC, HTTP, and P2P) which are being used in each of them.",
"title": ""
},
{
"docid": "neg:1840338_15",
"text": "We present a convolutional-neural-network-based system that faithfully colorizes black and white photographic images without direct human assistance. We explore various network architectures, objectives, color spaces, and problem formulations. The final classification-based model we build generates colorized images that are significantly more aesthetically-pleasing than those created by the baseline regression-based model, demonstrating the viability of our methodology and revealing promising avenues for future work.",
"title": ""
},
{
"docid": "neg:1840338_16",
"text": "Face veri cation is the task of deciding by analyzing face images, whether a person is who he/she claims to be. This is very challenging due to image variations in lighting, pose, facial expression, and age. The task boils down to computing the distance between two face vectors. As such, appropriate distance metrics are essential for face veri cation accuracy. In this paper we propose a new method, named the Cosine Similarity Metric Learning (CSML) for learning a distance metric for facial veri cation. The use of cosine similarity in our method leads to an e ective learning algorithm which can improve the generalization ability of any given metric. Our method is tested on the state-of-the-art dataset, the Labeled Faces in the Wild (LFW), and has achieved the highest accuracy in the literature. Face veri cation has been extensively researched for decades. The reason for its popularity is the non-intrusiveness and wide range of practical applications, such as access control, video surveillance, and telecommunication. The biggest challenge in face veri cation comes from the numerous variations of a face image, due to changes in lighting, pose, facial expression, and age. It is a very di cult problem, especially using images captured in totally uncontrolled environment, for instance, images from surveillance cameras, or from the Web. Over the years, many public face datasets have been created for researchers to advance state of the art and make their methods comparable. This practice has proved to be extremely useful. FERET [1] is the rst popular face dataset freely available to researchers. It was created in 1993 and since then research in face recognition has advanced considerably. Researchers have come very close to fully recognizing all the frontal images in FERET [2,3,4,5,6]. However, these methods are not robust to deal with non-frontal face images. Recently a new face dataset named the Labeled Faces in the Wild (LFW) [7] was created. LFW is a full protocol for evaluating face veri cation algorithms. Unlike FERET, LFW is designed for unconstrained face veri cation. Faces in LFW can vary in all possible ways due to pose, lighting, expression, age, scale, and misalignment (Figure 1). Methods for frontal images cannot cope with these variations and as such many researchers have turned to machine learning to 2 Hieu V. Nguyen and Li Bai Fig. 1. From FERET to LFW develop learning based face veri cation methods [8,9]. One of these approaches is to learn a transformation matrix from the data so that the Euclidean distance can perform better in the new subspace. Learning such a transformation matrix is equivalent to learning a Mahalanobis metric in the original space [10]. Xing et al. [11] used semide nite programming to learn a Mahalanobis distance metric for clustering. Their algorithm aims to minimize the sum of squared distances between similarly labeled inputs, while maintaining a lower bound on the sum of distances between di erently labeled inputs. Goldberger et al. [10] proposed Neighbourhood Component Analysis (NCA), a distance metric learning algorithm especially designed to improve kNN classi cation. The algorithm is to learn a Mahalanobis distance by minimizing the leave-one-out cross validation error of the kNN classi er on a training set. Because it uses softmax activation function to convert distance to probability, the gradient computation step is expensive. Weinberger et al. 
[12] proposed a method that learns a matrix designed to improve the performance of kNN classi cation. The objective function is composed of two terms. The rst term minimizes the distance between target neighbours. The second term is a hinge-loss that encourages target neighbours to be at least one distance unit closer than points from other classes. It requires information about the class of each sample. As a result, their method is not applicable for the restricted setting in LFW (see section 2.1). Recently, Davis et al. [13] have taken an information theoretic approach to learn a Mahalanobis metric under a wide range of possible constraints and prior knowledge on the Mahalanobis distance. Their method regularizes the learned matrix to make it as close as possible to a known prior matrix. The closeness is measured as a Kullback-Leibler divergence between two Gaussian distributions corresponding to the two matrices. In this paper, we propose a new method named Cosine Similarity Metric Learning (CSML). There are two main contributions. The rst contribution is Cosine Similarity Metric Learning for Face Veri cation 3 that we have shown cosine similarity to be an e ective alternative to Euclidean distance in metric learning problem. The second contribution is that CSML can improve the generalization ability of an existing metric signi cantly in most cases. Our method is di erent from all the above methods in terms of distance measures. All of the other methods use Euclidean distance to measure the dissimilarities between samples in the transformed space whilst our method uses cosine similarity which leads to a simple and e ective metric learning method. The rest of this paper is structured as follows. Section 2 presents CSML method in detail. Section 3 present how CSML can be applied to face veri cation. Experimental results are presented in section 4. Finally, conclusion is given in section 5. 1 Cosine Similarity Metric Learning The general idea is to learn a transformation matrix from training data so that cosine similarity performs well in the transformed subspace. The performance is measured by cross validation error (cve). 1.1 Cosine similarity Cosine similarity (CS) between two vectors x and y is de ned as: CS(x, y) = x y ‖x‖ ‖y‖ Cosine similarity has a special property that makes it suitable for metric learning: the resulting similarity measure is always within the range of −1 and +1. As shown in section 1.3, this property allows the objective function to be simple and e ective. 1.2 Metric learning formulation Let {xi, yi, li}i=1 denote a training set of s labeled samples with pairs of input vectors xi, yi ∈ R and binary class labels li ∈ {1, 0} which indicates whether xi and yi match or not. The goal is to learn a linear transformation A : R → R(d ≤ m), which we will use to compute cosine similarities in the transformed subspace as: CS(x, y,A) = (Ax) (Ay) ‖Ax‖ ‖Ay‖ = xAAy √ xTATAx √ yTATAy Speci cally, we want to learn the linear transformation that minimizes the cross validation error when similarities are measured in this way. We begin by de ning the objective function. 4 Hieu V. Nguyen and Li Bai 1.3 Objective function First, we de ne positive and negative sample index sets Pos and Neg as:",
"title": ""
},
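The CSML passage above defines cosine similarity under a learned linear map A. A minimal sketch of that computation is given below; the matrix A here is a random placeholder (in CSML it would be learned by minimizing a cross-validation objective, which is not reproduced), and the dimensions and NumPy usage are assumptions for illustration only.

```python
# Illustrative sketch of CS(x, y, A) = (Ax)^T (Ay) / (||Ax|| ||Ay||) from the passage above.
import numpy as np

def cosine_similarity(u, v, eps=1e-12):
    """Plain cosine similarity between two 1-D vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def csml_similarity(x, y, A):
    """Cosine similarity measured in the subspace defined by A (shape d x m)."""
    return cosine_similarity(A @ x, A @ y)

if __name__ == "__main__":
    m, d = 100, 20                       # original and reduced dimensionality (assumed)
    rng = np.random.default_rng(0)
    x, y = rng.normal(size=m), rng.normal(size=m)
    A = rng.normal(size=(d, m))          # placeholder for a learned transformation
    print("CS(x, y, A) =", csml_similarity(x, y, A))
```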
{
"docid": "neg:1840338_17",
"text": "This paper introduces a shape descriptor, the soft shape context, motivated by the shape context method. Unlike the original shape context method, where each image point was hard assigned into a single histogram bin, we instead allow each image point to contribute to multiple bins, hence more robust to distortions. The soft shape context can easily be integrated into the iterative closest point (ICP) method as an auxiliary feature vector, enriching the representation of an image point from spatial information only, to spatial and shape information. This yields a registration method more robust than the original ICP method. The method is general for 2D shapes. It does not calculate derivatives, hence being able to handle shapes with junctions and discontinuities. We present experimental results to demonstrate the robustness compared with the standard ICP method.",
"title": ""
},
{
"docid": "neg:1840338_18",
"text": "The use of wearable devices during running has become commonplace. Although there is ongoing research on interaction techniques for use while running, the effects of the resulting interactions on the natural movement patterns have received little attention so far. While previous studies on pedestrians reported increased task load and reduced walking speed while interacting, running movement further restricts interaction and requires minimizing interferences, e.g. to avoid injuries and maximize comfort. In this paper, we aim to shed light on how interacting with wearable devices affects running movement. We present results from a motion-tracking study (N=12) evaluating changes in movement and task load when users interact with a smartphone, a smartwatch, or a pair of smartglasses while running. In our study, smartwatches required less effort than smartglasses when using swipe input, resulted in less interference with the running movement and were preferred overall. From our results, we infer a number of guidelines regarding interaction design targeting runners.",
"title": ""
},
{
"docid": "neg:1840338_19",
"text": "Linker for activation of B cells (LAB, also called NTAL; a product of wbscr5 gene) is a newly identified transmembrane adaptor protein that is expressed in B cells, NK cells, and mast cells. Upon BCR activation, LAB is phosphorylated and interacts with Grb2. LAB is capable of rescuing thymocyte development in LAT-deficient mice. To study the in vivo function of LAB, LAB-deficient mice were generated. Although disruption of the Lab gene did not affect lymphocyte development, it caused mast cells to be hyperresponsive to stimulation via the FcepsilonRI, evidenced by enhanced Erk activation, calcium mobilization, degranulation, and cytokine production. These data suggested that LAB negatively regulates mast cell function. However, mast cells that lacked both linker for activation of T cells (LAT) and LAB proteins had a more severe block in FcepsilonRI-mediated signaling than LAT(-/-) mast cells, demonstrating that LAB also shares a redundant function with LAT to play a positive role in FcepsilonRI-mediated signaling.",
"title": ""
}
] |
1840339 | Integrating 3D structure into traffic scene understanding with RGB-D data | [
{
"docid": "pos:1840339_0",
"text": "View-based 3-D object retrieval and recognition has become popular in practice, e.g., in computer aided design. It is difficult to precisely estimate the distance between two objects represented by multiple views. Thus, current view-based 3-D object retrieval and recognition methods may not perform well. In this paper, we propose a hypergraph analysis approach to address this problem by avoiding the estimation of the distance between objects. In particular, we construct multiple hypergraphs for a set of 3-D objects based on their 2-D views. In these hypergraphs, each vertex is an object, and each edge is a cluster of views. Therefore, an edge connects multiple vertices. We define the weight of each edge based on the similarities between any two views within the cluster. Retrieval and recognition are performed based on the hypergraphs. Therefore, our method can explore the higher order relationship among objects and does not use the distance between objects. We conduct experiments on the National Taiwan University 3-D model dataset and the ETH 3-D object collection. Experimental results demonstrate the effectiveness of the proposed method by comparing with the state-of-the-art methods.",
"title": ""
}
] | [
{
"docid": "neg:1840339_0",
"text": "In recent years the sport of climbing has seen consistent increase in popularity. Climbing requires a complex skill set for successful and safe exercising. While elite climbers receive intensive expert coaching to refine this skill set, this progression approach is not viable for the amateur population. We have developed ClimbAX - a climbing performance analysis system that aims for replicating expert assessments and thus represents a first step towards an automatic coaching system for climbing enthusiasts. Through an accelerometer based wearable sensing platform, climber's movements are captured. An automatic analysis procedure detects climbing sessions and moves, which form the basis for subsequent performance assessment. The assessment parameters are derived from sports science literature and include: power, control, stability, speed. ClimbAX was evaluated in a large case study with 53 climbers under competition settings. We report a strong correlation between predicted scores and official competition results, which demonstrate the effectiveness of our automatic skill assessment system.",
"title": ""
},
{
"docid": "neg:1840339_1",
"text": "High efficiency power supply solutions for data centers are gaining more attention, in order to minimize the fast growing power demands of such loads, the 48V Voltage Regulator Module (VRM) for powering CPU is a promising solution replacing the legacy 12V VRM by which the bus distribution loss, cost and size can be dramatically minimized. In this paper, a two-stage 48V/12V/1.8V–250W VRM is proposed, the first stage is a high efficiency, high power density isolated — unregulated DC/DC converter (DCX) based on LLC resonant converter stepping the input voltage from 48V to 12V. The Matrix transformer concept was utilized for designing the high frequency transformer of the first stage, an enhanced termination loop for the synchronous rectifiers and a non-uniform winding structure is proposed resulting in significant increase in both power density and efficiency of the first stage converter. The second stage is a 4-phases buck converter stepping the voltage from 12V to 1.8V to the CPU. Since the CPU runs in the sleep mode most of the time a light load efficiency improvement method by changing the bus voltage from 12V to 6 V during light load operation is proposed showing more than 8% light load efficiency enhancement than fixed bus voltage. Experimental results demonstrate the high efficiency of the proposed solution reaching peak of 91% with a significant light load efficiency improvement.",
"title": ""
},
{
"docid": "neg:1840339_2",
"text": "The working hypothesis of the paper is that motor images are endowed with the same properties as those of the (corresponding) motor representations, and therefore have the same functional relationship to the imagined or represented movement and the same causal role in the generation of this movement. The fact that the timing of simulated movements follows the same constraints as that of actually executed movements is consistent with this hypothesis. Accordingly, many neural mechanisms are activated during motor imagery, as revealed by a sharp increase in tendinous reflexes in the limb imagined to move, and by vegetative changes which correlate with the level of mental effort. At the cortical level, a specific pattern of activation, that closely resembles that of action execution, is observed in areas devoted to motor control. This activation might be the substrate for the effects of mental training. A hierarchical model of the organization of action is proposed: this model implies a short-term memory storage of a 'copy' of the various representational steps. These memories are erased when an action corresponding to the represented goal takes place. By contrast, if the action is incompletely or not executed, the whole system remains activated, and the content of the representation is rehearsed. This mechanism would be the substrate for conscious access to this content during motor imagery and mental training.",
"title": ""
},
{
"docid": "neg:1840339_3",
"text": "The concept of centrality is often invoked in social network analysis, and diverse indices have been proposed to measure it. This paper develops a unified framework for the measurement of centrality. All measures of centrality assess a node’s involvement in the walk structure of a network. Measures vary along four key dimensions: type of nodal involvement assessed, type of walk considered, property of walk assessed, and choice of summary measure. If we cross-classify measures by type of nodal involvement (radial versus medial) and property of walk assessed (volume versus length), we obtain a four-fold polychotomization with one cell empty which mirrors Freeman’s 1979 categorization. At a more substantive level, measures of centrality summarize a node’s involvement in or contribution to the cohesiveness of the network. Radial measures in particular are reductions of pair-wise proximities/cohesion to attributes of nodes or actors. The usefulness and interpretability of radial measures depend on the fit of the cohesion matrix to the onedimensional model. In network terms, a network that is fit by a one-dimensional model has a core-periphery structure in which all nodes revolve more or less closely around a single core. This in turn implies that the network does not contain distinct cohesive subgroups. Thus, centrality is shown to be intimately connected with the cohesive subgroup structure of a network. © 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840339_4",
"text": "Thousands of operations are annually guided with computer assisted surgery (CAS) technologies. As the use of these devices is rapidly increasing, the reliability of the devices becomes ever more critical. The problem of accuracy assessment of the devices has thus become relevant. During the past five years, over 200 hazardous situations have been documented in the MAUDE database during operations using these devices in the field of neurosurgery alone. Had the accuracy of these devices been periodically assessed pre-operatively, many of them might have been prevented. The technical accuracy of a commercial navigator enabling the use of both optical (OTS) and electromagnetic (EMTS) tracking systems was assessed in the hospital setting using accuracy assessment tools and methods developed by the authors of this paper. The technical accuracy was obtained by comparing the positions of the navigated tool tip with the phantom accuracy assessment points. Each assessment contained a total of 51 points and a region of surgical interest (ROSI) volume of 120x120x100 mm roughly mimicking the size of the human head. The error analysis provided a comprehensive understanding of the trend of accuracy of the surgical navigator modalities. This study showed that the technical accuracies of OTS and EMTS over the pre-determined ROSI were nearly equal. However, the placement of the particular modality hardware needs to be optimized for the surgical procedure. New applications of EMTS, which does not require rigid immobilization of the surgical area, are suggested.",
"title": ""
},
{
"docid": "neg:1840339_5",
"text": "This paper presents GelClust, a new software that is designed for processing gel electrophoresis images and generating the corresponding phylogenetic trees. Unlike the most of commercial and non-commercial related softwares, we found that GelClust is very user-friendly and guides the user from image toward dendrogram through seven simple steps. Furthermore, the software, which is implemented in C# programming language under Windows operating system, is more accurate than similar software regarding image processing and is the only software able to detect and correct gel 'smile' effects completely automatically. These claims are supported with experiments.",
"title": ""
},
{
"docid": "neg:1840339_6",
"text": "Customer Relationship Management (CRM) is a strategy that supports an organization’s decision-making process to retain long-term and profitable relationships with its customers. Effective CRM analyses require a detailed data warehouse model that can support various CRM analyses and deep understanding on CRM-related business questions. In this paper, we present a taxonomy of CRM analysis categories. Our CRM taxonomy includes CRM strategies, CRM category analyses, CRM business questions, their potential uses, and key performance indicators (KPIs) for those analysis types. Our CRM taxonomy can be used in selecting and evaluating a data schema for CRM analyses, CRM vendors, CRM strategies, and KPIs.",
"title": ""
},
{
"docid": "neg:1840339_7",
"text": "This paper presents an approach for performance analysis of modern enterprise-class server applications. In our experience, performance bottlenecks in these applications differ qualitatively from bottlenecks in smaller, stand-alone systems. Small applications and benchmarks often suffer from CPU-intensive hot spots. In contrast, enterprise-class multi-tier applications often suffer from problems that manifest not as hot spots, but as idle time, indicating a lack of forward motion. Many factors can contribute to undesirable idle time, including locking problems, excessive system-level activities like garbage collection, various resource constraints, and problems driving load.\n We present the design and methodology for WAIT, a tool to diagnosis the root cause of idle time in server applications. Given lightweight samples of Java activity on a single tier, the tool can often pinpoint the primary bottleneck on a multi-tier system. The methodology centers on an informative abstraction of the states of idleness observed in a running program. This abstraction allows the tool to distinguish, for example, between hold-ups on a database machine, insufficient load, lock contention in application code, and a conventional bottleneck due to a hot method. To compute the abstraction, we present a simple expert system based on an extensible set of declarative rules.\n WAIT can be deployed on the fly, without modifying or even restarting the application. Many groups in IBM have applied the tool to diagnosis performance problems in commercial systems, and we present a number of examples as case studies.",
"title": ""
},
{
"docid": "neg:1840339_8",
"text": "As networks grow both in importance and size, there is an increasing need for effective security monitors such as Network Intrusion Detection System to prevent such illicit accesses. Intrusion Detection Systems technology is an effective approach in dealing with the problems of network security. In this paper, we present an intrusion detection model based on hybrid fuzzy logic and neural network. The key idea is to take advantage of different classification abilities of fuzzy logic and neural network for intrusion detection system. The new model has ability to recognize an attack, to differentiate one attack from another i.e. classifying attack, and the most important, to detect new attacks with high detection rate and low false negative. Training and testing data were obtained from the Defense Advanced Research Projects Agency (DARPA) intrusion detection evaluation data set.",
"title": ""
},
{
"docid": "neg:1840339_9",
"text": "This sketch presents an improved formalization of automatic caricature that extends a standard approach to account for the population variance of facial features. Caricature is generally considered a rendering that emphasizes the distinctive features of a particular face. A formalization of this idea, which we term “Exaggerating the Difference from the Mean” (EDFM), is widely accepted among caricaturists [Redman 1984] and was first implemented in a groundbreaking computer program by [Brennan 1985]. Brennan’s “Caricature generator” program produced caricatures by manually defining a polyline drawing with topology corresponding to a frontal, mean, face-shape drawing, and then displacing the vertices by a constant factor away from the mean shape. Many psychological studies have applied the “Caricature Generator” or EDFM idea to investigate caricaturerelated issues in face perception [Rhodes 1997].",
"title": ""
},
{
"docid": "neg:1840339_10",
"text": "Cuckoo search (CS) is an efficient swarm-intelligence-based algorithm and significant developments have been made since its introduction in 2009. CS has many advantages due to its simplicity and efficiency in solving highly non-linear optimisation problems with real-world engineering applications. This paper provides a timely review of all the state-of-the-art developments in the last five years, including the discussions of theoretical background and research directions for future development of this powerful algorithm.",
"title": ""
},
{
"docid": "neg:1840339_11",
"text": "We study the problem of finding efficiently computable non-degenerate multilinear maps from G1 to G2, where G1 and G2 are groups of the same prime order, and where computing discrete logarithms in G1 is hard. We present several applications to cryptography, explore directions for building such maps, and give some reasons to believe that finding examples with n > 2",
"title": ""
},
{
"docid": "neg:1840339_12",
"text": "Large scale implementation of active RFID tag technology has been restricted by the need for battery replacement. Prolonging battery lifespan may potentially promote active RFID tags which offer obvious advantages over passive RFID systems. This paper explores some opportunities to simulate and develop a prototype RF energy harvester for 2.4 GHz band specifically designed for low power active RFID tag application. This system employs a rectenna architecture which is a receiving antenna attached to a rectifying circuit that efficiently converts RF energy to DC current. Initial ADS simulation results show that 2 V output voltage can be achieved using a 7 stage Cockroft-Walton rectifying circuitry with -4.881 dBm (0.325 mW) output power under -4 dBm (0.398 mW) input RF signal. These results lend support to the idea that RF energy harvesting is indeed promising.",
"title": ""
},
{
"docid": "neg:1840339_13",
"text": "In the present study we manipulated the importance of performing two event-based prospective memory tasks. In Experiment 1, the event-based task was assumed to rely on relatively automatic processes, whereas in Experiment 2 the event-based task was assumed to rely on a more demanding monitoring process. In contrast to the first experiment, the second experiment showed that importance had a positive effect on prospective memory performance. In addition, the occurrence of an importance effect on prospective memory performance seemed to be mainly due to the features of the prospective memory task itself, and not to the characteristics of the ongoing tasks that only influenced the size of the importance effect. The results suggest that importance instructions may improve prospective memory if the prospective task requires the strategic allocation of attentional monitoring resources.",
"title": ""
},
{
"docid": "neg:1840339_14",
"text": "INTRODUCTION\nThere are many challenges to the drug discovery process, including the complexity of the target, its interactions, and how these factors play a role in causing the disease. Traditionally, biophysics has been used for hit validation and chemical lead optimization. With its increased throughput and sensitivity, biophysics is now being applied earlier in this process to empower target characterization and hit finding. Areas covered: In this article, the authors provide an overview of how biophysics can be utilized to assess the quality of the reagents used in screening assays, to validate potential tool compounds, to test the integrity of screening assays, and to create follow-up strategies for compound characterization. They also briefly discuss the utilization of different biophysical methods in hit validation to help avoid the resource consuming pitfalls caused by the lack of hit overlap between biophysical methods. Expert opinion: The use of biophysics early on in the drug discovery process has proven crucial to identifying and characterizing targets of complex nature. It also has enabled the identification and classification of small molecules which interact in an allosteric or covalent manner with the target. By applying biophysics in this manner and at the early stages of this process, the chances of finding chemical leads with novel mechanisms of action are increased. In the future, focused screens with biophysics as a primary readout will become increasingly common.",
"title": ""
},
{
"docid": "neg:1840339_15",
"text": "Through co-design of Augmented Reality (AR) based teaching material, this research aims to enhance collaborative learning experience in primary school education. It will introduce an interactive AR Book based on primary school textbook using tablets as the real time interface. The development of this AR Book employs co-design methods to involve children, teachers, educators and HCI experts from the early stages of the design process. Research insights from the co-design phase will be implemented in the AR Book design. The final outcome of the AR Book will be evaluated in the classroom to explore its effect on the collaborative experience of primary school students. The research aims to answer the question - Can Augmented Books be designed for primary school students in order to support collaboration? This main research question is divided into two sub-questions as follows - How can co-design methods be applied in designing Augmented Book with and for primary school children? And what is the effect of the proposed Augmented Book on primary school students' collaboration? This research will not only present a practical application of co-designing AR Book for and with primary school children, it will also clarify the benefit of AR for education in terms of collaborative experience.",
"title": ""
},
{
"docid": "neg:1840339_16",
"text": "Event-related desynchronization/synchronization patterns during right/left motor imagery (MI) are effective features for an electroencephalogram-based brain-computer interface (BCI). As MI tasks are subject-specific, selection of subject-specific discriminative frequency components play a vital role in distinguishing these patterns. This paper proposes a new discriminative filter bank (FB) common spatial pattern algorithm to extract subject-specific FB for MI classification. The proposed method enhances the classification accuracy in BCI competition III dataset IVa and competition IV dataset IIb. Compared to the performance offered by the existing FB-based method, the proposed algorithm offers error rate reductions of 17.42% and 8.9% for BCI competition datasets III and IV, respectively.",
"title": ""
},
{
"docid": "neg:1840339_17",
"text": "Modeling cloth with fiber-level geometry can produce highly realistic details. However, rendering fiber-level cloth models not only has a high memory cost but it also has a high computation cost even for offline rendering applications. In this paper we present a real-time fiber-level cloth rendering method for current GPUs. Our method procedurally generates fiber-level geometric details on-the-fly using yarn-level control points for minimizing the data transfer to the GPU. We also reduce the rasterization operations by collectively representing the fibers near the center of each ply that form the yarn structure. Moreover, we employ a level-of-detail strategy to minimize or completely eliminate the generation of fiber-level geometry that would have little or no impact on the final rendered image. Furthermore, we introduce a simple self-shadow computation method that allows lighting with self-shadows using relatively low-resolution shadow maps. We also provide a simple distance-based ambient occlusion approximation as well as an ambient illumination precomputation approach, both of which account for fiber-level self-occlusion of yarn. Finally, we discuss how to use a physical-based shading model with our fiber-level cloth rendering method and how to handle cloth animations with temporal coherency. We demonstrate the effectiveness of our approach by comparing our simplified fiber geometry to procedurally generated references and display knitwear containing more than a hundred million individual fiber curves at real-time frame rates with shadows and ambient occlusion.",
"title": ""
},
{
"docid": "neg:1840339_18",
"text": "Social Commerce as a result of the advancement of Social Networking Sites and Web 2.0 is increasing as a new model of online shopping. With techniques to improve the website using AJAX, Adobe Flash, XML, and RSS, Social Media era has changed the internet user behavior to be more communicative and active in internet, they love to share information and recommendation among communities. Social commerce also changes the way people shopping through online. Social commerce will be the new way of online shopping nowadays. But the new challenge is business has to provide the interactive website yet interesting website for internet users, the website should give experience to satisfy their needs. This purpose of research is to analyze the website quality (System Quality, Information Quality, and System Quality) as well as interaction feature (communication feature) impact on social commerce website and customers purchase intention. Data from 134 customers of social commerce website were used to test the model. Multiple linear regression is used to calculate the statistic result while confirmatory factor analysis was also conducted to test the validity from each variable. The result shows that website quality and communication feature are important aspect for customer purchase intention while purchasing in social commerce website.",
"title": ""
},
{
"docid": "neg:1840339_19",
"text": "Computer-animated characters are common in popular culture and have begun to be used as experimental tools in social cognitive neurosciences. Here we investigated how appearance of these characters' influences perception of their actions. Subjects were presented with different characters animated either with motion data captured from human actors or by interpolating between poses (keyframes) designed by an animator, and were asked to categorize the motion as biological or artificial. The response bias towards 'biological', derived from the Signal Detection Theory, decreases with characters' anthropomorphism, while sensitivity is only affected by the simplest rendering style, point-light displays. fMRI showed that the response bias correlates positively with activity in the mentalizing network including left temporoparietal junction and anterior cingulate cortex, and negatively with regions sustaining motor resonance. The absence of significant effect of the characters on the brain activity suggests individual differences in the neural responses to unfamiliar artificial agents. While computer-animated characters are invaluable tools to investigate the neural bases of social cognition, further research is required to better understand how factors such as anthropomorphism affect their perception, in order to optimize their appearance for entertainment, research or therapeutic purposes.",
"title": ""
}
] |
1840340 | Programming models for sensor networks: A survey | [
{
"docid": "pos:1840340_0",
"text": "Composed of tens of thousands of tiny devices with very limited resources (\"motes\"), sensor networks are subject to novel systems problems and constraints. The large number of motes in a sensor network means that there will often be some failing nodes; networks must be easy to repopulate. Often there is no feasible method to recharge motes, so energy is a precious resource. Once deployed, a network must be reprogrammable although physically unreachable, and this reprogramming can be a significant energy cost.We present Maté, a tiny communication-centric virtual machine designed for sensor networks. Maté's high-level interface allows complex programs to be very short (under 100 bytes), reducing the energy cost of transmitting new programs. Code is broken up into small capsules of 24 instructions, which can self-replicate through the network. Packet sending and reception capsules enable the deployment of ad-hoc routing and data aggregation algorithms. Maté's concise, high-level program representation simplifies programming and allows large networks to be frequently reprogrammed in an energy-efficient manner; in addition, its safe execution environment suggests a use of virtual machines to provide the user/kernel boundary on motes that have no hardware protection mechanisms.",
"title": ""
}
] | [
{
"docid": "neg:1840340_0",
"text": "The recent, exponential rise in adoption of the most disparate Internet of Things (IoT) devices and technologies has reached also Agriculture and Food (Agri-Food) supply chains, drumming up substantial research and innovation interest towards developing reliable, auditable and transparent traceability systems. Current IoT-based traceability and provenance systems for Agri-Food supply chains are built on top of centralized infrastructures and this leaves room for unsolved issues and major concerns, including data integrity, tampering and single points of failure. Blockchains, the distributed ledger technology underpinning cryptocurrencies such as Bitcoin, represent a new and innovative technological approach to realizing decentralized trustless systems. Indeed, the inherent properties of this digital technology provide fault-tolerance, immutability, transparency and full traceability of the stored transaction records, as well as coherent digital representations of physical assets and autonomous transaction executions. This paper presents AgriBlockIoT, a fully decentralized, blockchain-based traceability solution for Agri-Food supply chain management, able to seamless integrate IoT devices producing and consuming digital data along the chain. To effectively assess AgriBlockIoT, first, we defined a classical use-case within the given vertical domain, namely from-farm-to-fork. Then, we developed and deployed such use-case, achieving traceability using two different blockchain implementations, namely Ethereum and Hyperledger Sawtooth. Finally, we evaluated and compared the performance of both the deployments, in terms of latency, CPU, and network usage, also highlighting their main pros and cons.",
"title": ""
},
{
"docid": "neg:1840340_1",
"text": "Kernel methods enable the direct usage of structured representations of textual data during language learning and inference tasks. Expressive kernels, such as Tree Kernels, achieve excellent performance in NLP. On the other side, deep neural networks have been demonstrated effective in automatically learning feature representations during training. However, their input is tensor data, i.e., they cannot manage rich structured information. In this paper, we show that expressive kernels and deep neural networks can be combined in a common framework in order to (i) explicitly model structured information and (ii) learn non-linear decision functions. We show that the input layer of a deep architecture can be pre-trained through the application of the Nyström low-rank approximation of kernel spaces. The resulting “kernelized” neural network achieves state-of-the-art accuracy in three different tasks.",
"title": ""
},
{
"docid": "neg:1840340_2",
"text": "We address the problem of recovering a common set of covariates that are relevant simultaneously to several classification problems. By penalizing the sum of l2-norms of the blocks of coefficients associated with each covariate across different classification problems, similar sparsity patterns in all models are encouraged. To take computational advantage of the sparsity of solutions at high regularization levels, we propose a blockwise path-following scheme that approximately traces the regularization path. As the regularization coefficient decreases, the algorithm maintains and updates concurrently a growing set of covariates that are simultaneously active for all problems. We also show how to use random projections to extend this approach to the problem of joint subspace selection, where multiple predictors are found in a common low-dimensional subspace. We present theoretical results showing that this random projection approach converges to the solution yielded by trace-norm regularization. Finally, we present a variety of experimental results exploring joint covariate selection and joint subspace selection, comparing the path-following approach to competing algorithms in terms of prediction accuracy and running time.",
"title": ""
},
{
"docid": "neg:1840340_3",
"text": "Cloud computing is a model for delivering information technology services, wherein resources are retrieved from the Internet through web-based tools and applications instead of a direct connection to a server. The capability to provision and release cloud computing resources with minimal management effort or service provider interaction led to the rapid increase of the use of cloud computing. Therefore, balancing cloud computing resources to provide better performance and services to end users is important. Load balancing in cloud computing means balancing three important stages through which a request is processed. The three stages are data center selection, virtual machine scheduling, and task scheduling at a selected data center. User task scheduling plays a significant role in improving the performance of cloud services. This paper presents a review of various energy-efficient task scheduling methods in a cloud environment. A brief analysis of various scheduling parameters considered in these methods is also presented. The results show that the best power-saving percentage level can be achieved by using both DVFS and DNS.",
"title": ""
},
{
"docid": "neg:1840340_4",
"text": "OBJECTIVE\nTo establish a centile chart of cervical length between 18 and 32 weeks of gestation in a low-risk population of women.\n\n\nMETHODS\nA prospective longitudinal cohort study of women with a low risk, singleton pregnancy using public healthcare facilities in Cape Town, South Africa. Transvaginal measurement of cervical length was performed between 16 and 32 weeks of gestation and used to construct centile charts. The distribution of cervical length was determined for gestational ages and was used to establish estimates of longitudinal percentiles. Centile charts were constructed for nulliparous and multiparous women together and separately.\n\n\nRESULTS\nCentile estimation was based on data from 344 women. Percentiles showed progressive cervical shortening with increasing gestational age. Averaged over the entire follow-up period, mean cervical length was 1.5 mm shorter in nulliparous women compared with multiparous women (95% CI, 0.4-2.6).\n\n\nCONCLUSIONS\nEstablishment of longitudinal reference values of cervical length in a low-risk population will contribute toward a better understanding of cervical length in women at risk for preterm labor.",
"title": ""
},
{
"docid": "neg:1840340_5",
"text": "A new inline coupling topology for narrowband helical resonator filters is proposed that allows to introduce selectively located transmission zeros (TZs) in the stopband. We show that a pair of helical resonators arranged in an interdigital configuration can realize a large range of in-band coupling coefficient values and also selectively position a TZ in the stopband. The proposed technique dispenses the need for auxiliary elements, so that the size, complexity, power handling and insertion loss of the filter are not compromised. A second order prototype filter with dimensions of the order of 0.05λ, power handling capability up to 90 W, measured insertion loss of 0.18 dB and improved selectivity is presented.",
"title": ""
},
{
"docid": "neg:1840340_6",
"text": "We consider a class of a nested optimization problems involving inner and outer objectives. We observe that by taking into explicit account the optimization dynamics for the inner objective it is possible to derive a general framework that unifies gradient-based hyperparameter optimization and meta-learning (or learning-to-learn). Depending on the specific setting, the variables of the outer objective take either the meaning of hyperparameters in a supervised learning problem or parameters of a meta-learner. We show that some recently proposed methods in the latter setting can be instantiated in our framework and tackled with the same gradient-based algorithms. Finally, we discuss possible design patterns for learning-to-learn and present encouraging preliminary experiments for few-shot learning.",
"title": ""
},
{
"docid": "neg:1840340_7",
"text": "This article reviews the empirical literature on personality, leadership, and organizational effectiveness to make 3 major points. First, leadership is a real and vastly consequential phenomenon, perhaps the single most important issue in the human sciences. Second, leadership is about the performance of teams, groups, and organizations. Good leadership promotes effective team and group performance, which in turn enhances the well-being of the incumbents; bad leadership degrades the quality of life for everyone associated with it. Third, personality predicts leadership—who we are is how we lead—and this information can be used to select future leaders or improve the performance of current incumbents.",
"title": ""
},
{
"docid": "neg:1840340_8",
"text": "This review analyzes trends and commonalities among prominent theories of media effects. On the basis of exemplary meta-analyses of media effects and bibliometric studies of well-cited theories, we identify and discuss five features of media effects theories as well as their empirical support. Each of these features specifies the conditions under which media may produce effects on certain types of individuals. Our review ends with a discussion of media effects in newer media environments. This includes theories of computer-mediated communication, the development of which appears to share a similar pattern of reformulation from unidirectional, receiver-oriented views, to theories that recognize the transactional nature of communication. We conclude by outlining challenges and promising avenues for future research.",
"title": ""
},
{
"docid": "neg:1840340_9",
"text": "Three-dimensional measurement of joint motion is a promising tool for clinical evaluation and therapeutic treatment comparisons. Although many devices exist for joints kinematics assessment, there is a need for a system that could be used in routine practice. Such a system should be accurate, ambulatory, and easy to use. The combination of gyroscopes and accelerometers (i.e., inertial measurement unit) has proven to be suitable for unrestrained measurement of orientation during a short period of time (i.e., few minutes). However, due to their inability to detect horizontal reference, inertial-based systems generally fail to measure differential orientation, a prerequisite for computing the three-dimentional knee joint angle recommended by the Internal Society of Biomechanics (ISB). A simple method based on a leg movement is proposed here to align two inertial measurement units fixed on the thigh and shank segments. Based on the combination of the former alignment and a fusion algorithm, the three-dimensional knee joint angle is measured and compared with a magnetic motion capture system during walking. The proposed system is suitable to measure the absolute knee flexion/extension and abduction/adduction angles with mean (SD) offset errors of -1 degree (1 degree ) and 0 degrees (0.6 degrees ) and mean (SD) root mean square (RMS) errors of 1.5 degrees (0.4 degrees ) and 1.7 degrees (0.5 degrees ). The system is also suitable for the relative measurement of knee internal/external rotation (mean (SD) offset error of 3.4 degrees (2.7 degrees )) with a mean (SD) RMS error of 1.6 degrees (0.5 degrees ). The method described in this paper can be easily adapted in order to measure other joint angular displacements such as elbow or ankle.",
"title": ""
},
{
"docid": "neg:1840340_10",
"text": "Designing a metric manually for unsupervised sequence generation tasks, such as text generation, is essentially difficult. In a such situation, learning a metric of a sequence from data is one possible solution. The previous study, SeqGAN, proposed the framework for unsupervised sequence generation, in which a metric is learned from data, and a generator is optimized with regard to the learned metric with policy gradient, inspired by generative adversarial nets (GANs) and reinforcement learning. In this paper, we make two proposals to learn better metric than SeqGAN’s: partial reward function and expert-based reward function training. The partial reward function is a reward function for a partial sequence of a certain length. SeqGAN employs a reward function for completed sequence only. By combining long-scale and short-scale partial reward functions, we expect a learned metric to be able to evaluate a partial correctness as well as a coherence of a sequence, as a whole. In expert-based reward function training, a reward function is trained to discriminate between an expert (or true) sequence and a fake sequence that is produced by editing an expert sequence. Expert-based reward function training is not a kind of GAN frameworks. This makes the optimization of the generator easier. We examine the effect of the partial reward function and expert-based reward function training on synthetic data and real text data, and show improvements over SeqGAN and the model trained with MLE. Specifically, whereas SeqGAN gains 0.42 improvement of NLL over MLE on synthetic data, our best model gains 3.02 improvement, and whereas SeqGAN gains 0.029 improvement of BLEU over MLE, our best model gains 0.250 improvement.",
"title": ""
},
{
"docid": "neg:1840340_11",
"text": "To lay the groundwork for devising, improving and implementing strategies to prevent or delay the onset of disability in the elderly, we conducted a systematic literature review of longitudinal studies published between 1985 and 1997 that reported statistical associations between individual base-line risk factors and subsequent functional status in community-living older persons. Functional status decline was defined as disability or physical function limitation. We used MEDLINE, PSYCINFO, SOCA, EMBASE, bibliographies and expert consultation to select the articles, 78 of which met the selection criteria. Risk factors were categorized into 14 domains and coded by two independent abstractors. Based on the methodological quality of the statistical analyses between risk factors and functional outcomes (e.g. control for base-line functional status, control for confounding, attrition rate), the strength of evidence was derived for each risk factor. The association of functional decline with medical findings was also analyzed. The highest strength of evidence for an increased risk in functional status decline was found for (alphabetical order) cognitive impairment, depression, disease burden (comorbidity), increased and decreased body mass index, lower extremity functional limitation, low frequency of social contacts, low level of physical activity, no alcohol use compared to moderate use, poor self-perceived health, smoking and vision impairment. The review revealed that some risk factors (e.g. nutrition, physical environment) have been neglected in past research. This review will help investigators set priorities for future research of the Disablement Process, plan health and social services for elderly persons and develop more cost-effective programs for preventing disability among them.",
"title": ""
},
{
"docid": "neg:1840340_12",
"text": "Poker games provide a useful testbed for modern Artificial Intelligence techniques. Unlike many classical game domains such as chess and checkers, poker includes elements of imperfect information, stochastic events, and one or more adversarial agents to interact with. Furthermore, in poker it is possible to win or lose by varying degrees. Therefore, it can be advantageous to adapt ones’ strategy to exploit a weak opponent. A poker agent must address these challenges, acting in uncertain environments and exploiting other agents, in order to be highly successful. Arguably, poker games more closely resemble many real world problems than games with perfect information. In this brief paper, we outline Polaris, a Texas Hold’em poker program. Polaris recently defeated top human professionals at the Man vs. Machine Poker Championship and it is currently the reigning AAAI Computer Poker Competition winner in the limit equilibrium and no-limit events.",
"title": ""
},
{
"docid": "neg:1840340_13",
"text": "Jazz guitar solos are improvised melody lines played on one instrument on top of a chordal accompaniment (comping). As the improvisation happens spontaneously, a reference score is non-existent, only a lead sheet. There are situations, however, when one would like to have the original melody lines in the form of notated music, see the Real Book. The motivation is either for the purpose of practice and imitation or for musical analysis. In this work, an automatic transcriber for jazz guitar solos is developed. It resorts to a very intuitive representation of tonal music signals: the pitchgram. No instrument-specific modeling is involved, so the transcriber should be applicable to other pitched instruments as well. Neither is there the need to learn any note profiles prior to or during the transcription. Essentially, the proposed transcriber is a decision tree, thus a classifier, with a depth of 3. It has a (very) low computational complexity and can be run on-line. The decision rules can be refined or extended with no or little musical education. The transcriber’s performance is evaluated on a set of ten jazz solo excerpts and compared with a state-of-the-art transcription system for the guitar plus PYIN. We achieve an improvement of 34 % w.r.t. the reference system and 19 % w.r.t. PYIN in terms of the F-measure. Another measure of accuracy, the error score, attests that the number of erroneous pitch detections is reduced by more than 50 % w.r.t. the reference system and by 45 % w.r.t. PYIN.",
"title": ""
},
{
"docid": "neg:1840340_14",
"text": "This brief presents a high-efficiency current-regulated charge pump for a white light-emitting diode driver. The charge pump incorporates no series current regulator, unlike conventional voltage charge pump circuits. Output current regulation is accomplished by the proposed pumping current control. The experimental system, with two 1-muF flying and load capacitors, delivers a regulated 20-mA current from an input supply voltage of 2.8-4.2 V. The measured variation is less than 0.6% at a pumping frequency of 200 kHz. The active area of the designed chip is 0.43 mm2 in a 0.5-mum CMOS process.",
"title": ""
},
{
"docid": "neg:1840340_15",
"text": "Veterans of all war eras have a high rate of chronic disease, mental health disorders, and chronic multi-symptom illnesses (CMI).(1-3) Many veterans report symptoms that affect multiple biological systems as opposed to isolated disease states. Standard medical treatments often target isolated disease states such as headaches, insomnia, or back pain and at times may miss the more complex, multisystem dysfunction that has been documented in the veteran population. Research has shown that veterans have complex symptomatology involving physical, cognitive, psychological, and behavioral disturbances, such as difficult to diagnose pain patterns, irritable bowel syndrome, chronic fatigue, anxiety, depression, sleep disturbance, or neurocognitive dysfunction.(2-4) Meditation and acupuncture are each broad-spectrum treatments designed to target multiple biological systems simultaneously, and thus, may be well suited for these complex chronic illnesses. The emerging literature indicates that complementary and integrative medicine (CIM) approaches augment standard medical treatments to enhance positive outcomes for those with chronic disease, mental health disorders, and CMI.(5-12.)",
"title": ""
},
{
"docid": "neg:1840340_16",
"text": "We consider a generalized version of the Steiner problem in graphs, motivated by the wire routing phase in physical VLSI design: given a connected, undirected distance graph with required classes of vertices and Steiner vertices, find a shortest connected subgraph containing at least one vertex of each required class. We show that this problem is NP-hard, even if there are no Steiner vertices and the graph is a tree. Moreover, the same complexity result holds if the input class Steiner graph additionally is embedded in a unit grid, if each vertex has degree at most three, and each class consists of no more than three vertices. For similar restricted versions, we prove MAX SNP-hardness and we show that there exists no polynomial-time approximation algorithm with a constant bound on the relative error, unless P = NP. We propose two efficient heuristics computing different approximate solutions in time 0(/E] + /VI log IV]) and in time O(c(lEl + IV1 log (VI)), respectively, where E is the set of edges in the given graph, V is the set of vertices, and c is the number of classes. We present some promising implementation results.",
"title": ""
},
{
"docid": "neg:1840340_17",
"text": "We implemented live-textured geometry model creation with immediate coverage feedback visualizations in AR on the Microsoft HoloLens. A user walking and looking around a physical space can create a textured model of the space, ready for remote exploration and AR collaboration. Out of the box, a HoloLens builds a triangle mesh of the environment while scanning and being tracked in a new environment. The mesh contains vertices, triangles, and normals, but not color. We take the video stream from the color camera and use it to color a UV texture to be mapped to the mesh. Due to the limited graphics memory of the HoloLens, we use a fixed-size texture. Since the mesh generation dynamically changes in real time, we use an adaptive mapping scheme that evenly distributes every triangle of the dynamic mesh onto the fixed-size texture and adapts to new geometry without compromising existing color data. Occlusion is also considered. The user can walk around their environment and continuously fill in the texture while growing the mesh in real-time. We describe our texture generation algorithm and illustrate benefits and limitations of our system with example modeling sessions. Having first-person immediate AR feedback on the quality of modeled physical infrastructure, both in terms of mesh resolution and texture quality, helps the creation of high-quality colored meshes with this standalone wireless device and a fixed memory footprint in real-time.",
"title": ""
},
{
"docid": "neg:1840340_18",
"text": "Child sex tourism is an obscure industry where the tourist‟s primary purpose is to engage in a sexual experience with a child. Under international legislation, tourism with the intent of having sexual relations with a minor is in violation of the UN Convention of the Rights of a Child. The intent and act is a crime and in violation of human rights. This paper examines child sex tourism in the Philippines, a major destination country for the purposes of child prostitution. The purpose is to bring attention to the atrocities that occur under the guise of tourism. It offers a definition of the crisis, a description of the victims and perpetrators, and a discussion of the social and cultural factors that perpetuate the problem. Research articles and reports from non-government organizations, advocacy groups, governments and educators were examined. Although definitional challenges did emerge, it was found that several of the articles and reports varied little in their definitions of child sex tourism and in the descriptions of the victims and perpetrators. A number of differences emerged that identified the social and cultural factors responsible for the creation and perpetuation of the problem.",
"title": ""
}
] |
1840341 | An information theoretical approach to prefrontal executive function | [
{
"docid": "pos:1840341_0",
"text": "The prefrontal cortex (PFC) subserves cognitive control: the ability to coordinate thoughts or actions in relation with internal goals. Its functional architecture, however, remains poorly understood. Using brain imaging in humans, we showed that the lateral PFC is organized as a cascade of executive processes from premotor to anterior PFC regions that control behavior according to stimuli, the present perceptual context, and the temporal episode in which stimuli occur, respectively. The results support an unified modular model of cognitive control that describes the overall functional organization of the human lateral PFC and has basic methodological and theoretical implications.",
"title": ""
}
] | [
{
"docid": "neg:1840341_0",
"text": "This paper concerns the behavior of spatially extended dynamical systems —that is, systems with both temporal and spatial degrees of freedom. Such systems are common in physics, biology, and even social sciences such as economics. Despite their abundance, there is little understanding of the spatiotemporal evolution of these complex systems. ' Seemingly disconnected from this problem are two widely occurring phenomena whose very generality require some unifying underlying explanation. The first is a temporal effect known as 1/f noise or flicker noise; the second concerns the evolution of a spatial structure with scale-invariant, self-similar (fractal) properties. Here we report the discovery of a general organizing principle governing a class of dissipative coupled systems. Remarkably, the systems evolve naturally toward a critical state, with no intrinsic time or length scale. The emergence of the self-organized critical state provides a connection between nonlinear dynamics, the appearance of spatial self-similarity, and 1/f noise in a natural and robust way. A short account of some of these results has been published previously. The usual strategy in physics is to reduce a given problem to one or a few important degrees of freedom. The effect of coupling between the individual degrees of freedom is usually dealt with in a perturbative manner —or in a \"mean-field manner\" where the surroundings act on a given degree of freedom as an external field —thus again reducing the problem to a one-body one. In dynamics theory one sometimes finds that complicated systems reduce to a few collective degrees of freedom. This \"dimensional reduction'* has been termed \"selforganization, \" or the so-called \"slaving principle, \" and much insight into the behavior of dynamical systems has been achieved by studying the behavior of lowdimensional at tractors. On the other hand, it is well known that some dynamical systems act in a more concerted way, where the individual degrees of freedom keep each other in a more or less stab1e balance, which cannot be described as a \"perturbation\" of some decoupled state, nor in terms of a few collective degrees of freedom. For instance, ecological systems are organized such that the different species \"support\" each other in a way which cannot be understood by studying the individual constituents in isolation. The same interdependence of species also makes the ecosystem very susceptible to small changes or \"noise.\" However, the system cannot be too sensitive since then it could not have evolved into its present state in the first place. Owing to this balance we may say that such a system is \"critical. \" We shall see that this qualitative concept of criticality can be put on a firm quantitative basis. Such critical systems are abundant in nature. We shaB see that the dynamics of a critical state has a specific ternporal fingerprint, namely \"flicker noise, \" in which the power spectrum S(f) scales as 1/f at low frequencies. Flicker noise is characterized by correlations extended over a wide range of time scales, a clear indication of some sort of cooperative effect. Flicker noise has been observed, for example, in the light from quasars, the intensity of sunspots, the current through resistors, the sand flow in an hour glass, the flow of rivers such as the Nile, and even stock exchange price indices. ' All of these may be considered to be extended dynamical systems. Despite the ubiquity of flicker noise, its origin is not well understood. 
Indeed, one may say that because of its ubiquity, no proposed mechanism to date can lay claim as the single general underlying root of 1/f noise. We shall argue that flicker noise is in fact not noise but reflects the intrinsic dynamics of self-organized critical systems. Another signature of criticality is spatial self-similarity. It has been pointed out that nature is full of self-similar \"fractal\" structures, though the physical reason for this is not understood. Most notably, the whole universe is an extended dynamical system where a self-similar cosmic string structure has been claimed. Turbulence is a phenomenon where self-similarity is believed to occur in both space and time. Cooperative critical phenomena are well known in the context of phase transitions in equilibrium statistical mechanics. At the transition point, spatial self-similarity occurs, and the dynamical response function has a characteristic power-law \"1/f\" behavior. (We use quotes because flicker noise often involves frequency spectra with a power-law dependence on f whose exponent is only roughly equal to 1.0.) Low-dimensional nonequilibrium dynamical systems also undergo phase transitions (bifurcations, mode locking, intermittency, etc.) where the properties of the attractors change. However, the critical point can be reached only by fine tuning a parameter (e.g., temperature), and so may occur only accidentally in nature: It",
"title": ""
},
{
"docid": "neg:1840341_1",
"text": "We examine and compare simulation-based algorithms for solving the agent scheduling problem in a multiskill call center. This problem consists in minimizing the total costs of agents under constraints on the expected service level per call type, per period, and aggregated. We propose a solution approach that combines simulation with integer or linear programming, with cut generation. In our numerical experiments with realistic problem instances, this approach performs better than all other methods proposed previously for this problem. We also show that the two-step approach, which is the standard method for solving this problem, sometimes yield solutions that are highly suboptimal and inferior to those obtained by our proposed method. 2009 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "neg:1840341_2",
"text": "Automatic design via Bayesian optimization holds great promise given the constant increase of available data across domains. However, it faces difficulties from high-dimensional, potentially discrete, search spaces. We propose to probabilistically embed inputs into a lower dimensional, continuous latent space, where we perform gradient-based optimization guided by a Gaussian process. Building on variational autoncoders, we use both labeled and unlabeled data to guide the encoding and increase its accuracy. In addition, we propose an adversarial extension to render the latent representation invariant with respect to specific design attributes, which allows us to transfer these attributes across structures. We apply the framework both to a functional-protein dataset and to perform optimization of drag coefficients directly over high-dimensional shapes without incorporating domain knowledge or handcrafted features.",
"title": ""
},
{
"docid": "neg:1840341_3",
"text": "In this paper, we propose a high-speed parallel 128 bit multiplier for Ghash Function in conjunction with its FPGA implementation. Through the use of Verilog the designs are evaluated by using Xilinx Vertax5 with 65nm technic and 30,000 logic cells. The highest throughput of 30.764Gpbs can be achieved on virtex5 with the consumption of 8864 slices LUT. The proposed design of the multiplier can be utilized as a design IP core for the implementation of the Ghash Function. The architecture of the multiplier can also apply in more general polynomial basis. Moreover it can be used as arithmetic module in other encryption field.",
"title": ""
},
{
"docid": "neg:1840341_4",
"text": "Currently, much of machine learning is opaque, just like a “black box”. However, in order for humans to understand, trust and effectively manage the emerging AI systems, an AI needs to be able to explain its decisions and conclusions. In this paper, I propose an argumentation-based approach to explainable AI, which has the potential to generate more comprehensive explanations than existing approaches.",
"title": ""
},
{
"docid": "neg:1840341_5",
"text": "On the basis of the notion that the ability to exert self-control is critical to the regulation of aggressive behaviors, we suggest that mindfulness, an aspect of the self-control process, plays a key role in curbing workplace aggression. In particular, we note the conceptual and empirical distinctions between dimensions of mindfulness (i.e., mindful awareness and mindful acceptance) and investigate their respective abilities to regulate workplace aggression. In an experimental study (Study 1), a multiwave field study (Study 2a), and a daily diary study (Study 2b), we established that the awareness dimension, rather than the acceptance dimension, of mindfulness plays a more critical role in attenuating the association between hostility and aggression. In a second multiwave field study (Study 3), we found that mindful awareness moderates the association between hostility and aggression by reducing the extent to which individuals use dysfunctional emotion regulation strategies (i.e., surface acting), rather than by reducing the extent to which individuals engage in dysfunctional thought processes (i.e., rumination). The findings are discussed in terms of the implications of differentiating the dimensions and mechanisms of mindfulness for regulating workplace aggression. (PsycINFO Database Record",
"title": ""
},
{
"docid": "neg:1840341_6",
"text": "Medical image processing is the most challenging and emerging field now a day’s. In this field, detection of brain tumor from MRI brain scan has become one of the most challenging problems, due to complex structure of brain. The quantitative analysis of MRI brain tumor allows obtaining useful key indicators of disease progression. A computer aided diagnostic system has been proposed here for detecting the tumor texture in biological study. This is an attempt made which describes the proposed strategy for detection of tumor with the help of segmentation techniques in MATLAB; which incorporates preprocessing stages of noise removal, image enhancement and edge detection. Processing stages includes segmentation like intensity and watershed based segmentation, thresholding to extract the area of unwanted cells from the whole image. Here algorithms are proposed to calculate area and percentage of the tumor. Keywords— MRI, FCM, MKFCM, SVM, Otsu, threshold, fudge factor",
"title": ""
},
{
"docid": "neg:1840341_7",
"text": "Received Nov 12, 2017 Revised Jan 20, 2018 Accepted Feb 11, 2018 In this paper, a modification of PVD (Pixel Value Differencing) algorithm is used for Image Steganography in spatial domain. It is normalizing secret data value by encoding method to make the new pixel edge difference less among three neighbors (horizontal, vertical and diagonal) and embedding data only to less intensity pixel difference areas or regions. The proposed algorithm shows a good improvement for both color and gray-scale images compared to other algorithms. Color images performances are better than gray images. However, in this work the focus is mainly on gray images. The strenght of this scheme is that any random hidden/secret data do not make any shuttle differences to Steg-image compared to original image. The bit plane slicing is used to analyze the maximum payload that has been embeded into the cover image securely. The simulation results show that the proposed algorithm is performing better and showing great consistent results for PSNR, MSE values of any images, also against Steganalysis attack.",
"title": ""
},
{
"docid": "neg:1840341_8",
"text": "People invest time, attention, and emotion while engaging in various activities in the real-world, for either purposes of awareness or participation. Social media platforms such as Twitter offer tremendous opportunities for people to become engaged in such real-world events through information sharing and communicating about these events. However, little is understood about the factors that affect people’s Twitter engagement in such real-world events. In this paper, we address this question by first operationalizing a person’s Twitter engagement in real-world events such as posting, retweeting, or replying to tweets about such events. Next, we construct statistical models that examine multiple predictive factors associated with four different perspectives of users’ Twitter engagement, and quantify their potential influence on predicting the (i) presence; and (ii) degree – of the user’s engagement with 643 real-world events. We also consider the effect of these factors with respect to a finer granularization of the different categories of events. We find that the measures of people’s prior Twitter activities, topical interests, geolocation, and social network structures are all variously correlated to their engagement with real-world events.",
"title": ""
},
{
"docid": "neg:1840341_9",
"text": "Face perception, perhaps the most highly developed visual skill in humans, is mediated by a distributed neural system in humans that is comprised of multiple, bilateral regions. We propose a model for the organization of this system that emphasizes a distinction between the representation of invariant and changeable aspects of faces. The representation of invariant aspects of faces underlies the recognition of individuals, whereas the representation of changeable aspects of faces, such as eye gaze, expression, and lip movement, underlies the perception of information that facilitates social communication. The model is also hierarchical insofar as it is divided into a core system and an extended system. The core system is comprised of occipitotemporal regions in extrastriate visual cortex that mediate the visual analysis of faces. In the core system, the representation of invariant aspects is mediated more by the face-responsive region in the fusiform gyrus, whereas the representation of changeable aspects is mediated more by the face-responsive region in the superior temporal sulcus. The extended system is comprised of regions from neural systems for other cognitive functions that can be recruited to act in concert with the regions in the core system to extract meaning from faces.",
"title": ""
},
{
"docid": "neg:1840341_10",
"text": "This is the first study to measure the 'sense of community' reportedly offered by the CrossFit gym model. A cross-sectional study adapted Social Capital and General Belongingness scales to compare perceptions of a CrossFit gym and a traditional gym. CrossFit gym members reported significantly higher levels of social capital (both bridging and bonding) and community belongingness compared with traditional gym members. However, regression analysis showed neither social capital, community belongingness, nor gym type was an independent predictor of gym attendance. Exercise and health professionals may benefit from evaluating further the 'sense of community' offered by gym-based exercise programmes.",
"title": ""
},
{
"docid": "neg:1840341_11",
"text": "A bench model of the new generation intelligent universal transformer (IUT) has been recently developed for distribution applications. The distribution IUT employs high-voltage semiconductor device technologies along with multilevel converter circuits for medium-voltage grid connection. This paper briefly describes the basic operation of the IUT and its experimental setup. Performances under source and load disturbances are characterized with extensive tests using a voltage sag generator and various linear and nonlinear loads. Experimental results demonstrate that IUT input and output can avoid direct impact from its opposite side disturbances. The output voltage is well regulated when the voltage sag is applied to the input. The input voltage and current maintains clean sinusoidal and unity power factor when output is nonlinear load. Under load transients, the input and output voltages remain well regulated. These key features prove that the power quality performance of IUT is far superior to that of conventional copper-and-iron based transformers",
"title": ""
},
{
"docid": "neg:1840341_12",
"text": "Efficient and accurate similarity searching on a large time series data set is an important but non- trivial problem. In this work, we propose a new approach to improve the quality of similarity search on time series data by combining symbolic aggregate approximation (SAX) and piecewise linear approximation. The approach consists of three steps: transforming real valued time series sequences to symbolic strings via SAX, pattern matching on the symbolic strings and a post-processing via Piecewise Linear Approximation.",
"title": ""
},
{
"docid": "neg:1840341_13",
"text": "Diabetes mellitus is a chronic disease that leads to complications including heart disease, stroke, kidney failure, blindness and nerve damage. Type 2 diabetes, characterized by target-tissue resistance to insulin, is epidemic in industrialized societies and is strongly associated with obesity; however, the mechanism by which increased adiposity causes insulin resistance is unclear. Here we show that adipocytes secrete a unique signalling molecule, which we have named resistin (for resistance to insulin). Circulating resistin levels are decreased by the anti-diabetic drug rosiglitazone, and increased in diet-induced and genetic forms of obesity. Administration of anti-resistin antibody improves blood sugar and insulin action in mice with diet-induced obesity. Moreover, treatment of normal mice with recombinant resistin impairs glucose tolerance and insulin action. Insulin-stimulated glucose uptake by adipocytes is enhanced by neutralization of resistin and is reduced by resistin treatment. Resistin is thus a hormone that potentially links obesity to diabetes.",
"title": ""
},
{
"docid": "neg:1840341_14",
"text": "In recent years, geological disposal of radioactive waste has focused on placement of highand intermediate-level wastes in mined underground caverns at depths of 500–800 m. Notwithstanding the billions of dollars spent to date on this approach, the difficulty of finding suitable sites and demonstrating to the public and regulators that a robust safety case can be developed has frustrated attempts to implement disposal programmes in several countries, and no disposal facility for spent nuclear fuel exists anywhere. The concept of deep borehole disposal was first considered in the 1950s, but was rejected as it was believed to be beyond existing drilling capabilities. Improvements in drilling and associated technologies and advances in sealing methods have prompted a re-examination of this option for the disposal of high-level radioactive wastes, including spent fuel and plutonium. Since the 1950s, studies of deep boreholes have involved minimal investment. However, deep borehole disposal offers a potentially safer, more secure, cost-effective and environmentally sound solution for the long-term management of high-level radioactive waste than mined repositories. Potentially it could accommodate most of the world’s spent fuel inventory. This paper discusses the concept, the status of existing supporting equipment and technologies and the challenges that remain.",
"title": ""
},
{
"docid": "neg:1840341_15",
"text": "This paper proposes a new theory of the relationship between the sentence processing mechanism and the available computational resources. This theory--the Syntactic Prediction Locality Theory (SPLT)--has two components: an integration cost component and a component for the memory cost associated with keeping track of obligatory syntactic requirements. Memory cost is hypothesized to be quantified in terms of the number of syntactic categories that are necessary to complete the current input string as a grammatical sentence. Furthermore, in accordance with results from the working memory literature both memory cost and integration cost are hypothesized to be heavily influenced by locality (1) the longer a predicted category must be kept in memory before the prediction is satisfied, the greater is the cost for maintaining that prediction; and (2) the greater the distance between an incoming word and the most local head or dependent to which it attaches, the greater the integration cost. The SPLT is shown to explain a wide range of processing complexity phenomena not previously accounted for under a single theory, including (1) the lower complexity of subject-extracted relative clauses compared to object-extracted relative clauses, (2) numerous processing overload effects across languages, including the unacceptability of multiply center-embedded structures, (3) the lower complexity of cross-serial dependencies relative to center-embedded dependencies, (4) heaviness effects, such that sentences are easier to understand when larger phrases are placed later and (5) numerous ambiguity effects, such as those which have been argued to be evidence for the Active Filler Hypothesis.",
"title": ""
},
{
"docid": "neg:1840341_16",
"text": "The recent trend of outsourcing network functions, aka. middleboxes, raises confidentiality and integrity concern on redirected packet, runtime state, and processing result. The outsourced middleboxes must be protected against cyber attacks and malicious service provider. It is challenging to simultaneously achieve strong security, practical performance, complete functionality and compatibility. Prior software-centric approaches relying on customized cryptographic primitives fall short of fulfilling one or more desired requirements. In this paper, after systematically addressing key challenges brought to the fore, we design and build a secure SGX-assisted system, LightBox, which supports secure and generic middlebox functions, efficient networking, and most notably, lowoverhead stateful processing. LightBox protects middlebox from powerful adversary, and it allows stateful network function to run at nearly native speed: it adds only 3μs packet processing delay even when tracking 1.5M concurrent flows.",
"title": ""
},
{
"docid": "neg:1840341_17",
"text": "We present the cases of three children with patent ductus arteriosus (PDA), pulmonary arterial hypertension (PAH), and desaturation. One of them had desaturation associated with atrial septal defect (ASD). His ASD, PAH, and desaturation improved after successful device closure of the PDA. The other two had desaturation associated with Down syndrome. One had desaturation only at room air oxygen (21% oxygen) but well saturated with 100% oxygen, subsequently underwent successful device closure of the PDA. The other had experienced desaturation at a younger age but spontaneously recovered when he was older, following attempted device closure of the PDA, with late embolization of the device.",
"title": ""
},
{
"docid": "neg:1840341_18",
"text": "This paper presents a Generative Adversarial Network (GAN) to model multi-turn dialogue generation, which trains a latent hierarchical recurrent encoder-decoder simultaneously with a discriminative classifier that make the prior approximate to the posterior. Experiments show that our model achieves better results.",
"title": ""
},
{
"docid": "neg:1840341_19",
"text": "It , is generally unrecognized that Sigmund Freud's contribution to the scientific understanding of dreams derived from a radical reorientation to the dream experience. During the nineteenth century, before publication of The Interpretation of Dreams, the presence of dreaming was considered by the scientific community as a manifestation of mental activity during sleep. The state of sleep was given prominence as a factor accounting for the seeming lack of organization and meaning to the dream experience. Thus, the assumed relatively nonpsychological sleep state set the scientific stage for viewing the nature of the dream. Freud radically shifted the context. He recognized-as myth, folklore, and common sense had long understood-that dreams were also linked with the psychology of waking life. This shift in orientation has proved essential for our modern view of dreams and dreaming. Dreams are no longer dismissed as senseless notes hit at random on a piano keyboard by an untrained player. Dreams are now recognized as psychologically significant and meaningful expressions of the life of the dreamer, albeit expressed in disguised and concealed forms. (For a contrasting view, see AcFIIa ION_sYNTHESIS xxroTESis .) Contemporary Dream Research During the past quarter-century, there has been increasing scientific interest in the process of dreaming. A regular sleep-wakefulness cycle has been discovered, and if experimental subjects are awakened during periods of rapid eye movements (REM periods), they will frequently report dreams. In a typical night, four or five dreams occur during REM periods, accompanied by other signs of physiological activation, such as increased respiratory rate, heart rate, and penile and clitoral erection. Dreams usually last for the duration of the eye movements, from about 10 to 25 minutes. Although dreaming usually occurs in such regular cycles ;.dreaming may occur at other times during sleep, as well as during hypnagogic (falling asleep) or hypnopompic .(waking up) states, when REMs are not present. The above findings are discoveries made since the monumental work of Freud reported in The Interpretation of Dreams, and .although of great interest to the study of the mind-body problem, these .findings as yet bear only a peripheral relationship to the central concerns of the psychology of dream formation, the meaning of dream content, the dream as an approach to a deeper understanding of emotional life, and the use of the dream in psychoanalytic treatment .",
"title": ""
}
] |
1840342 | Causal Discovery from Subsampled Time Series Data by Constraint Optimization | [
{
"docid": "pos:1840342_0",
"text": "Recent approaches to causal discovery based on Boolean satisfiability solvers have opened new opportunities to consider search spaces for causal models with both feedback cycles and unmeasured confounders. However, the available methods have so far not been able to provide a principled account of how to handle conflicting constraints that arise from statistical variability. Here we present a new approach that preserves the versatility of Boolean constraint solving and attains a high accuracy despite the presence of statistical errors. We develop a new logical encoding of (in)dependence constraints that is both well suited for the domain and allows for faster solving. We represent this encoding in Answer Set Programming (ASP), and apply a state-of-theart ASP solver for the optimization task. Based on different theoretical motivations, we explore a variety of methods to handle statistical errors. Our approach currently scales to cyclic latent variable models with up to seven observed variables and outperforms the available constraintbased methods in accuracy.",
"title": ""
}
] | [
{
"docid": "neg:1840342_0",
"text": "Recent advances in neural variational inference have facilitated efficient training of powerful directed graphical models with continuous latent variables, such as variational autoencoders. However, these models usually assume simple, unimodal priors — such as the multivariate Gaussian distribution — yet many realworld data distributions are highly complex and multi-modal. Examples of complex and multi-modal distributions range from topics in newswire text to conversational dialogue responses. When such latent variable models are applied to these domains, the restriction of the simple, uni-modal prior hinders the overall expressivity of the learned model as it cannot possibly capture more complex aspects of the data distribution. To overcome this critical restriction, we propose a flexible, simple prior distribution which can be learned efficiently and potentially capture an exponential number of modes of a target distribution. We develop the multi-modal variational encoder-decoder framework and investigate the effectiveness of the proposed prior in several natural language processing modeling tasks, including document modeling and dialogue modeling.",
"title": ""
},
{
"docid": "neg:1840342_1",
"text": "Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17.",
"title": ""
},
{
"docid": "neg:1840342_2",
"text": "I discuss open theoretical questions pertaining to the modified dynamics (MOND)–a proposed alternative to dark matter, which posits a breakdown of Newtonian dynamics in the limit of small accelerations. In particular, I point the reasons for thinking that MOND is an effective theory–perhaps, despite appearance, not even in conflict with GR. I then contrast the two interpretations of MOND as modified gravity and as modified inertia. I describe two mechanical models that are described by potential theories similar to (non-relativistic) MOND: a potential-flow model, and a membrane model. These might shed some light on a possible origin of MOND. The possible involvement of vacuum effects is also speculated on.",
"title": ""
},
{
"docid": "neg:1840342_3",
"text": "X-rays are commonly performed imaging tests that use small amounts of radiation to produce pictures of the organs, tissues, and bones of the body. X-rays of the chest are used to detect abnormalities or diseases of the airways, blood vessels, bones, heart, and lungs. In this work we present a stochastic attention-based model that is capable of learning what regions within a chest X-ray scan should be visually explored in order to conclude that the scan contains a specific radiological abnormality. The proposed model is a recurrent neural network (RNN) that learns to sequentially sample the entire X-ray and focus only on informative areas that are likely to contain the relevant information. We report on experiments carried out with more than 100, 000 X-rays containing enlarged hearts or medical devices. The model has been trained using reinforcement learning methods to learn task-specific policies.",
"title": ""
},
{
"docid": "neg:1840342_4",
"text": "Data mining is an area of computer science with a huge prospective, which is the process of discovering or extracting information from large database or datasets. There are many different areas under Data Mining and one of them is Classification or the supervised learning. Classification also can be implemented through a number of different approaches or algorithms. We have conducted the comparison between three algorithms with help of WEKA (The Waikato Environment for Knowledge Analysis), which is an open source software. It contains different type's data mining algorithms. This paper explains discussion of Decision tree, Bayesian Network and K-Nearest Neighbor algorithms. Here, for comparing the result, we have used as parameters the correctly classified instances, incorrectly classified instances, time taken, kappa statistic, relative absolute error, and root relative squared error.",
"title": ""
},
{
"docid": "neg:1840342_5",
"text": "This paper addresses empirically and theoretically a question derived from the chunking theory of memory (Chase & Simon, 1973a, 1973b): To what extent is skilled chess memory limited by the size of short-term memory (about seven chunks)? This question is addressed first with an experiment where subjects, ranking from class A players to grandmasters, are asked to recall up to five positions presented during 5 s each. Results show a decline of percentage of recall with additional boards, but also show that expert players recall more pieces than is predicted by the chunking theory in its original form. A second experiment shows that longer latencies between the presentation of boards facilitate recall. In a third experiment, a Chessmaster gradually increases the number of boards he can reproduce with higher than 70% average accuracy to nine, replacing as many as 160 pieces correctly. To account for the results of these experiments, a revision of the Chase-Simon theory is proposed. It is suggested that chess players, like experts in other recall tasks, use long-term memory retrieval structures (Chase & Ericsson, 1982) or templates in addition to chunks in short-term memory to store information rapidly.",
"title": ""
},
{
"docid": "neg:1840342_6",
"text": "Instant Messaging chat sessions are realtime text-based conversations which can be analyzed using dialogue-act models. We describe a statistical approach for modelling and detecting dialogue acts in Instant Messaging dialogue. This involved the collection of a small set of task-based dialogues and annotating them with a revised tag set. We then dealt with segmentation and synchronisation issues which do not arise in spoken dialogue. The model we developed combines naive Bayes and dialogue-act n-grams to obtain better than 80% accuracy in our tagging experiment.",
"title": ""
},
{
"docid": "neg:1840342_7",
"text": "The aim of transfer learning is to improve prediction accuracy on a target task by exploiting the training examples for tasks that are related to the target one. Transfer learning has received more attention in recent years, because this technique is considered to be helpful in reducing the cost of labeling. In this paper, we propose a very simple approach to transfer learning: TrBagg, which is the extension of bagging. TrBagg is composed of two stages: Many weak classifiers are first generated as in standard bagging, and these classifiers are then filtered based on their usefulness for the target task. This simplicity makes it easy to work reasonably well without severe tuning of learning parameters. Further, our algorithm equips an algorithmic scheme to avoid negative transfer. We applied TrBagg to personalized tag prediction tasks for social bookmarks Our approach has several convenient characteristics for this task such as adaptation to multiple tasks with low computational cost.",
"title": ""
},
{
"docid": "neg:1840342_8",
"text": "Over the last few years, we've seen a plethora of Internet of Things (IoT) solutions, products, and services make their way into the industry's marketplace. All such solutions will capture large amounts of data pertaining to the environment as well as their users. The IoT's objective is to learn more and better serve system users. Some IoT solutions might store data locally on devices (\"things\"), whereas others might store it in the cloud. The real value of collecting data comes through data processing and aggregation on a large scale, where new knowledge can be extracted. However, such procedures can lead to user privacy issues. This article discusses some of the main challenges of privacy in the IoT as well as opportunities for research and innovation. The authors also introduce some of the ongoing research efforts that address IoT privacy issues.",
"title": ""
},
{
"docid": "neg:1840342_9",
"text": "The planet Mars, while cold and arid today, once possessed a warm and wet climate, as evidenced by extensive fluvial features observable on its surface. It is believed that the warm climate of the primitive Mars was created by a strong greenhouse effect caused by a thick CO2 atmosphere. Mars lost its warm climate when most of the available volatile CO2 was fixed into the form of carbonate rock due to the action of cycling water. It is believed, however, that sufficient CO2 to form a 300 to 600 mb atmosphere may still exist in volatile form, either adsorbed into the regolith or frozen out at the south pole. This CO2 may be released by planetary warming, and as the CO2 atmosphere thickens, positive feedback is produced which can accelerate the warming trend. Thus it is conceivable, that by taking advantage of the positive feedback inherent in Mars' atmosphere/regolith CO2 system, that engineering efforts can produce drastic changes in climate and pressure on a planetary scale. In this paper we propose a mathematical model of the Martian CO2 system, and use it to produce analysis which clarifies the potential of positive feedback to accelerate planetary engineering efforts. It is shown that by taking advantage of the feedback, the requirements for planetary engineering can be reduced by about 2 orders of magnitude relative to previous estimates. We examine the potential of various schemes for producing the initial warming to drive the process, including the stationing of orbiting mirrors, the importation of natural volatiles with high greenhouse capacity from the outer solar system, and the production of artificial halocarbon greenhouse gases on the Martian surface through in-situ industry. If the orbital mirror scheme is adopted, mirrors with dimension on the order or 100 km radius are required to vaporize the CO2 in the south polar cap. If manufactured of solar sail like material, such mirrors would have a mass on the order of 200,000 tonnes. If manufactured in space out of asteroidal or Martian moon material, about 120 MWe-years of energy would be needed to produce the required aluminum. This amount of power can be provided by near-term multimegawatt nuclear power units, such as the 5 MWe modules now under consideration for NEP spacecraft. Orbital transfer of very massive bodies from the outer solar system can be accomplished using nuclear thermal rocket engines using the asteroid's volatile material as propellant. Using major planets for gravity assists, the rocket ∆V required to move an outer solar system asteroid onto a collision trajectory with Mars can be as little as 300 m/s. If the asteroid is made of NH3, specific impulses of about 400 s can be attained, and as little as 10% of the asteroid will be required for propellant. Four 5000 MWt NTR engines would require a 10 year burn time to push a 10 billion tonne asteroid through a ∆V of 300 m/s. About 4 such objects would be sufficient to greenhouse Mars. Greenhousing Mars via the manufacture of halocarbon gases on the planet's surface may well be the most practical option. Total surface power requirements to drive planetary warming using this method are calculated and found to be on the order of 1000 MWe, and the required times scale for climate and atmosphere modification is on the order of 50 years. It is concluded that a drastic modification of Martian conditions can be achieved using 21st century technology. The Mars so produced will closely resemble the conditions existing on the primitive Mars. 
Humans operating on the surface of such a Mars would require breathing gear, but pressure suits would be unnecessary. With outside atmospheric pressures raised, it will be possible to create large dwelling areas by means of very large inflatable structures. Average temperatures could be above the freezing point of water for significant regions during portions of the year, enabling the growth of plant life in the open. The spread of plants could produce enough oxygen to make Mars habitable for animals in several millennia. More rapid oxygenation would require engineering efforts supported by multi-terawatt power sources. It is speculated that the desire to speed the terraforming of Mars will be a driver for developing such technologies, which in turn will define a leap in human power over nature as dramatic as that which accompanied the creation of post-Renaissance industrial civilization.",
"title": ""
},
{
"docid": "neg:1840342_10",
"text": "Quick Response Code has been widely used in the automatic identification fields. In order to adapting various sizes, a little dirty or damaged, and various lighting conditions of bar code image, this paper proposes a novel implementation of real-time Quick Response Code recognition using mobile, which is an efficient technology used for data transferring. An image processing system based on mobile is described to be able to binarize, locate, segment, and decode the QR Code. Our experimental results indicate that these algorithms are robust to real world scene image.",
"title": ""
},
{
"docid": "neg:1840342_11",
"text": "Wearable medical sensors (WMSs) are garnering ever-increasing attention from both the scientific community and the industry. Driven by technological advances in sensing, wireless communication, and machine learning, WMS-based systems have begun transforming our daily lives. Although WMSs were initially developed to enable low-cost solutions for continuous health monitoring, the applications of WMS-based systems now range far beyond health care. Several research efforts have proposed the use of such systems in diverse application domains, e.g., education, human-computer interaction, and security. Even though the number of such research studies has grown drastically in the last few years, the potential challenges associated with their design, development, and implementation are neither well-studied nor well-recognized. This article discusses various services, applications, and systems that have been developed based on WMSs and sheds light on their design goals and challenges. We first provide a brief history of WMSs and discuss how their market is growing. We then discuss the scope of applications of WMS-based systems. Next, we describe the architecture of a typical WMS-based system and the components that constitute such a system, and their limitations. Thereafter, we suggest a list of desirable design goals that WMS-based systems should satisfy. Finally, we discuss various research directions related to WMSs and how previous research studies have attempted to address the limitations of the components used in WMS-based systems and satisfy the desirable design goals.",
"title": ""
},
{
"docid": "neg:1840342_12",
"text": "PubChem (http://pubchem.ncbi.nlm.nih.gov) is a public repository for biological properties of small molecules hosted by the US National Institutes of Health (NIH). PubChem BioAssay database currently contains biological test results for more than 700 000 compounds. The goal of PubChem is to make this information easily accessible to biomedical researchers. In this work, we present a set of web servers to facilitate and optimize the utility of biological activity information within PubChem. These web-based services provide tools for rapid data retrieval, integration and comparison of biological screening results, exploratory structure-activity analysis, and target selectivity examination. This article reviews these bioactivity analysis tools and discusses their uses. Most of the tools described in this work can be directly accessed at http://pubchem.ncbi.nlm.nih.gov/assay/. URLs for accessing other tools described in this work are specified individually.",
"title": ""
},
{
"docid": "neg:1840342_13",
"text": "Abst ract Qualit at ive case st udy met hodology provides t ools f or researchers t o st udy complex phenomena wit hin t heir cont ext s. When t he approach is applied correct ly, it becomes a valuable met hod f or healt h science research t o develop t heory, evaluat e programs, and develop int ervent ions. T he purpose of t his paper is t o guide t he novice researcher in ident if ying t he key element s f or designing and implement ing qualit at ive case st udy research project s. An overview of t he t ypes of case st udy designs is provided along wit h general recommendat ions f or writ ing t he research quest ions, developing proposit ions, det ermining t he “case” under st udy, binding t he case and a discussion of dat a sources and t riangulat ion. T o f acilit at e applicat ion of t hese principles, clear examples of research quest ions, st udy proposit ions and t he dif f erent t ypes of case st udy designs are provided Keywo rds Case St udy and Qualit at ive Met hod Publicat io n Dat e 12-1-2008 Creat ive Co mmo ns License Journal Home About T his Journal Aims & Scope Edit orial Board Policies Open Access",
"title": ""
},
{
"docid": "neg:1840342_14",
"text": "LPWAN (Low Power Wide Area Networks) technologies have been attracting attention continuously in IoT (Internet of Things). LoRaWAN is present on the market as a LPWAN technology and it has features such as low power consumption, low transceiver chip cost and wide coverage area. In the LoRaWAN, end devices must perform a join procedure for participating in the network. Attackers could exploit the join procedure because it has vulnerability in terms of security. Replay attack is a method of exploiting the vulnerability in the join procedure. In this paper, we propose a attack scenario and a countermeasure against replay attack that may occur in the join request transfer process.",
"title": ""
},
{
"docid": "neg:1840342_15",
"text": "INTRODUCTION Pivotal to athletic performance is the ability to more maintain desired athletic performance levels during particularly critical periods of competition [1], such as during pressurised situations that typically evoke elevated levels of anxiety (e.g., penalty kicks) or when exposed to unexpected adversities (e.g., unfavourable umpire calls on crucial points) [2, 3]. These kinds of situations become markedly important when athletes, who are separated by marginal physical and technical differences, are engaged in closely contested matches, games, or races [4]. It is within these competitive conditions, in particular, that athletes’ responses define their degree of success (or lack thereof); responses that are largely dependent on athletes’ psychological attributes [5]. One of these attributes appears to be mental toughness (MT), which has often been classified as a critical success factor due to the role it plays in fostering adaptive responses to positively and negatively construed pressures, situations, and events [6 8]. However, as scholars have intensified",
"title": ""
},
{
"docid": "neg:1840342_16",
"text": "Detecting and identifying any phishing websites in real-time, particularly for e-banking is really a complex and dynamic problem involving many factors and criteria. Because of the subjective considerations and the ambiguities involved in the detection, Fuzzy Data Mining (DM) Techniques can be an effective tool in assessing and identifying phishing websites for e-banking since it offers a more natural way of dealing with quality factors rather than exact values. In this paper, we present novel approach to overcome the ‘fuzziness’ in the e-banking phishing website assessment and propose an intelligent resilient and effective model for detecting e-banking phishing websites. The proposed model is based on Fuzzy logic (FL) combined with Data Mining algorithms to characterize the e-banking phishing website factors and to investigate its techniques by classifying there phishing types and defining six e-banking phishing website attack criteria’s with a layer structure. The proposed e-banking phishing website model showed the significance importance of the phishing website two criteria’s (URL & Domain Identity) and (Security & Encryption) in the final phishing detection rate result, taking into consideration its characteristic association and relationship with each others as showed from the fuzzy data mining classification and association rule algorithms. Our phishing model also showed the insignificant trivial influence of the (Page Style & Content) criteria along with (Social Human Factor) criteria in the phishing detection final rate result.",
"title": ""
},
{
"docid": "neg:1840342_17",
"text": "The impact of predictive genetic testing on cancer care can be measured by the increased demand for and utilization of genetic services as well as in the progress made in reducing cancer risks in known mutation carriers. Nonetheless, differential access to and utilization of genetic counseling and cancer predisposition testing among underserved racial and ethnic minorities compared with the white population has led to growing health care disparities in clinical cancer genetics that are only beginning to be addressed. Furthermore, deficiencies in the utility of genetic testing in underserved populations as a result of limited testing experience and in the effectiveness of risk-reducing interventions compound access and knowledge-base disparities. The recent literature on racial/ethnic health care disparities is briefly reviewed, and is followed by a discussion of the current limitations of risk assessment and genetic testing outside of white populations. The importance of expanded testing in underserved populations is emphasized.",
"title": ""
},
{
"docid": "neg:1840342_18",
"text": "In this paper, we study the design and workspace of a 6–6 cable-suspended parallel robot. The workspace volume is characterized as the set of points where the centroid of the moving platform can reach with tensions in all suspension cables at a constant orientation. This paper attempts to tackle some aspects of optimal design of a 6DOF cable robot by addressing the variations of the workspace volume and the accuracy of the robot using different geometric configurations, different sizes and orientations of the moving platform. The global condition index is used as a performance index of a robot with respect to the force and velocity transmission over the whole workspace. The results are used for design analysis of the cable-robot for a specific motion of the moving platform. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840342_19",
"text": "We present a named-entity recognition (NER) system for parallel multilingual text. Our system handles three languages (i.e., English, French, and Spanish) and is tailored to the biomedical domain. For each language, we design a supervised knowledge-based CRF model with rich biomedical and general domain information. We use the sentence alignment of the parallel corpora, the word alignment generated by the GIZA++[8] tool, and Wikipedia-based word alignment in order to transfer system predictions made by individual language models to the remaining parallel languages. We re-train each individual language system using the transferred predictions and generate a final enriched NER model for each language. The enriched system performs better than the initial system based on the predictions transferred from the other language systems. Each language model benefits from the external knowledge extracted from biomedical and general domain resources.",
"title": ""
}
] |
1840343 | CBCD: Cloned buggy code detector | [
{
"docid": "pos:1840343_0",
"text": "Software security vulnerabilities are discovered on an almost daily basis and have caused substantial damage. Aiming at supporting early detection and resolution for them, we have conducted an empirical study on thousands of vulnerabilities and found that many of them are recurring due to software reuse. Based on the knowledge gained from the study, we developed SecureSync, an automatic tool to detect recurring software vulnerabilities on the systems that reuse source code or libraries. The core of SecureSync includes two techniques to represent and compute the similarity of vulnerable code across different systems. The evaluation for 60 vulnerabilities on 176 releases of 119 open-source software systems shows that SecureSync is able to detect recurring vulnerabilities with high accuracy and to identify 90 releases having potentially vulnerable code that are not reported or fixed yet, even in mature systems. A couple of cases were actually confirmed by their developers.",
"title": ""
}
] | [
{
"docid": "neg:1840343_0",
"text": "With the proliferation of computing and information technologies, we have an opportunity to envision a fully participatory democracy in the country through a fully digitized voting platform. However, the growing interconnectivity of systems and people across the globe, and the proliferation of cybersecurity issues pose a significant bottleneck towards achieving such a vision. In this paper, we discuss a vision to modernize our voting processes and discuss the challenges for creating a national e-voting framework that incorporates policies, standards and technological infrastructure that is secure, privacy-preserving, resilient and transparent. Through partnerships among private industry, academia, and State and Federal Government, technology must be the catalyst to develop a national platform for American voters. Along with integrating biometrics to authenticate each registered voter for transparency and accountability, the platform provides depth in the e-voting infrastructure with emerging blockchain technologies. We outline the way voting process runs today with the challenges; states are having from funding to software development concerns. Additionally, we highlight attacks from malware infiltrations from off the shelf products made from factories from countries such as China. This paper illustrates a strategic level of voting challenges and modernizing processes that will enhance the voter’s trust in America democracy.",
"title": ""
},
{
"docid": "neg:1840343_1",
"text": "Nonlinear manifold learning from unorganized data points is a very challenging unsupervised learning and data visualization problem with a great variety of applications. In this paper we present a new algorithm for manifold learning and nonlinear dimension reduction. Based on a set of unorganized data points sampled with noise from the manifold, we represent the local geometry of the manifold using tangent spaces learned by fitting an affine subspace in a neighborhood of each data point. Those tangent spaces are aligned to give the internal global coordinates of the data points with respect to the underlying manifold by way of a partial eigendecomposition of the neighborhood connection matrix. We present a careful error analysis of our algorithm and show that the reconstruction errors are of second-order accuracy. We illustrate our algorithm using curves and surfaces both in 2D/3D and higher dimensional Euclidean spaces, and 64-by-64 pixel face images with various pose and lighting conditions. We also address several theoretical and algorithmic issues for further research and improvements.",
"title": ""
},
{
"docid": "neg:1840343_2",
"text": "This paper describes the application of a pedagogical model called \\learning as a research activity\" [D. Gil-P erez and J. Carrascosa-Alis, Science Education 78 (1994) 301{315] to the design and implementation of a two-semester course on compiler design for Computer Engineering students. In the new model, the classical pattern of classroom activity based mainly on one-way knowledge transmission/reception of pre-elaborated concepts is replaced by an active working environment that resembles that of a group of novice researchers under the supervision of an expert. The new model, rooted in the now commonly-accepted constructivist postulates, strives for meaningful acquisition of fundamental concepts through problem solving |in close parallelism to the construction of scienti c knowledge through history.",
"title": ""
},
{
"docid": "neg:1840343_3",
"text": "A new Depth Image Layers Separation (DILS) algorithm for synthesizing inter-view images based on disparity depth map layers representation is presented. The approach is to separate the depth map into several layers identified through histogram-based clustering. Each layer is extracted using inter-view interpolation to create objects based on location and depth. DILS is a new paradigm in selecting interesting image locations based on depth, but also in producing new image representations that allow objects or parts of an image to be described without the need of segmentation and identification. The image view synthesis can reduce the configuration complexity of multi-camera arrays in 3D imagery and free-viewpoint applications. The simulation results show that depth layer separation is able to create inter-view images that may be integrated with other techniques such as occlusion handling processes. The DILS algorithm can be implemented using both simple as well as sophisticated stereo matching methods to synthesize inter-view images.",
"title": ""
},
{
"docid": "neg:1840343_4",
"text": "This article argues that technological innovation is transforming the flow of information, the fluidity of social action, and is giving birth to new forms of bottom up innovation that are capable of expanding and exploding old theories of reproduction and resistance because 'smart mobs', 'street knowledge', and 'social movements' cannot be neutralized by powerful structural forces in the same old ways. The purpose of this article is to develop the concept of YPAR 2.0 in which new technologies enable young people to visualize, validate, and transform social inequalities by using local knowledge in innovative ways that deepen civic engagement, democratize data, expand educational opportunity, inform policy, and mobilize community assets. Specifically this article documents how digital technology (including a mobile, mapping and SMS platform called Streetwyze and paper-mapping tool Local Ground) - coupled with 'ground-truthing' - an approach in which community members work with researchers to collect and verify 'public' data - sparked a food revolution in East Oakland that led to an increase in young people's self-esteem, environmental stewardship, academic engagement, and positioned urban youth to become community leaders and community builders who are connected and committed to health and well-being of their neighborhoods. This article provides an overview of how the YPAR 2.0 Model was developed along with recommendations and implications for future research and collaborations between youth, teachers, neighborhood leaders, and youth serving organizations.",
"title": ""
},
{
"docid": "neg:1840343_5",
"text": "Existing Natural Language Generation (nlg) systems are weak AI systems and exhibit limited capabilities when language generation tasks demand higher levels of creativity, originality and brevity. Eective solutions or, at least evaluations of modern nlg paradigms for such creative tasks have been elusive, unfortunately. is paper introduces and addresses the task of coherent story generation from independent descriptions, describing a scene or an event. Towards this, we explore along two popular text-generation paradigms – (1) Statistical Machine Translation (smt), posing story generation as a translation problem and (2) Deep Learning, posing story generation as a sequence to sequence learning problem. In SMT, we chose two popular methods such as phrase based SMT (pb-SMT) and syntax based SMT (syntax-SMT) to ‘translate’ the incoherent input text into stories. We then implement a deep recurrent neural network (rnn) architecture that encodes sequence of variable length input descriptions to corresponding latent representations and decodes them to produce well formed comprehensive story like summaries. e ecacy of the suggested approaches is demonstrated on a publicly available dataset with the help of popular machine translation and summarization evaluation metrics. We believe, a system like ours has dierent interesting applicationsfor example, creating news articles from phrases of event information.",
"title": ""
},
{
"docid": "neg:1840343_6",
"text": "First we report on a new threat campaign, underway in Korea, which infected around 20,000 Android users within two months. The campaign attacked mobile users with malicious applications spread via different channels, such as email attachments or SMS spam. A detailed investigation of the Android malware resulted in the identification of a new Android malware family Android/BadAccents. The family represents current state-of-the-art in mobile malware development for banking trojans. Second, we describe in detail the techniques this malware family uses and confront them with current state-of-the-art static and dynamic codeanalysis techniques for Android applications. We highlight various challenges for automatic malware analysis frameworks that significantly hinder the fully automatic detection of malicious components in current Android malware. Furthermore, the malware exploits a previously unknown tapjacking vulnerability in the Android operating system, which we describe. As a result of this work, the vulnerability, affecting all Android versions, will be patched in one of the next releases of the Android Open Source Project.",
"title": ""
},
{
"docid": "neg:1840343_7",
"text": "The design of a low-cost rectenna for low-power applications is presented. The rectenna is designed with the use of analytical models and closed-form analytical expressions. This allows for a fast design of the rectenna system. To acquire a small-area rectenna, a layered design is proposed. Measurements indicate the validity range of the analytical models.",
"title": ""
},
{
"docid": "neg:1840343_8",
"text": "This meta-analysis evaluated predictors of both objective and subjective sales performance. Biodata measures and sales ability inventories were good predictors of the ratings criterion, with corrected rs of .52 and .45, respectively. Potency (a subdimension of the Big 5 personality dimension Extraversion) predicted supervisor ratings of performance (r = .28) and objective measures of sales (r — .26). Achievement (a component of the Conscientiousness dimension) predicted ratings (r = .25) and objective sales (r = .41). General cognitive ability showed a correlation of .40 with ratings but only .04 with objective sales. Similarly, age predicted ratings (r = .26) but not objective sales (r = —.06). On the basis of a small number of studies, interest appears to be a promising predictor of sales success.",
"title": ""
},
{
"docid": "neg:1840343_9",
"text": "Body composition in older adults can be assessed using simple, convenient but less precise anthropometric methods to assess (regional) body fat and skeletal muscle, or more elaborate, precise and costly methods such as computed tomography and magnetic resonance imaging. Body weight and body fat percentage generally increase with aging due to an accumulation of body fat and a decline in skeletal muscle mass. Body weight and fatness plateau at age 75–80 years, followed by a gradual decline. However, individual weight patterns may differ and the periods of weight loss and weight (re)gain common in old age may affect body composition. Body fat redistributes with aging, with decreasing subcutaneous and appendicular fat and increasing visceral and ectopic fat. Skeletal muscle mass declines with aging, a process called sarcopenia. Obesity in old age is associated with a higher risk of mobility limitations, disability and mortality. A higher waist circumference and more visceral fat increase these risks, independent of overall body fatness, as do involuntary weight loss and weight cycling. The role of low skeletal muscle mass in the development of mobility limitations and disability remains controversial, but it is much smaller than the role of high body fat. Low muscle mass does not seem to increase mortality risk in older adults.",
"title": ""
},
{
"docid": "neg:1840343_10",
"text": "Foreign Exchange (Forex) market is a complex and challenging task for prediction due to uncertainty movement of exchange rate. However, these movements over timeframe also known as historical Forex data that offered a generic repeated trend patterns. This paper uses the features extracted from trend patterns to model and predict the next day trend. Hidden Markov Models (HMMs) is applied to learn the historical trend patterns, and use to predict the next day movement trends. We use the 2011 Forex historical data of Australian Dollar (AUS) and European Union Dollar (EUD) against the United State Dollar (USD) for modeling, and the 2012 and 2013 Forex historical data for validating the proposed model. The experimental results show outperforms prediction result for both years.",
"title": ""
},
{
"docid": "neg:1840343_11",
"text": "Deep learning has become increasingly popular in both academic and industrial areas in the past years. Various domains including pattern recognition, computer vision, and natural language processing have witnessed the great power of deep networks. However, current studies on deep learning mainly focus on data sets with balanced class labels, while its performance on imbalanced data is not well examined. Imbalanced data sets exist widely in real world and they have been providing great challenges for classification tasks. In this paper, we focus on the problem of classification using deep network on imbalanced data sets. Specifically, a novel loss function called mean false error together with its improved version mean squared false error are proposed for the training of deep networks on imbalanced data sets. The proposed method can effectively capture classification errors from both majority class and minority class equally. Experiments and comparisons demonstrate the superiority of the proposed approach compared with conventional methods in classifying imbalanced data sets on deep neural networks.",
"title": ""
},
{
"docid": "neg:1840343_12",
"text": "Graph clustering aims to discovercommunity structures in networks, the task being fundamentally challenging mainly because the topology structure and the content of the graphs are difficult to represent for clustering analysis. Recently, graph clustering has moved from traditional shallow methods to deep learning approaches, thanks to the unique feature representation learning capability of deep learning. However, existing deep approaches for graph clustering can only exploit the structure information, while ignoring the content information associated with the nodes in a graph. In this paper, we propose a novel marginalized graph autoencoder (MGAE) algorithm for graph clustering. The key innovation of MGAE is that it advances the autoencoder to the graph domain, so graph representation learning can be carried out not only in a purely unsupervised setting by leveraging structure and content information, it can also be stacked in a deep fashion to learn effective representation. From a technical viewpoint, we propose a marginalized graph convolutional network to corrupt network node content, allowing node content to interact with network features, and marginalizes the corrupted features in a graph autoencoder context to learn graph feature representations. The learned features are fed into the spectral clustering algorithm for graph clustering. Experimental results on benchmark datasets demonstrate the superior performance of MGAE, compared to numerous baselines.",
"title": ""
},
{
"docid": "neg:1840343_13",
"text": "In the aftermath of recent corporate scandals, managers and researchers have turned their attention to questions of ethics management. We identify five common myths about business ethics and provide responses that are grounded in theory, research, and business examples. Although the scientific study of business ethics is relatively new, theory and research exist that can guide executives who are trying to better manage their employees' and their own ethical behavior. We recommend that ethical conduct be managed proactively via explicit ethical leadership and conscious management of the organization's ethical culture.",
"title": ""
},
{
"docid": "neg:1840343_14",
"text": "The current lack of knowledge about the effect of maternally administered drugs on the developing fetus is a major public health concern worldwide. The first critical step toward predicting the safety of medications in pregnancy is to screen drug compounds for their ability to cross the placenta. However, this type of preclinical study has been hampered by the limited capacity of existing in vitro and ex vivo models to mimic physiological drug transport across the maternal-fetal interface in the human placenta. Here the proof-of-principle for utilizing a microengineered model of the human placental barrier to simulate and investigate drug transfer from the maternal to the fetal circulation is demonstrated. Using the gestational diabetes drug glyburide as a model compound, it is shown that the microphysiological system is capable of reconstituting efflux transporter-mediated active transport function of the human placental barrier to limit fetal exposure to maternally administered drugs. The data provide evidence that the placenta-on-a-chip may serve as a new screening platform to enable more accurate prediction of drug transport in the human placenta.",
"title": ""
},
{
"docid": "neg:1840343_15",
"text": "For evaluating or training different kinds of vision algorithms, a large amount of precise and reliable data is needed. In this paper we present a system to create extended synthetic sequences of traffic environment scenarios, associated with several types of ground truth data. By integrating vehicle dynamics in a configuration tool, and by using path-tracing in an external rendering engine to render the scenes, a system is created that allows ongoing and flexible creation of highly realistic traffic images. For all images, ground truth data is provided for depth, optical flow, surface normals and semantic scene labeling. Sequences that are produced with this system are more varied and closer to natural images than other synthetic datasets before.",
"title": ""
},
{
"docid": "neg:1840343_16",
"text": "Generative adversarial networks (GANs) implicitly learn the probability distribution of a dataset and can draw samples from the distribution. This paper presents, Tabular GAN (TGAN), a generative adversarial network which can generate tabular data like medical or educational records. Using the power of deep neural networks, TGAN generates high-quality and fully synthetic tables while simultaneously generating discrete and continuous variables. When we evaluate our model on three datasets, we find that TGAN outperforms conventional statistical generative models in both capturing the correlation between columns and scaling up for large datasets.",
"title": ""
},
{
"docid": "neg:1840343_17",
"text": "Blood pressure oscillometric waveforms behave as amplitude modulated nonlinear signals with frequency fluctuations. Their oscillating nature can be better analyzed by the digital Taylor-Fourier transform (DTFT), recently proposed for phasor estimation in oscillating power systems. Based on a relaxed signal model that includes Taylor components greater than zero, the DTFT is able to estimate not only the oscillation itself, as does the digital Fourier transform (DFT), but also its derivatives included in the signal model. In this paper, an oscillometric waveform is analyzed with the DTFT, and its zeroth and first oscillating harmonics are illustrated. The results show that the breathing activity can be separated from the cardiac one through the critical points of the first component, determined by the zero crossings of the amplitude derivatives estimated from the third Taylor order model. On the other hand, phase derivative estimates provide the fluctuations of the cardiac frequency and its derivative, new parameters that could improve the precision of the systolic and diastolic blood pressure assignment. The DTFT envelope estimates uniformly converge from K=3, substantially improving the harmonic separation of the DFT.",
"title": ""
},
{
"docid": "neg:1840343_18",
"text": "This paper provides a review of recent developments in speech recognition research. The concept of sources of knowledge is introduced and the use of knowledge to generate and verify hypotheses is discussed. The difficulties that arise in the construction of different types of speech recognition systems are discussed and the structure and performance of several such systems is presented. Aspects of component subsystems at the acoustic, phonetic, syntactic, and semantic levels are presented. System organizations that are required for effective interaction and use of various component subsystems in the presence of error and ambiguity are discussed.",
"title": ""
}
] |
1840344 | Power grid's Intelligent Stability Analysis based on big data technology | [
{
"docid": "pos:1840344_0",
"text": "This paper describes an online dynamic security assessment scheme for large-scale interconnected power systems using phasor measurements and decision trees. The scheme builds and periodically updates decision trees offline to decide critical attributes as security indicators. Decision trees provide online security assessment and preventive control guidelines based on real-time measurements of the indicators from phasor measurement units. The scheme uses a new classification method involving each whole path of a decision tree instead of only classification results at terminal nodes to provide more reliable security assessment results for changes in system conditions. The approaches developed are tested on a 2100-bus, 2600-line, 240-generator operational model of the Entergy system. The test results demonstrate that the proposed scheme is able to identify key security indicators and give reliable and accurate online dynamic security predictions.",
"title": ""
}
] | [
{
"docid": "neg:1840344_0",
"text": "The capturing of angular and spatial information of the scene using single camera is made possible by new emerging technology referred to as plenoptic camera. Both angular and spatial information, enable various post-processing applications, e.g. refocusing, synthetic aperture, super-resolution, and 3D scene reconstruction. In the past, multiple traditional cameras were used to capture the angular and spatial information of the scene. However, recently with the advancement in optical technology, plenoptic cameras have been introduced to capture the scene information. In a plenoptic camera, a lenslet array is placed between the main lens and the image sensor that allows multiplexing of the spatial and angular information onto a single image, also referred to as plenoptic image. The placement of the lenslet array relative to the main lens and the image sensor, results in two different optical designs of a plenoptic camera, also referred to as plenoptic 1.0 and plenoptic 2.0. In this work, we present a novel dataset captured with plenoptic 1.0 (Lytro Illum) and plenoptic 2.0 (Raytrix R29) cameras for the same scenes under the same conditions. The dataset provides the benchmark contents for various research and development activities for plenoptic images.",
"title": ""
},
{
"docid": "neg:1840344_1",
"text": "It is necessary and essential to discovery protein function from the novel primary sequences. Wet lab experimental procedures are not only time-consuming, but also costly, so predicting protein structure and function reliably based only on amino acid sequence has significant value. TATA-binding protein (TBP) is a kind of DNA binding protein, which plays a key role in the transcription regulation. Our study proposed an automatic approach for identifying TATA-binding proteins efficiently, accurately, and conveniently. This method would guide for the special protein identification with computational intelligence strategies. Firstly, we proposed novel fingerprint features for TBP based on pseudo amino acid composition, physicochemical properties, and secondary structure. Secondly, hierarchical features dimensionality reduction strategies were employed to improve the performance furthermore. Currently, Pretata achieves 92.92% TATA-binding protein prediction accuracy, which is better than all other existing methods. The experiments demonstrate that our method could greatly improve the prediction accuracy and speed, thus allowing large-scale NGS data prediction to be practical. A web server is developed to facilitate the other researchers, which can be accessed at http://server.malab.cn/preTata/ .",
"title": ""
},
{
"docid": "neg:1840344_2",
"text": "In this paper, we focus on the problem of preserving the privacy of sensitive relationships in graph data. We refer to the problem of inferring sensitive relationships from anonymized graph data as link re-identification. We propose five different privacy preservation strategies, which vary in terms of the amount of data removed (and hence their utility) and the amount of privacy preserved. We assume the adversary has an accurate predictive model for links, and we show experimentally the success of different link re-identification strategies under varying structural characteristics of the data.",
"title": ""
},
{
"docid": "neg:1840344_3",
"text": "In this paper we propose an asymmetric semantic similarity among instances within an ontology. We aim to define a measurement of semantic similarity that exploit as much as possible the knowledge stored in the ontology taking into account different hints hidden in the ontology definition. The proposed similarity measurement considers different existing similarities, which we have combined and extended. Moreover, the similarity assessment is explicitly parameterised according to the criteria induced by the context. The parameterisation aims to assist the user in the decision making pertaining to similarity evaluation, as the criteria can be refined according to user needs. Experiments and an evaluation of the similarity assessment are presented showing the efficiency of the method.",
"title": ""
},
{
"docid": "neg:1840344_4",
"text": "Context-aware Web services are emerging as a promising technology for the electronic businesses in mobile and pervasive environments. Unfortunately, complex context-aware services are still hard to build. In this paper, we present a modeling language for the model-driven development of context-aware Web services based on the Unified Modeling Language (UML). Specifically, we show how UML can be used to specify information related to the design of context-aware services. We present the abstract syntax and notation of the language and illustrate its usage using an example service. Our language offers significant design flexibility that considerably simplifies the development of context-aware Web services.",
"title": ""
},
{
"docid": "neg:1840344_5",
"text": "Under today's bursty web traffic, the fine-grained per-container control promises more efficient resource provisioning for web services and better resource utilization in cloud datacenters. In this paper, we present Two-stage Stochastic Programming Resource A llocator (2SPRA). It optimizes resource provisioning for containerized n-tier web services in accordance with fluctuations of incoming workload to accommodate predefined SLOs on response latency. In particular, 2SPRA is capable of minimizing resource over-provisioning by addressing dynamics of web traffic as workload uncertainty in a native stochastic optimization model. Using special-purpose OpenOpt optimization framework, we fully implement 2SPRA in Python and evaluate it against three other existing allocation schemes, in a Docker-based CoreOS Linux VMs on Amazon EC2. We generate workloads based on four real-world web traces of various traffic variations: AOL, WorldCup98, ClarkNet, and NASA. Our experimental results demonstrate that 2SPRA achieves the minimum resource over-provisioning outperforming other schemes. In particular, 2SPRA allocates only 6.16 percent more than application's actual demand on average and at most 7.75 percent in the worst case. It achieves 3x further reduction in total resources provisioned compared to other schemes delivering overall cost-savings of 53.6 percent on average and up to 66.8 percent. Furthermore, 2SPRA demonstrates consistency in its provisioning decisions and robust responsiveness against workload fluctuations.",
"title": ""
},
{
"docid": "neg:1840344_6",
"text": "Modern mobile devices provide several functionalities and new ones are being added at a breakneck pace. Unfortunately browsing the menu and accessing the functions of a mobile phone is not a trivial task for visual impaired users. Low vision people typically rely on screen readers and voice commands. However, depending on the situations, screen readers are not ideal because blind people may need their hearing for safety, and automatic recognition of voice commands is challenging in noisy environments. Novel smart watches technologies provides an interesting opportunity to design new forms of user interaction with mobile phones. We present our first works towards the realization of a system, based on the combination of a mobile phone and a smart watch for gesture control, for assisting low vision people during daily life activities. More specifically we propose a novel approach for gesture recognition which is based on global alignment kernels and is shown to be effective in the challenging scenario of user independent recognition. This method is used to build a gesture-based user interaction module and is embedded into a system targeted to visually impaired which will also integrate several other modules. We present two of them: one for identifying wet floor signs, the other for automatic recognition of predefined logos.",
"title": ""
},
{
"docid": "neg:1840344_7",
"text": "PHP is the most popular scripting language for web applications. Because no native solution to compile or protect PHP scripts exists, PHP applications are usually shipped as plain source code which is easily understood or copied by an adversary. In order to prevent such attacks, commercial products such as ionCube, Zend Guard, and Source Guardian promise a source code protection. In this paper, we analyze the inner working and security of these tools and propose a method to recover the source code by leveraging static and dynamic analysis techniques. We introduce a generic approach for decompilation of obfuscated bytecode and show that it is possible to automatically recover the original source code of protected software. As a result, we discovered previously unknown vulnerabilities and backdoors in 1 million lines of recovered source code of 10 protected applications.",
"title": ""
},
{
"docid": "neg:1840344_8",
"text": "Understanding physical phenomena is a key competence that enables humans and animals to act and interact under uncertain perception in previously unseen environments containing novel object and their configurations. Developmental psychology has shown that such skills are acquired by infants from observations at a very early stage. In this paper, we contrast a more traditional approach of taking a modelbased route with explicit 3D representations and physical simulation by an end-to-end approach that directly predicts stability and related quantities from appearance. We ask the question if and to what extent and quality such a skill can directly be acquired in a data-driven way— bypassing the need for an explicit simulation. We present a learning-based approach based on simulated data that predicts stability of towers comprised of wooden blocks under different conditions and quantities related to the potential fall of the towers. The evaluation is carried out on synthetic data and compared to human judgments on the same stimuli.",
"title": ""
},
{
"docid": "neg:1840344_9",
"text": "Contents Preface vii Chapter 1. Graph Theory in the Information Age 1 1.1. Introduction 1 1.2. Basic definitions 3 1.3. Degree sequences and the power law 6 1.4. History of the power law 8 1.5. Examples of power law graphs 10 1.6. An outline of the book 17 Chapter 2. Old and New Concentration Inequalities 21 2.1. The binomial distribution and its asymptotic behavior 21 2.2. General Chernoff inequalities 25 2.3. More concentration inequalities 30 2.4. A concentration inequality with a large error estimate 33 2.5. Martingales and Azuma's inequality 35 2.6. General martingale inequalities 38 2.7. Supermartingales and Submartingales 41 2.8. The decision tree and relaxed concentration inequalities 46 Chapter 3. A Generative Model — the Preferential Attachment Scheme 55 3.1. Basic steps of the preferential attachment scheme 55 3.2. Analyzing the preferential attachment model 56 3.3. A useful lemma for rigorous proofs 59 3.4. The peril of heuristics via an example of balls-and-bins 60 3.5. Scale-free networks 62 3.6. The sharp concentration of preferential attachment scheme 64 3.7. Models for directed graphs 70 Chapter 4. Duplication Models for Biological Networks 75 4.1. Biological networks 75 4.2. The duplication model 76 4.3. Expected degrees of a random graph in the duplication model 77 4.4. The convergence of the expected degrees 79 4.5. The generating functions for the expected degrees 83 4.6. Two concentration results for the duplication model 84 4.7. Power law distribution of generalized duplication models 89 Chapter 5. Random Graphs with Given Expected Degrees 91 5.1. The Erd˝ os-Rényi model 91 5.2. The diameter of G n,p 95 iii iv CONTENTS 5.3. A general random graph model 97 5.4. Size, volume and higher order volumes 97 5.5. Basic properties of G(w) 100 5.6. Neighborhood expansion in random graphs 103 5.7. A random power law graph model 107 5.8. Actual versus expected degree sequence 109 Chapter 6. The Rise of the Giant Component 113 6.1. No giant component if w < 1? 114 6.2. Is there a giant component if˜w > 1? 115 6.3. No giant component if˜w < 1? 116 6.4. Existence and uniqueness of the giant component 117 6.5. A lemma on neighborhood growth 126 6.6. The volume of the giant component 129 6.7. Proving the volume estimate of the giant component 131 6.8. Lower bounds for the volume of the giant component 136 6.9. The complement of the giant component and its size 138 6.10. …",
"title": ""
},
{
"docid": "neg:1840344_10",
"text": "OBJECTIVE\nThis study was undertaken to determine the effects of rectovaginal fascia reattachment on symptoms and vaginal topography.\n\n\nSTUDY DESIGN\nStandardized preoperative and postoperative assessments of vaginal topography (the Pelvic Organ Prolapse staging system of the International Continence Society, American Urogynecologic Society, and Society of Gynecologic Surgeons) and 5 symptoms commonly attributed to rectocele were used to evaluate 66 women who underwent rectovaginal fascia reattachment for rectocele repair. All patients had abnormal fluoroscopic results with objective rectocele formation.\n\n\nRESULTS\nSeventy percent (n = 46) of the women were objectively assessed at 1 year. Preoperative symptoms included the following: protrusion, 85% (n = 39); difficult defecation, 52% (n = 24); constipation, 46% (n = 21); dyspareunia, 26% (n = 12); and manual evacuation, 24% (n = 11). Posterior vaginal topography was considered abnormal in all patients with a mean Ap point (a point located in the midline of the posterior vaginal wall 3 cm proximal to the hymen) value of -0.5 cm (range, -2 to 3 cm). Postoperative symptom resolution was as follows: protrusion, 90% (35/39; P <.0005); difficult defecation, 54% (14/24; P <.0005); constipation, 43% (9/21; P =.02); dyspareunia, 92% (11/12; P =.01); and manual evacuation, 36% (4/11; P =.125). Vaginal topography at 1 year was improved, with a mean Ap point value of -2 cm (range, -3 to 2 cm).\n\n\nCONCLUSION\nThis technique of rectocele repair improves vaginal topography and alleviates 3 symptoms commonly attributed to rectoceles. It is relatively ineffective for relief of manual evacuation, and constipation is variably decreased.",
"title": ""
},
{
"docid": "neg:1840344_11",
"text": "A penetrating head injury belongs to the most severe traumatic brain injuries, in which communication can arise between the intracranial cavity and surrounding environment. The authors present a literature review and typical case reports of a penetrating head injury in children. The list of patients treated at the neurosurgical department in the last 5 years for penetrating TBI is briefly referred. Rapid transfer to the specialized center with subsequent urgent surgical treatment is the important point in the treatment algorithm. It is essential to clean the wound very properly with all the foreign material during the surgery and to close the dura with a water-tight suture. Wide-spectrum antibiotics are of great use. In case of large-extent brain damage, the use of anticonvulsants is recommended. The prognosis of such severe trauma could be influenced very positively by a good medical care organization; obviously, the extent of brain tissue laceration is the limiting factor.",
"title": ""
},
{
"docid": "neg:1840344_12",
"text": "Robotic-assisted laparoscopic prostatectomy is a surgical procedure performed to eradicate prostate cancer. Use of robotic assistance technology allows smaller incisions than the traditional laparoscopic approach and results in better patient outcomes, such as less blood loss, less pain, shorter hospital stays, and better postoperative potency and continence rates. This surgical approach creates unique challenges in patient positioning for the perioperative team because the patient is placed in the lithotomy with steep Trendelenburg position. Incorrect positioning can lead to nerve damage, pressure ulcers, and other complications. Using a special beanbag positioning device made specifically for use with this severe position helps prevent these complications.",
"title": ""
},
{
"docid": "neg:1840344_13",
"text": "Mobile-edge computing (MEC) has recently emerged as a prominent technology to liberate mobile devices from computationally intensive workloads, by offloading them to the proximate MEC server. To make offloading effective, the radio and computational resources need to be dynamically managed, to cope with the time-varying computation demands and wireless fading channels. In this paper, we develop an online joint radio and computational resource management algorithm for multi-user MEC systems, with the objective of minimizing the long-term average weighted sum power consumption of the mobile devices and the MEC server, subject to a task buffer stability constraint. Specifically, at each time slot, the optimal CPU-cycle frequencies of the mobile devices are obtained in closed forms, and the optimal transmit power and bandwidth allocation for computation offloading are determined with the Gauss-Seidel method; while for the MEC server, both the optimal frequencies of the CPU cores and the optimal MEC server scheduling decision are derived in closed forms. Besides, a delay-improved mechanism is proposed to reduce the execution delay. Rigorous performance analysis is conducted for the proposed algorithm and its delay-improved version, indicating that the weighted sum power consumption and execution delay obey an $\\left [{O\\left ({1 / V}\\right), O\\left ({V}\\right) }\\right ]$ tradeoff with $V$ as a control parameter. Simulation results are provided to validate the theoretical analysis and demonstrate the impacts of various parameters.",
"title": ""
},
{
"docid": "neg:1840344_14",
"text": "This study examined level of engagement with Disney Princess media/products as it relates to gender-stereotypical behavior, body esteem (i.e. body image), and prosocial behavior during early childhood. Participants consisted of 198 children (Mage = 58 months), who were tested at two time points (approximately 1 year apart). Data consisted of parent and teacher reports, and child observations in a toy preference task. Longitudinal results revealed that Disney Princess engagement was associated with more female gender-stereotypical behavior 1 year later, even after controlling for initial levels of gender-stereotypical behavior. Parental mediation strengthened associations between princess engagement and adherence to female gender-stereotypical behavior for both girls and boys, and for body esteem and prosocial behavior for boys only.",
"title": ""
},
{
"docid": "neg:1840344_15",
"text": "In order to organize the large number of products listed in e-commerce sites, each product is usually assigned to one of the multi-level categories in the taxonomy tree. It is a time-consuming and difficult task for merchants to select proper categories within thousan ds of options for the products they sell. In this work, we propose an automatic classification tool to predict the matching category for a given product title and description. We used a combinatio n of two different neural models, i.e., deep belief nets and deep autoencoders, for both titles and descriptions. We implemented a selective reconstruction approach for the input layer during the training of the deep neural networks, in order to scale-out for large-sized sparse feature vectors. GPUs are utilized in order to train neural networks in a reasonable time. We have trained o ur m dels for around 150 million products with a taxonomy tree with at most 5 levels that contains 28,338 leaf categories. Tests with millions of products show that our first prediction s matches 81% of merchants’ assignments, when “others” categories are excluded.",
"title": ""
},
{
"docid": "neg:1840344_16",
"text": "Joint torque sensing represents one of the foundations and vital components of modern robotic systems that target to match closely the physical interaction performance of biological systems through the realization of torque controlled actuators. However, despite decades of studies on the development of different torque sensors, the design of accurate and reliable torque sensors still remains challenging for the majority of the robotics community preventing the use of the technology. This letter proposes and evaluates two joint torque sensing elements based on strain gauge and deflection-encoder principles. The two designs are elaborated and their performance from different perspectives and practical factors are evaluated including resolution, nonaxial moments load crosstalk, torque ripple rejection, bandwidth, noise/residual offset level, and thermal/time dependent signal drift. The letter reveals the practical details and the pros and cons of each sensor principle providing valuable contributions into the field toward the realization of higher fidelity joint torque sensing performance.",
"title": ""
},
{
"docid": "neg:1840344_17",
"text": "Natural Language Generation (NLG) is defined as the systematic approach for producing human understandable natural language text based on nontextual data or from meaning representations. This is a significant area which empowers human-computer interaction. It has also given rise to a variety of theoretical as well as empirical approaches. This paper intends to provide a detailed overview and a classification of the state-of-the-art approaches in Natural Language Generation. The paper explores NLG architectures and tasks classed under document planning, micro-planning and surface realization modules. Additionally, this paper also identifies the gaps existing in the NLG research which require further work in order to make NLG a widely usable technology.",
"title": ""
},
{
"docid": "neg:1840344_18",
"text": "This paper addresses the problems that must be considered if computers are going to treat their users as individuals with distinct personalities, goals, and so forth. It first outlines the issues, and then proposes stereotypes as a useful mechanism for building models of individual users on the basis of a small amount of information about them. In order to build user models quickly, a large amount of uncertain knowledge must be incorporated into the models. The issue of how to resolve the conflicts that will arise among such inferences is discussed. A system, Grundy, is described that bunds models of its users, with the aid of stereotypes, and then exploits those models to guide it in its task, suggesting novels that people may find interesting. If stereotypes are to be useful to Grundy, they must accurately characterize the users of the system. Some techniques to modify stereotypes on the basis of experience are discussed. An analysis of Grundy's performance shows that its user models are effective in guiding its performance.",
"title": ""
}
] |
1840345 | Causal video object segmentation from persistence of occlusions | [
{
"docid": "pos:1840345_0",
"text": "Layered models provide a compelling approach for estimating image motion and segmenting moving scenes. Previous methods, however, have failed to capture the structure of complex scenes, provide precise object boundaries, effectively estimate the number of layers in a scene, or robustly determine the depth order of the layers. Furthermore, previous methods have focused on optical flow between pairs of frames rather than longer sequences. We show that image sequences with more frames are needed to resolve ambiguities in depth ordering at occlusion boundaries; temporal layer constancy makes this feasible. Our generative model of image sequences is rich but difficult to optimize with traditional gradient descent methods. We propose a novel discrete approximation of the continuous objective in terms of a sequence of depth-ordered MRFs and extend graph-cut optimization methods with new “moves” that make joint layer segmentation and motion estimation feasible. Our optimizer, which mixes discrete and continuous optimization, automatically determines the number of layers and reasons about their depth ordering. We demonstrate the value of layered models, our optimization strategy, and the use of more than two frames on both the Middlebury optical flow benchmark and the MIT layer segmentation benchmark.",
"title": ""
}
] | [
{
"docid": "neg:1840345_0",
"text": "An enhanced automated material handling system (AMHS) that uses a local FOUP buffer at each tool is presented as a method of enabling lot size reduction and parallel metrology sampling in the photolithography (litho) bay. The local FOUP buffers can be integrated with current OHT AMHS systems in existing fabs with little or no change to the AMHS or process equipment. The local buffers enhance the effectiveness of the OHT by eliminating intermediate moves to stockers, increasing the move rate capacity by 15-20%, and decreasing the loadport exchange time to 30 seconds. These enhancements can enable the AMHS to achieve the high move rates compatible with lot size reduction down to 12-15 wafers per FOUP. The implementation of such a system in a photolithography bay could result in a 60-74% reduction in metrology delay time, which is the time between wafer exposure at a litho tool and collection of metrology and inspection data.",
"title": ""
},
{
"docid": "neg:1840345_1",
"text": "As one of the most influential social media platforms, microblogging is becoming increasingly popular in the last decades. Each day a large amount of events appear and spread in microblogging. The spreading of events and corresponding comments on them can greatly influence the public opinion. It is practical important to discover new emerging events in microblogging and predict their future popularity. Traditional event detection and information diffusion models cannot effectively handle our studied problem, because most existing methods focus only on event detection but ignore to predict their future trend. In this paper, we propose a new approach to detect burst novel events and predict their future popularity simultaneously. Specifically, we first detect events from online microblogging stream by utilizing multiple types of information, i.e., term frequency, and user's social relation. Meanwhile, the popularity of detected event is predicted through a proposed diffusion model which takes both the content and user information of the event into account. Extensive evaluations on two real-world datasets demonstrate the effectiveness of our approach on both event detection and their popularity",
"title": ""
},
{
"docid": "neg:1840345_2",
"text": "Superficial dorsal penile vein thrombosis was diagnosed 8 times in 7 patients between 19 and 40 years old (mean age 27 years). All patients related the onset of the thrombosis to vigorous sexual intercourse. No other etiological medications, drugs or constricting devices were implicated. Three patients were treated acutely with anti-inflammatory medications, while 4 were managed expectantly. The mean interval to resolution of symptoms was 7 weeks. Followup ranged from 3 to 30 months (mean 11) at which time all patients noticed normal erectile function. Only 1 patient had recurrent thrombosis 3 months after the initial episode, again related to intercourse. We conclude that this is a benign self-limited condition. Anti-inflammatory agents are useful for acute discomfort but they do not affect the rate of resolution.",
"title": ""
},
{
"docid": "neg:1840345_3",
"text": "Speech-language pathologists tend to rely on the noninstrumental swallowing evaluation in making recommendations about a patient’s diet and management plan. The present study was designed to examine the sensitivity and specificity of the accuracy of using the chin-down posture during the clinical/bedside swallowing assessment. In 15 patients with acute stroke and clinically suspected oropharyngeal dysphagia, the correlation between clinical and videofluoroscopic findings was examined. Results identified that there is a difference in outcome prediction using the chin-down posture during the clinical/bedside assessment of swallowing compared to assessment by videofluoroscopy. Results are discussed relative to statistical and clinical perspectives, including site of lesion and factors to be considered in the design of an overall treatment plan for a patient with disordered swallowing.",
"title": ""
},
{
"docid": "neg:1840345_4",
"text": "We address the problem of semantic segmentation: classifying each pixel in an image according to the semantic class it belongs to (e.g. dog, road, car). Most existing methods train from fully supervised images, where each pixel is annotated by a class label. To reduce the annotation effort, recently a few weakly supervised approaches emerged. These require only image labels indicating which classes are present. Although their performance reaches a satisfactory level, there is still a substantial gap between the accuracy of fully and weakly supervised methods. We address this gap with a novel active learning method specifically suited for this setting. We model the problem as a pairwise CRF and cast active learning as finding its most informative nodes. These nodes induce the largest expected change in the overall CRF state, after revealing their true label. Our criterion is equivalent to maximizing an upper-bound on accuracy gain. Experiments on two data-sets show that our method achieves 97% percent of the accuracy of the corresponding fully supervised model, while querying less than 17% of the (super-)pixel labels.",
"title": ""
},
{
"docid": "neg:1840345_5",
"text": "A correlational study examined relationships between motivational orientation, self-regulated learning, and classroom academic performance for 173 seventh graders from eight science and seven English classes. A self-report measure of student self-efficacy, intrinsic value, test anxiety, self-regulation, and use of learning strategies was administered, and performance data were obtained from work on classroom assignments. Self-efficacy and intrinsic value were positively related to cognitive engagement and performance. Regression analyses revealed that, depending on the outcome measure, self-regulation, self-efficacy, and test anxiety emerged as the best predictors of performance. Intrinsic value did not have a direct influence on performance but was strongly related to self-regulation and cognitive strategy use, regardless of prior achievement level. The implications of individual differences in motivational orientation for cognitive engagement and self-regulation in the classroom are discussed.",
"title": ""
},
{
"docid": "neg:1840345_6",
"text": "Recent developments in digital technologies bring about considerable business opportunities but also impose significant challenges on firms in all industries. While some industries, e.g., newspapers, have already profoundly reorganized the mechanisms of value creation, delivery, and capture during the course of digitalization (Karimi & Walter, 2015, 2016), many process-oriented and asset intensive industries have not yet fully evaluated and exploited the potential applications (Rigby, 2014). Although the process industries have successfully used advancements in technologies to optimize processes in the past (Kim et al., 2011), digitalization poses an unprecedented shift in technology that exceeds conventional technological evolution (Svahn et al., 2017). Driven by augmented processing power, connectivity of devices (IoT), advanced data analytics, and sensor technology, innovation activities in the process industries now break away from established innovation paths (Svahn et al., 2017; Tripsas, 2009). In contrast to prior innovations that were primarily bound to physical devices, new products are increasingly embedded into systems of value creation that span the physical and digital world (Parmar et al., 2014; Rigby, 2014; Yoo et al., 2010a). On this new playing field, firms and researchers are jointly interested in the organizational characteristics and capabilities that are required to gain a competitive advantage (e.g. Fink, 2011). Whereas prior studies cover the effect of digital transformation on innovation in various industries like newspaper (Karimi and Walter, 2015, 2016), automotive (Henfridsson and Yoo, 2014; Svahn et al., 2017), photography (Tripsas, 2009), and manufacturing (Jonsson et al., 2008), there is a relative dearth of studies that cover the impact of digital transformation in the process industries (Westergren and Holmström, 2012). The process industries are characterized by asset and research intensity, strong integration into physical locations, and often include value chains that are complex and feature aspects of rigidity (Lager Research Paper Digitalization in the process industries – Evidence from the German water industry",
"title": ""
},
{
"docid": "neg:1840345_7",
"text": "Individual graphene oxide sheets subjected to chemical reduction were electrically characterized as a function of temperature and external electric fields. The fully reduced monolayers exhibited conductivities ranging between 0.05 and 2 S/cm and field effect mobilities of 2-200 cm2/Vs at room temperature. Temperature-dependent electrical measurements and Raman spectroscopic investigations suggest that charge transport occurs via variable range hopping between intact graphene islands with sizes on the order of several nanometers. Furthermore, the comparative study of multilayered sheets revealed that the conductivity of the undermost layer is reduced by a factor of more than 2 as a consequence of the interaction with the Si/SiO2 substrate.",
"title": ""
},
{
"docid": "neg:1840345_8",
"text": "This essay extends Simon's arguments in the Sciences of the Artificial to a critical examination of how theorizing in Information Technology disciplines should occur. The essay is framed around a number of fundamental questions that relate theorizing in the artificial sciences to the traditions of the philosophy of science. Theorizing in the artificial sciences is contrasted with theorizing in other branches of science and the applicability of the scientific method is questioned. The paper argues that theorizing should be considered in a holistic manner that links two modes of theorizing: an interior mode with the how of artifact construction studied and an exterior mode with the what of existing artifacts studied. Unlike some representations in the design science movement the paper argues that the study of artifacts once constructed can not be passed back uncritically to the methods of traditional science. Seven principles for creating knowledge in IT disciplines are derived: (i) artifact system centrality; (ii) artifact purposefulness; (iii) need for design theory; (iv) induction and abduction in theory building; (v) artifact construction as theory building; (vi) interior and exterior modes for theorizing; and (viii) issues with generality. The implicit claim is that consideration of these principles will improve knowledge creation and theorizing in design disciplines, for both design science researchers and also for researchers using more traditional methods. Further, attention to these principles should lead to the creation of more useful and relevant knowledge.",
"title": ""
},
{
"docid": "neg:1840345_9",
"text": "Fluidic circuits made up of tiny chambers, conduits, and membranes can be fabricated in soft substrates to realize pressure-based sequential logic functions. Additional chambers in the same substrate covered with thin membranes can function as bubble-like tactile features. Sequential addressing of bubbles with fluidic logic enables just two external electronic valves to control of any number of tactile features by \"clocking in\" pressure states one at a time. But every additional actuator added to a shift register requires an additional clock pulse to address, so that the display refresh rate scales inversely with the number of actuators in an array. In this paper, we build a model of a fluidic logic circuit that can be used for sequential addressing of bubble actuators. The model takes the form of a hybrid automaton combining the discrete dynamics of valve switching and the continuous dynamics of compressible fluid flow through fluidic resistors (conduits) and capacitors (chambers). When parameters are set according to the results of system identification experiments on a physical prototype, pressure trajectories and propagation delays predicted by simulation of the hybrid automaton compare favorably to experiment. The propagation delay in turn determines the maximum clock rate and associated refresh rate for a refreshable braille display intended for rendering a full page of braille text or tactile graphics.",
"title": ""
},
{
"docid": "neg:1840345_10",
"text": "The role of e-learning technologies entirely depends on the acceptance and execution of required-change in the thinking and behaviour of the users of institutions. The research are constantly reporting that many e-learning projects are falling short of their objectives due to many reasons but on the top is the user resistance to change according to the digital requirements of new era. It is argued that the suitable way for change management in e-learning environment is the training and persuading of users with a view to enhance their digital literacy and thus gradually changing the users’ attitude in positive direction. This paper discusses change management in transition to e-learning system considering pedagogical, cost and technical implications. It also discusses challenges and opportunities for integrating these technologies in higher learning institutions with examples from Turkey GATA (Gülhane Askeri Tıp Akademisi-Gülhane Military Medical Academy).",
"title": ""
},
{
"docid": "neg:1840345_11",
"text": "The authors review research on police effectiveness in reducing crime, disorder, and fear in the context of a typology of innovation in police practices. That typology emphasizes two dimensions: one concerning the diversity of approaches, and the other, the level of focus. The authors find that little evidence supports the standard model of policing—low on both of these dimensions. In contrast, research evidence does support continued investment in police innovations that call for greater focus and tailoring of police efforts, combined with an expansion of the tool box of policing beyond simple law enforcement. The strongest evidence of police effectiveness in reducing crime and disorder is found in the case of geographically focused police practices, such as hot-spots policing. Community policing practices are found to reduce fear of crime, but the authors do not find consistent evidence that community policing (when it is implemented without models of problem-oriented policing) affects either crime or disorder. A developing body of evidence points to the effectiveness of problemoriented policing in reducing crime, disorder, and fear. More generally, the authors find that many policing practices applied broadly throughout the United States either have not been the subject of systematic research or have been examined in the context of research designs that do not allow practitioners or policy makers to draw very strong conclusions.",
"title": ""
},
{
"docid": "neg:1840345_12",
"text": "Tuberculosis, also called TB, is currently a major health hazard due to multidrug-resistant forms of bacilli. Global efforts are underway to eradicate TB using new drugs with new modes of action, higher activity, and fewer side effects in combination with vaccines. For this reason, unexplored new sources and previously explored sources were examined and around 353 antimycobacterial compounds (Nat Prod Rep 2007; 24: 278-297) 7 have been previously reported. To develop drugs from these new sources, additional work is required for preclinical and clinical results. Since ancient times, different plant part extracts have been used as traditional medicines against diseases including tuberculosis. This knowledge may be useful in developing future powerful drugs. Plant natural products are again becoming important in this regard. In this review, we report 127 antimycobacterial compounds and their antimycobacterial activities. Of these, 27 compounds had a minimum inhibitory concentration of < 10 µg/mL. In some cases, the mechanism of activity has been determined. We hope that some of these compounds may eventually develop into effective new drugs against tuberculosis.",
"title": ""
},
{
"docid": "neg:1840345_13",
"text": "This paper extends and contributes to emerging debates on the validation of interpretive research (IR) in management accounting. We argue that IR has the potential to produce not only subjectivist, emic understandings of actors’ meanings, but also explanations, characterised by a certain degree of ‘‘thickness”. Mobilising the key tenets of the modern philosophical theory of explanation and the notion of abduction, grounded in pragmatist epistemology, we explicate how explanations may be developed and validated, yet remaining true to the core premises of IR. We focus on the intricate relationship between two arguably central aspects of validation in IR, namely authenticity and plausibility. Working on the assumption that validation is an important, but potentially problematic concern in all serious scholarly research, we explore whether and how validation efforts are manifest in IR using two case studies as illustrative examples. Validation is seen as an issue of convincing readers of the authenticity of research findings whilst simultaneously ensuring that explanations are deemed plausible. Whilst the former is largely a matter of preserving the emic qualities of research accounts, the latter is intimately linked to the process of abductive reasoning, whereby different theories are applied to advance thick explanations. This underscores the view of validation as a process, not easily separated from the ongoing efforts of researchers to develop explanations as research projects unfold and far from reducible to mere technicalities of following pre-specified criteria presumably minimising various biases. These properties detract from a view of validation as conforming to prespecified, stable, and uniform criteria and allow IR to move beyond the ‘‘crisis of validity” arguably prevailing in the social sciences. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840345_14",
"text": "Doxorubicin (DOX) is a very effective anticancer agent. However, in its pure form, its application is limited by significant cardiotoxic side effects. The purpose of this study was to develop a controllably activatable chemotherapy prodrug of DOX created by blocking its free amine group with a biotinylated photocleavable blocking group (PCB). An n-hydroxy succunamide protecting group on the PCB allowed selective binding at the DOX active amine group. The PCB included an ortho-nitrophenyl group for photo cleavability and a water-soluble glycol spacer arm ending in a biotin group for enhanced membrane interaction. This novel DOX-PCB prodrug had a 200-fold decrease in cytotoxicity compared to free DOX and could release active DOX upon exposure to UV light at 350 nm. Unlike DOX, DOX-PCB stayed in the cell cytoplasm, did not enter the nucleus, and did not stain the exposed DNA during mitosis. Human liver microsome incubation with DOX-PCB indicated stability against liver metabolic breakdown. The development of the DOX-PCB prodrug demonstrates the possibility of using light as a method of prodrug activation in deep internal tissues without relying on inherent physical or biochemical differences between the tumor and healthy tissue for use as the trigger.",
"title": ""
},
{
"docid": "neg:1840345_15",
"text": "The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields has become less commonplace. In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. We conclude by highlighting shared themes that may be key for advancing future research in both fields.",
"title": ""
},
{
"docid": "neg:1840345_16",
"text": "The balance between facilitation and competition is likely to change with age due to the dynamic nature of nutrient, water and carbon cycles, and light availability during stand development. These processes have received attention in harsh, arid, semiarid and alpine ecosystems but are rarely examined in more productive communities, in mixed-species forest ecosystems or in long-term experiments spanning more than a decade. The aim of this study was to examine how inter- and intraspecific interactions between Eucalyptus globulus Labill. mixed with Acacia mearnsii de Wildeman trees changed with age and productivity in a field experiment in temperate south-eastern Australia. Spatially explicit neighbourhood indices were calculated to quantify tree interactions and used to develop growth models to examine how the tree interactions changed with time and stand productivity. Interspecific influences were usually less negative than intraspecific influences, and their difference increased with time for E. globulus and decreased with time for A. mearnsii. As a result, the growth advantages of being in a mixture increased with time for E. globulus and decreased with time for A. mearnsii. The growth advantage of being in a mixture also decreased for E. globulus with increasing stand productivity, showing that spatial as well as temporal dynamics in resource availability influenced the magnitude and direction of plant interactions.",
"title": ""
},
{
"docid": "neg:1840345_17",
"text": "Being Scrum the agile software development framework most commonly used in the software industry, its applicability is attracting great attention to the academia. That is why this topic is quite often included in computer science and related university programs. In this article, we present a course design of a Software Engineering course where an educational framework and an open-source agile project management tool were used to develop real-life projects by undergraduate students. During the course, continuous guidance was given by the teaching staff to facilitate the students' learning of Scrum. Results indicate that students find it easy to use the open-source tool and helpful to apply Scrum to a real-life project. However, the unavailability of the client and conflicts among the team members have negative impact on the realization of projects. The guidance given to students along the course helped identify five common issues faced by students through the learning process.",
"title": ""
},
{
"docid": "neg:1840345_18",
"text": "In an online survey with two cohorts (2009 and 2011) of undergraduates in dating relationshi ps, we examined how attachment was related to communication technology use within romantic relation ships. Participants reported on their attachment style and frequency of in-person communication as well as phone, text messaging, social network site (SNS), and electronic mail usage with partners. Texting and SNS communication were more frequent in 2011 than 2009. Attachment avoidance was related to less frequent phone use and texting, and greater email usage. Electronic communication channels (phone and texting) were related to positive relationship qualities, however, once accounting for attachment, only moderated effects were found. Interactions indicated texting was linked to more positive relationships for highly avoidant (but not less avoidant) participants. Additionally, email use was linked to more conflict for highly avoidant (but not less avoidant) participants. Finally, greater use of a SNS was positively associated with intimacy/support for those higher (but not lower) on attachment anxiety. This study illustrates how attachment can help to explain why the use of specific technology-based communication channels within romantic relationships may mean different things to different people, and that certain channels may be especially relevant in meeting insecurely attached individuals’ needs. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840345_19",
"text": "In decision making under uncertainty there are two main questions that need to be evaluated: i) What are the future consequences and associated uncertainties of an action, and ii) what is a good (or right) decision or action. Philosophically these issues are categorised as epistemic questions (i.e. questions of knowledge) and ethical questions (i.e. questions of moral and norms). This paper discusses the second issue, and evaluates different bases for a good decision, using different ethical theories as a starting point. This includes the utilitarian ethics of Bentley and Mills, and deontological ethics of Kant, Rawls and Habermas. The paper addresses various principles in risk management and risk related decision making, including cost benefit analysis, minimum safety criterion, the ALARP principle and the precautionary principle.",
"title": ""
}
] |
1840346 | Classification of histopathological images using convolutional neural network | [
{
"docid": "pos:1840346_0",
"text": "The Pap smear test is a manual screening procedure that is used to detect precancerous changes in cervical cells based on color and shape properties of their nuclei and cytoplasms. Automating this procedure is still an open problem due to the complexities of cell structures. In this paper, we propose an unsupervised approach for the segmentation and classification of cervical cells. The segmentation process involves automatic thresholding to separate the cell regions from the background, a multi-scale hierarchical segmentation algorithm to partition these regions based on homogeneity and circularity, and a binary classifier to finalize the separation of nuclei from cytoplasm within the cell regions. Classification is posed as a grouping problem by ranking the cells based on their feature characteristics modeling abnormality degrees. The proposed procedure constructs a tree using hierarchical clustering, and then arranges the cells in a linear order by using an optimal leaf ordering algorithm that maximizes the similarity of adjacent leaves without any requirement for training examples or parameter adjustment. Performance evaluation using two data sets show the effectiveness of the proposed approach in images having inconsistent staining, poor contrast, and overlapping cells.",
"title": ""
}
] | [
{
"docid": "neg:1840346_0",
"text": "Sematch is an integrated framework for the development, evaluation and application of semantic similarity for Knowledge Graphs. The framework provides a number of similarity tools and datasets, and allows users to compute semantic similarity scores of concepts, words, and entities, as well as to interact with Knowledge Graphs through SPARQL queries. Sematch focuses on knowledge-based semantic similarity that relies on structural knowledge in a given taxonomy (e.g. depth, path length, least common subsumer), and statistical information contents. Researchers can use Sematch to develop and evaluate semantic similarity metrics and exploit these metrics in applications. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840346_1",
"text": "Ridge regression is an algorithm that takes as input a large number of data points and finds the best-fit linear curve through these points. The algorithm is a building block for many machine-learning operations. We present a system for privacy-preserving ridge regression. The system outputs the best-fit curve in the clear, but exposes no other information about the input data. Our approach combines both homomorphic encryption and Yao garbled circuits, where each is used in a different part of the algorithm to obtain the best performance. We implement the complete system and experiment with it on real data-sets, and show that it significantly outperforms pure implementations based only on homomorphic encryption or Yao circuits.",
"title": ""
},
{
"docid": "neg:1840346_2",
"text": "Though convolutional neural networks have achieved stateof-the-art performance on various vision tasks, they are extremely vulnerable to adversarial examples, which are obtained by adding humanimperceptible perturbations to the original images. Adversarial examples can thus be used as an useful tool to evaluate and select the most robust models in safety-critical applications. However, most of the existing adversarial attacks only achieve relatively low success rates under the challenging black-box setting, where the attackers have no knowledge of the model structure and parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of only using the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines. To further improve the transferability, we (1) integrate the recently proposed momentum method into the attack process; and (2) attack an ensemble of networks simultaneously. By evaluating our method against top defense submissions and official baselines from NIPS 2017 adversarial competition, this enhanced attack reaches an average success rate of 73.0%, which outperforms the top 1 attack submission in the NIPS competition by a large margin of 6.6%. We hope that our proposed attack strategy can serve as a benchmark for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in future. The code is public available at https://github.com/cihangxie/DI-2-FGSM.",
"title": ""
},
{
"docid": "neg:1840346_3",
"text": "Our opinions and judgments are increasingly shaped by what we read on social media -- whether they be tweets and posts in social networks, blog posts, or review boards. These opinions could be about topics such as consumer products, politics, life style, or celebrities. Understanding how users in a network update opinions based on their neighbor's opinions, as well as what global opinion structure is implied when users iteratively update opinions, is important in the context of viral marketing and information dissemination, as well as targeting messages to users in the network.\n In this paper, we consider the problem of modeling how users update opinions based on their neighbors' opinions. We perform a set of online user studies based on the celebrated conformity experiments of Asch [1]. Our experiments are carefully crafted to derive quantitative insights into developing a model for opinion updates (as opposed to deriving psychological insights). We show that existing and widely studied theoretical models do not explain the entire gamut of experimental observations we make. This leads us to posit a new, nuanced model that we term the BVM. We present preliminary theoretical and simulation results on the convergence and structure of opinions in the entire network when users iteratively update their respective opinions according to the BVM. We show that consensus and polarization of opinions arise naturally in this model under easy to interpret initial conditions on the network.",
"title": ""
},
{
"docid": "neg:1840346_4",
"text": "A clear andpowerfulformalism for describing languages, both natural and artificial, follows f iom a method for expressing grammars in logic due to Colmerauer and Kowalski. This formalism, which is a natural extension o f context-free grammars, we call \"definite clause grammars\" (DCGs). A DCG provides not only a description of a language, but also an effective means for analysing strings o f that language, since the DCG, as it stands, is an executable program o f the programming language Prolog. Using a standard Prolog compiler, the DCG can be compiled into efficient code, making it feasible to implement practical language analysers directly as DCGs. This paper compares DCGs with the successful and widely used augmented transition network (ATN) formalism, and indicates how ATNs can be translated into DCGs. It is argued that DCGs can be at least as efficient as ATNs, whilst the DCG formalism is clearer, more concise and in practice more powerful",
"title": ""
},
{
"docid": "neg:1840346_5",
"text": "Translating data from linked data sources to the vocabulary that is expected by a linked data application requires a large number of mappings and can require a lot of structural transformations as well as complex property value transformations. The R2R mapping language is a language based on SPARQL for publishing expressive mappings on the web. However, the specification of R2R mappings is not an easy task. This paper therefore proposes the use of mapping patterns to semi-automatically generate R2R mappings between RDF vocabularies. In this paper, we first specify a mapping language with a high level of abstraction to transform data from a source ontology to a target ontology vocabulary. Second, we introduce the proposed mapping patterns. Finally, we present a method to semi-automatically generate R2R mappings using the mapping",
"title": ""
},
{
"docid": "neg:1840346_6",
"text": "The human arm has 7 degrees of freedom (DOF) while only 6 DOF are required to position the wrist and orient the palm. Thus, the inverse kinematics of an human arm has a nonunique solution. Resolving this redundancy becomes critical as the human interacts with a wearable robot and the inverse kinematics solution of these two coupled systems must be identical to guarantee an seamless integration. The redundancy of the arm can be formulated by defining the swivel angle, the rotation angle of the plane defined by the upper and lower arm around a virtual axis that connects the shoulder and wrist joints. Analyzing reaching tasks recorded with a motion capture system indicates that the swivel angle is selected such that when the elbow joint is flexed, the palm points to the head. Based on these experimental results, a new criterion is formed to resolve the human arm redundancy. This criterion was implemented into the control algorithm of an upper limb 7-DOF wearable robot. Experimental results indicate that by using the proposed redundancy resolution criterion, the error between the predicted and the actual swivel angle adopted by the motor control system is less then 5°.",
"title": ""
},
{
"docid": "neg:1840346_7",
"text": "This work investigates the use of linguistically motivated features to improve search, in particular for ranking answers to non-factoid questions. We show that it is possible to exploit existing large collections of question–answer pairs (from online social Question Answering sites) to extract such features and train ranking models which combine them effectively. We investigate a wide range of feature types, some exploiting natural language processing such as coarse word sense disambiguation, named-entity identification, syntactic parsing, and semantic role labeling. Our experiments demonstrate that linguistic features, in combination, yield considerable improvements in accuracy. Depending on the system settings we measure relative improvements of 14% to 21% in Mean Reciprocal Rank and Precision@1, providing one of the most compelling evidence to date that complex linguistic features such as word senses and semantic roles can have a significant impact on large-scale information retrieval tasks.",
"title": ""
},
{
"docid": "neg:1840346_8",
"text": "Comprehension is one fundamental process in the software life cycle. Although necessary, this comprehension is difficult to obtain due to amount and complexity of information related to software. Thus, software visualization techniques and tools have been proposed to facilitate the comprehension process and to reduce maintenance costs. This paper shows the results from a Literature Systematic Review to identify software visualization techniques and tools. We analyzed 52 papers and we identified 28 techniques and 33 tools for software visualization. Among these techniques, 71% have been implemented and available to users, 48% use 3D visualization and 80% are generated using static analysis.",
"title": ""
},
{
"docid": "neg:1840346_9",
"text": "We introduce MySong, a system that automatically chooses chords to accompany a vocal melody. A user with no musical experience can create a song with instrumental accompaniment just by singing into a microphone, and can experiment with different styles and chord patterns using interactions designed to be intuitive to non-musicians.\n We describe the implementation of MySong, which trains a Hidden Markov Model using a music database and uses that model to select chords for new melodies. Model parameters are intuitively exposed to the user. We present results from a study demonstrating that chords assigned to melodies using MySong and chords assigned manually by musicians receive similar subjective ratings. We then present results from a second study showing that thirteen users with no background in music theory are able to rapidly create musical accompaniments using MySong, and that these accompaniments are rated positively by evaluators.",
"title": ""
},
{
"docid": "neg:1840346_10",
"text": "Database forensics is a domain that uses database content and metadata to reveal malicious activities on database systems in an Internet of Things environment. Although the concept of database forensics has been around for a while, the investigation of cybercrime activities and cyber breaches in an Internet of Things environment would benefit from the development of a common investigative standard that unifies the knowledge in the domain. Therefore, this paper proposes common database forensic investigation processes using a design science research approach. The proposed process comprises four phases, namely: 1) identification; 2) artefact collection; 3) artefact analysis; and 4) the documentation and presentation process. It allows the reconciliation of the concepts and terminologies of all common database forensic investigation processes; hence, it facilitates the sharing of knowledge on database forensic investigation among domain newcomers, users, and practitioners.",
"title": ""
},
{
"docid": "neg:1840346_11",
"text": "We present a novel algorithm for automatically co-segmenting a set of shapes from a common family into consistent parts. Starting from over-segmentations of shapes, our approach generates the segmentations by grouping the primitive patches of the shapes directly and obtains their correspondences simultaneously. The core of the algorithm is to compute an affinity matrix where each entry encodes the similarity between two patches, which is measured based on the geometric features of patches. Instead of concatenating the different features into one feature descriptor, we formulate co-segmentation into a subspace clustering problem in multiple feature spaces. Specifically, to fuse multiple features, we propose a new formulation of optimization with a consistent penalty, which facilitates both the identification of most similar patches and selection of master features for two similar patches. Therefore the affinity matrices for various features are sparsity-consistent and the similarity between a pair of patches may be determined by part of (instead of all) features. Experimental results have shown how our algorithm jointly extracts consistent parts across the collection in a good manner.",
"title": ""
},
{
"docid": "neg:1840346_12",
"text": "This paper proposes a new approach to style, arising from our work on computational media using structural blending, which enriches the conceptual blending of cognitive linguistics with structure building operations in order to encompass syntax and narrative as well as metaphor. We have implemented both conceptual and structural blending, and conducted initial experiments with poetry, although the approach generalizes to other media. The central idea is to analyze style in terms of principles for blending, based on our £nding that very different principles from those of common sense blending are needed for some creative works.",
"title": ""
},
{
"docid": "neg:1840346_13",
"text": "Electrophysiological and computational studies suggest that nigro-striatal dopamine may play an important role in learning about sequences of environmentally important stimuli, particularly when this learning is based upon step-by-step associations between stimuli, such as in second-order conditioning. If so, one would predict that disruption of the midbrain dopamine system--such as occurs in Parkinson's disease--may lead to deficits on tasks that rely upon such learning processes. This hypothesis was tested using a \"chaining\" task, in which each additional link in a sequence of stimuli leading to reward is trained step-by-step, until a full sequence is learned. We further examined how medication (L-dopa) affects this type of learning. As predicted, we found that Parkinson's patients tested 'off' L-dopa performed as well as controls during the first phase of this task, when required to learn a simple stimulus-response association, but were impaired at learning the full sequence of stimuli. In contrast, we found that Parkinson's patients tested 'on' L-dopa performed better than those tested 'off', and no worse than controls, on all phases of the task. These findings suggest that the loss of dopamine that occurs in Parkinson's disease can lead to specific learning impairments that are predicted by electrophysiological and computational studies, and that enhancing dopamine levels with L-dopa alleviates this deficit. This last result raises questions regarding the mechanisms by which midbrain dopamine modulates learning in Parkinson's disease, and how L-dopa affects these processes.",
"title": ""
},
{
"docid": "neg:1840346_14",
"text": "Initial work on automatic emotion recognition concentrates mainly on audio-based emotion classification. Speech is the most important channel for the communication between humans and it may expected that emotional states are trans-fered though content, prosody or paralinguistic cues. Besides the audio modality, with the rapidly developing computer hardware and video-processing devices researches start exploring the video modality. Visual-based emotion recognition works focus mainly on the extraction and recognition of emotional information from the facial expressions. There are also attempts to classify emotional states from body or head gestures and to combine different visual modalities, for instance facial expressions and body gesture captured by two separate cameras [3]. Emotion recognition from psycho-physiological measurements, such as skin conductance, respiration, electro-cardiogram (ECG), electromyography (EMG), electroencephalography (EEG) is another attempt. In contrast to speech, gestures or facial expressions these biopotentials are the result of the autonomic nervous system and cannot be imitated [4]. Research activities in facial expression and speech based emotion recognition [6] are usually performed independently from each other. But in almost all practical applications people speak and exhibit facial expressions at the same time, and consequently both modalities should be used in order to perform robust affect recognition. Therefore, multimodal, and in particularly audiovisual emotion recognition has been emerging in recent times [11], for example multiple classifier systems have been widely investigated for the classification of human emotions [1, 9, 12, 14]. Combining classifiers is a promising approach to improve the overall classifier performance [13, 8]. In multiple classifier systems (MCS) it is assumed that the raw data X originates from an underlying source, but each classifier receives different subsets of (X) of the same raw input data X. Feature vector F j (X) are used as the input to the j−th classifier computing an estimate y j of the class membership of F j (X). This output y j might be a crisp class label or a vector of class memberships, e.g. estimates of posteriori probabilities. Based on the multiple classifier outputs y 1 ,. .. , y N the combiner produces the final decision y. Combiners used in this study are fixed transformations of the multiple classifier outputs y 1 ,. .. , y N. Examples of such combining rules are Voting, (weighted) Averaging, and Multiplying, just to mention the most popular types. 2 Friedhelm Schwenker In addition to a priori fixed combination rules the combiner can be a …",
"title": ""
},
{
"docid": "neg:1840346_15",
"text": "Tactile augmentation is a simple, safe, inexpensive interaction technique for adding physical texture and force feedback cues to virtual objects. This study explored whether virtual reality (VR) exposure therapy reduces fear of spiders and whether giving patients the illusion of physically touching the virtual spider increases treatment effectiveness. Eight clinically phobic students were randomly assigned to one of 3 groups—(a) no treatment, (b) VR with no tactile cues, or (c) VR with a physically “touchable” virtual spider—as were 28 nonclinically phobic students. Participants in the 2 VR treatment groups received three 1-hr exposure therapy sessions resulting in clinically significant drops in behavioral avoidance and subjective fear ratings. The tactile augmentation group showed the greatest progress on behavioral measures. On average, participants in this group, who only approached to 5.5 ft of a live spider on the pretreatment Behavioral Avoidance Test (Garcia-Palacios, 2002), were able to approach to 6 in. of the spider after VR exposure treatment and did so with much less anxiety (see www.vrpain.com for details). Practical implications are discussed. INTERNATIONAL JOURNAL OF HUMAN–COMPUTER INTERACTION, 16(2), 283–300 Copyright © 2003, Lawrence Erlbaum Associates, Inc.",
"title": ""
},
{
"docid": "neg:1840346_16",
"text": "Objective: Mastitis is one of the most costly diseases in dairy cows, which greatly decreases milk production. Use of antibiotics in cattle leads to antibiotic-resistance of mastitis-causing bacteria. The present study aimed to investigate synergistic effect of silver nanoparticles (AgNPs) with neomycin or gentamicin antibiotic on mastitis-causing Staphylococcus aureus. Materials and Methods: In this study, 46 samples of milk were taken from the cows with clinical and subclinical mastitis during the august-October 2015 sampling period. In addition to biochemical tests, nuc gene amplification by PCR was used to identify strains of Staphylococcus aureus. Disk diffusion test and microdilution were performed to determine minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC). Fractional Inhibitory Concentration (FIC) index was calculated to determine the interaction between a combination of AgNPs and each one of the antibiotics. Results: Twenty strains of Staphylococcus aureus were isolated from 46 milk samples and were confirmed by PCR. Based on disk diffusion test, 35%, 10% and 55% of the strains were respectively susceptible, moderately susceptible and resistant to gentamicin. In addition, 35%, 15% and 50% of the strains were respectively susceptible, moderately susceptible and resistant to neomycin. According to FIC index, gentamicin antibiotic and AgNPs had synergistic effects in 50% of the strains. Furthermore, neomycin antibiotic and AgNPs had synergistic effects in 45% of the strains. Conclusion: It could be concluded that a combination of AgNPs with either gentamicin or neomycin showed synergistic antibacterial properties in Staphylococcus aureus isolates from mastitis. In addition, some hypotheses were proposed to explain antimicrobial mechanism of the combination.",
"title": ""
},
{
"docid": "neg:1840346_17",
"text": "Strong motivation for developing new prosthetic hand devices is provided by the fact that low functionality and controllability—in addition to poor cosmetic appearance—are the most important reasons why amputees do not regularly use their prosthetic hands. This paper presents the design of the CyberHand, a cybernetic anthropomorphic hand intended to provide amputees with functional hand replacement. Its design was bio-inspired in terms of its modular architecture, its physical appearance, kinematics, sensorization, and actuation, and its multilevel control system. Its underactuated mechanisms allow separate control of each digit as well as thumb–finger opposition and, accordingly, can generate a multitude of grasps. Its sensory system was designed to provide proprioceptive information as well as to emulate fundamental functional properties of human tactile mechanoreceptors of specific importance for grasp-and-hold tasks. The CyberHand control system presumes just a few efferent and afferent channels and was divided in two main layers: a high-level control that interprets the user’s intention (grasp selection and required force level) and can provide pertinent sensory feedback and a low-level control responsible for actuating specific grasps and applying the desired total force by taking advantage of the intelligent mechanics. The grasps made available by the high-level controller include those fundamental for activities of daily living: cylindrical, spherical, tridigital (tripod), and lateral grasps. The modular and flexible design of the CyberHand makes it suitable for incremental development of sensorization, interfacing, and control strategies and, as such, it will be a useful tool not only for clinical research but also for addressing neuroscientific hypotheses regarding sensorimotor control.",
"title": ""
},
{
"docid": "neg:1840346_18",
"text": "With many advantageous features such as softness and better biocompatibility, flexible electronic device is a promising technology that can enable many emerging applications. However, most of the existing applications with flexible devices are sensors and drivers, while there is nearly no utilization aiming at complex computation, because the flexible devices have lower electron mobility, simple structure, and large process variation. In this paper, we propose an innovative method that enabled flexible devices to implement real-time and energy-efficient Difference-of-Gaussian, which illustrate feasibility and potentials for the flexible devices to achieve complicated real-time computation in future generation products.",
"title": ""
}
] |
1840347 | Shallow and Deep Convolutional Networks for Saliency Prediction | [
{
"docid": "pos:1840347_0",
"text": "The Pascal Visual Object Classes (VOC) challenge consists of two components: (i) a publicly available dataset of images together with ground truth annotation and standardised evaluation software; and (ii) an annual competition and workshop. There are five challenges: classification, detection, segmentation, action classification, and person layout. In this paper we provide a review of the challenge from 2008–2012. The paper is intended for two audiences: algorithm designers, researchers who want to see what the state of the art is, as measured by performance on the VOC datasets, along with the limitations and weak points of the current generation of algorithms; and, challenge designers, who want to see what we as organisers have learnt from the process and our recommendations for the organisation of future challenges. To analyse the performance of submitted algorithms on the VOC datasets we introduce a number of novel evaluation methods: a bootstrapping method for determining whether differences in the performance of two algorithms are significant or not; a normalised average precision so that performance can be compared across classes with different proportions of positive instances; a clustering method for visualising the performance across multiple algorithms so that the hard and easy images can be identified; and the use of a joint classifier over the submitted algorithms in order to measure their complementarity and combined performance. We also analyse the community’s progress through time using the methods of Hoiem et al. (Proceedings of European Conference on Computer Vision, 2012) to identify the types of occurring errors. We conclude the paper with an appraisal of the aspects of the challenge that worked well, and those that could be improved in future challenges.",
"title": ""
},
{
"docid": "pos:1840347_1",
"text": "It is believed that eye movements in free-viewing of natural scenes are directed by both bottom-up visual saliency and top-down visual factors. In this paper, we propose a novel computational framework to simultaneously learn these two types of visual features from raw image data using a multiresolution convolutional neural network (Mr-CNN) for predicting eye fixations. The Mr-CNN is directly trained from image regions centered on fixation and non-fixation locations over multiple resolutions, using raw image pixels as inputs and eye fixation attributes as labels. Diverse top-down visual features can be learned in higher layers. Meanwhile bottom-up visual saliency can also be inferred via combining information over multiple resolutions. Finally, optimal integration of bottom-up and top-down cues can be learned in the last logistic regression layer to predict eye fixations. The proposed approach achieves state-of-the-art results over four publically available benchmark datasets, demonstrating the superiority of our work.",
"title": ""
}
] | [
{
"docid": "neg:1840347_0",
"text": "• Oracle experiment: to understand how well these attributes, when used together, can explain persuasiveness, we train 3 linear SVM regressors, one for each component type, to score an arguments persuasiveness using gold attribute’s as features • Two human annotators who were both native speakers of English were first familiarized with the rubrics and definitions and then trained on five essays • 30 essays were doubly annotated for computing inter-annotator agreement • Each of the remaining essays was annotated by one of the annotators • Score/Class distributions by component type: Give me More Feedback: Annotating Argument Persusiveness and Related Attributes in Student Essays",
"title": ""
},
{
"docid": "neg:1840347_1",
"text": "Most speech recognition applications in use today rely heavily on confidence measure for making optimal decisions. In this paper, we aim to answer the question: what can be done to improve the quality of confidence measure if we cannot modify the speech recognition engine? The answer provided in this paper is a post-processing step called confidence calibration, which can be viewed as a special adaptation technique applied to confidence measure. Three confidence calibration methods have been developed in this work: the maximum entropy model with distribution constraints, the artificial neural network, and the deep belief network. We compare these approaches and demonstrate the importance of key features exploited: the generic confidence-score, the application-dependent word distribution, and the rule coverage ratio. We demonstrate the effectiveness of confidence calibration on a variety of tasks with significant normalized cross entropy increase and equal error rate reduction.",
"title": ""
},
{
"docid": "neg:1840347_2",
"text": "Mobile edge clouds (MECs) are small cloud-like infrastructures deployed in close proximity to users, allowing users to have seamless and low-latency access to cloud services. When users move across different locations, their service applications often need to be migrated to follow the user so that the benefit of MEC is maintained. In this paper, we propose a layered framework for migrating running applications that are encapsulated either in virtual machines (VMs) or containers. We evaluate the migration performance of various real applications under the proposed framework.",
"title": ""
},
{
"docid": "neg:1840347_3",
"text": "We describe a posterolateral transfibular neck approach to the proximal tibia. This approach was developed as an alternative to the anterolateral approach to the tibial plateau for the treatment of two fracture subtypes: depressed and split depressed fractures in which the comminution and depression are located in the posterior half of the lateral tibial condyle. These fractures have proved particularly difficult to reduce and adequately internally fix through an anterior or anterolateral approach. The approach described in this article exposes the posterolateral aspect of the tibial plateau between the posterior margin of the iliotibial band and the posterior cruciate ligament. The approach allows lateral buttressing of the lateral tibial plateau and may be combined with a simultaneous posteromedial and/or anteromedial approach to the tibial plateau. Critically, the proximal tibial soft tissue envelope and its blood supply are preserved. To date, we have used this approach either alone or in combination with a posteromedial approach for the successful reduction of tibial plateau fractures in eight patients. No complications related to this approach were documented, including no symptoms related to the common peroneal nerve, and all fractures and fibular neck osteotomies healed uneventfully.",
"title": ""
},
{
"docid": "neg:1840347_4",
"text": "A novel ultra-wideband bandpass filter (BPF) is presented using a back-to-back microstrip-to-coplanar waveguide (CPW) transition employed as the broadband balun structure in this letter. The proposed BPF is based on the electromagnetic coupling between open-circuited microstrip line and short-circuited CPW. The equivalent circuit of half of the filter is used to calculate the input impedance. The broadband microstip-to-CPW transition is designed at the center frequency of 6.85 GHz. The simulated and measured results are shown in this letter.",
"title": ""
},
{
"docid": "neg:1840347_5",
"text": "Changes in the background EEG activity occurring at the same time as visual and auditory evoked potentials, as well as during the interstimulus interval in a CNV paradigm were analysed in human subjects, using serial power measurements of overlapping EEG segments. The analysis was focused on the power of the rhythmic activity within the alpha band (RAAB power). A decrease in RAAB power occurring during these event-related phenomena was indicative of desynchronization. Phasic, i.e. short lasting, localised desynchronization was present during sensory stimulation, and also preceding the imperative signal and motor response (motor preactivation) in the CNV paradigm.",
"title": ""
},
{
"docid": "neg:1840347_6",
"text": "The abundance of event data in today’s information systems makes it possible to “confront” process models with the actual observed behavior. Process mining techniques use event logs to discover process models that describe the observed behavior, and to check conformance of process models by diagnosing deviations between models and reality. In many situations, it is desirable to mediate between a preexisting model and observed behavior. Hence, we would like to repair the model while improving the correspondence between model and log as much as possible. The approach presented in this article assigns predefined costs to repair actions (allowing inserting or skipping of activities). Given a maximum degree of change, we search for models that are optimal in terms of fitness—that is, the fraction of behavior in the log not possible according to the model is minimized. To compute fitness, we need to align the model and log, which can be time consuming. Hence, finding an optimal repair may be intractable. We propose different alternative approaches to speed up repair. The number of alignment computations can be reduced dramatically while still returning near-optimal repairs. The different approaches have been implemented using the process mining framework ProM and evaluated using real-life logs.",
"title": ""
},
{
"docid": "neg:1840347_7",
"text": "Acute renal failure increases risk of death after cardiac surgery. However, it is not known whether more subtle changes in renal function might have an impact on outcome. Thus, the association between small serum creatinine changes after surgery and mortality, independent of other established perioperative risk indicators, was analyzed. In a prospective cohort study in 4118 patients who underwent cardiac and thoracic aortic surgery, the effect of changes in serum creatinine within 48 h postoperatively on 30-d mortality was analyzed. Cox regression was used to correct for various established demographic preoperative risk indicators, intraoperative parameters, and postoperative complications. In the 2441 patients in whom serum creatinine decreased, early mortality was 2.6% in contrast to 8.9% in patients with increased postoperative serum creatinine values. Patients with large decreases (DeltaCrea <-0.3 mg/dl) showed a progressively increasing 30-d mortality (16 of 199 [8%]). Mortality was lowest (47 of 2195 [2.1%]) in patients in whom serum creatinine decreased to a maximum of -0.3 mg/dl; mortality increased to 6% in patients in whom serum creatinine remained unchanged or increased up to 0.5 mg/dl. Mortality (65 of 200 [32.5%]) was highest in patients in whom creatinine increased > or =0.5 mg/dl. For all groups, increases in mortality remained significant in multivariate analyses, including postoperative renal replacement therapy. After cardiac and thoracic aortic surgery, 30-d mortality was lowest in patients with a slight postoperative decrease in serum creatinine. Any even minimal increase or profound decrease of serum creatinine was associated with a substantial decrease in survival.",
"title": ""
},
{
"docid": "neg:1840347_8",
"text": "We present an incremental maintenance algorithm for leapfrog triejoin. The algorithm maintains rules in time proportional (modulo log factors) to the edit distance between leapfrog triejoin traces.",
"title": ""
},
{
"docid": "neg:1840347_9",
"text": "The past few years have seen rapid advances in communication and information technology (C&IT), and the pervasion of the worldwide web into everyday life has important implications for education. Most medical schools provide extensive computer networks for their students, and these are increasingly becoming a central component of the learning and teaching environment. Such advances bring new opportunities and challenges to medical education, and are having an impact on the way that we teach and on the way that students learn, and on the very design and delivery of the curriculum. The plethora of information available on the web is overwhelming, and both students and staff need to be taught how to manage it effectively. Medical schools must develop clear strategies to address the issues raised by these technologies. We describe how medical schools are rising to this challenge, look at some of the ways in which communication and information technology can be used to enhance the learning and teaching environment, and discuss the potential impact of future developments on medical education.",
"title": ""
},
{
"docid": "neg:1840347_10",
"text": "We present a novel view of the structuring of distributed systems, and a few examples of its utilization in an object-oriented context. In a distributed system, the structure of a service or subsystem may be complex, being implemented as a set of communicating server objects; however, this complexity of structure should not be apparent to the client. In our proposal, a client must first acquire a local object, called a proxy, in order to use such a service. The proxy represents the whole set of servers. The client directs all its communication to the proxy. The proxy, and all the objects it represents, collectively form one distributed object, which is not decomposable by the client. Any higher-level communication protocols are internal to this distributed object. Such a view provides a powerful structuring framework for distributed systems; it can be implemented cheaply without sacrificing much flexibility. It subsumes may previous proposals, but encourages better information-hiding and encapsulation.",
"title": ""
},
{
"docid": "neg:1840347_11",
"text": "This paper addresses the stability problem of a class of delayed neural networks with time-varying impulses. One important feature of the time-varying impulses is that both the stabilizing and destabilizing impulses are considered simultaneously. Based on the comparison principle, the stability of delayed neural networks with time-varying impulses is investigated. Finally, the simulation results demonstrate the effectiveness of the results.",
"title": ""
},
{
"docid": "neg:1840347_12",
"text": "The classical uncertainty principle provides a fundamental tradeoff in the localization of a signal in the time and frequency domains. In this paper we describe a similar tradeoff for signals defined on graphs. We describe the notions of “spread” in the graph and spectral domains, using the eigenvectors of the graph Laplacian as a surrogate Fourier basis. We then describe how to find signals that, among all signals with the same spectral spread, have the smallest graph spread about a given vertex. For every possible spectral spread, the desired signal is the solution to an eigenvalue problem. Since localization in graph and spectral domains is a desirable property of the elements of wavelet frames on graphs, we compare the performance of some existing wavelet transforms to the obtained bound.",
"title": ""
},
{
"docid": "neg:1840347_13",
"text": "Following the Daubert ruling in 1993, forensic evidence based on fingerprints was first challenged in the 1999 case of the U.S. versus Byron C. Mitchell and, subsequently, in 20 other cases involving fingerprint evidence. The main concern with the admissibility of fingerprint evidence is the problem of individualization, namely, that the fundamental premise for asserting the uniqueness of fingerprints has not been objectively tested and matching error rates are unknown. In order to assess the error rates, we require quantifying the variability of fingerprint features, namely, minutiae in the target population. A family of finite mixture models has been developed in this paper to represent the distribution of minutiae in fingerprint images, including minutiae clustering tendencies and dependencies in different regions of the fingerprint image domain. A mathematical model that computes the probability of a random correspondence (PRC) is derived based on the mixture models. A PRC of 2.25 times10-6 corresponding to 12 minutiae matches was computed for the NIST4 Special Database, when the numbers of query and template minutiae both equal 46. This is also the estimate of the PRC for a target population with a similar composition as that of NIST4.",
"title": ""
},
{
"docid": "neg:1840347_14",
"text": "OBJECTIVES\nBiliary injuries are frequently accompanied by vascular injuries, which may worsen the bile duct injury and cause liver ischemia. We performed an analytical review with the aim of defining vasculobiliary injury and setting out the important issues in this area.\n\n\nMETHODS\nA literature search of relevant terms was performed using OvidSP. Bibliographies of papers were also searched to obtain older literature.\n\n\nRESULTS\n Vasculobiliary injury was defined as: an injury to both a bile duct and a hepatic artery and/or portal vein; the bile duct injury may be caused by operative trauma, be ischaemic in origin or both, and may or may not be accompanied by various degrees of hepatic ischaemia. Right hepatic artery (RHA) vasculobiliary injury (VBI) is the most common variant. Injury to the RHA likely extends the biliary injury to a higher level than the gross observed mechanical injury. VBI results in slow hepatic infarction in about 10% of patients. Repair of the artery is rarely possible and the overall benefit unclear. Injuries involving the portal vein or common or proper hepatic arteries are much less common, but have more serious effects including rapid infarction of the liver.\n\n\nCONCLUSIONS\nRoutine arteriography is recommended in patients with a biliary injury if early repair is contemplated. Consideration should be given to delaying repair of a biliary injury in patients with occlusion of the RHA. Patients with injuries to the portal vein or proper or common hepatic should be emergently referred to tertiary care centers.",
"title": ""
},
{
"docid": "neg:1840347_15",
"text": "Switching audio amplifiers are widely used in HBridge topology thanks to their high efficiency; however low audio performances in single ended power stage topology is a strong weakness leading to not be used for headset applications. This paper explains the importance of efficient error correction in Single Ended Class-D audio amplifier. A hysteresis control for Class-D amplifier with a variable window is also presented. The analyses are verified by simulations and measurements. The proposed solution was fabricated in 0.13µm CMOS technology with an active area of 0.2mm2. It could be used in single ended output configuration fully compatible with common headset connectors. The proposed Class-D amplifier achieves a harmonic distortion of 0.01% and a power supply rejection of 70dB with a quite low static current consumption.",
"title": ""
},
{
"docid": "neg:1840347_16",
"text": "The research of probiotics for aquatic animals is increasing with the demand for environmentfriendly aquaculture. The probiotics were defined as live microbial feed supplements that improve health of man and terrestrial livestock. The gastrointestinal microbiota of fish and shellfish are peculiarly dependent on the external environment, due to the water flow passing through the digestive tract. Most bacterial cells are transient in the gut, with continuous intrusion of microbes coming from water and food. Some commercial products are referred to as probiotics, though they were designed to treat the rearing medium, not to supplement the diet. This extension of the probiotic concept is pertinent when the administered microbes survive in the gastrointestinal tract. Otherwise, more general terms are suggested, like biocontrol when the treatment is antagonistic to pathogens, or bioremediation when water quality is improved. However, the first probiotics tested in fish were commercial preparations devised for land animals. Though some effects were observed with such preparations, the survival of these bacteria was uncertain in aquatic environment. Most attempts to propose probiotics have been undertaken by isolating and selecting strains from aquatic environment. These microbes were Vibrionaceae, pseudomonads, lactic acid bacteria, Bacillus spp. and yeasts. Three main characteristics have been searched in microbes as candidates Ž . to improve the health of their host. 1 The antagonism to pathogens was shown in vitro in most Ž . Ž . cases. 2 The colonization potential of some candidate probionts was also studied. 3 Challenge tests confirmed that some strains could increase the resistance to disease of their host. Many other beneficial effects may be expected from probiotics, e.g., competition with pathogens for nutrients or for adhesion sites, and stimulation of the immune system. The most promising prospects are sketched out, but considerable efforts of research will be necessary to develop the applications to aquaculture. q 1999 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840347_17",
"text": "This paper revisits the problem of optimal learning and decision-making when different misclassification errors incur different penalties. We characterize precisely but intuitively when a cost matrix is reasonable, and we show how to avoid the mistake of defining a cost matrix that is economically incoherent. For the two-class case, we prove a theorem that shows how to change the proportion of negative examples in a training set in order to make optimal cost-sensitive classification decisions using a classifier learned by a standard non-costsensitive learning method. However, we then argue that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision tree learning methods. Accordingly, the recommended way of applying one of these methods in a domain with differing misclassification costs is to learn a classifier from the training set as given, and then to compute optimal decisions explicitly using the probability estimates given by the classifier. 1 Making decisions based on a cost matrix Given a specification of costs for correct and incorrect predictions, an example should be predicted to have the class that leads to the lowest expected cost, where the expectatio n is computed using the conditional probability of each class given the example. Mathematically, let the (i; j) entry in a cost matrixC be the cost of predicting class i when the true class isj. If i = j then the prediction is correct, while if i 6= j the prediction is incorrect. The optimal prediction for an examplex is the classi that minimizes L(x; i) =Xj P (jjx)C(i; j): (1) Costs are not necessarily monetary. A cost can also be a waste of time, or the severity of an illness, for example. For eachi,L(x; i) is a sum over the alternative possibilities for the true class of x. In this framework, the role of a learning algorithm is to produce a classifier that for any example x can estimate the probability P (jjx) of each classj being the true class ofx. For an examplex, making the predictioni means acting as if is the true class of x. The essence of cost-sensitive decision-making is that it can be optimal to ct as if one class is true even when some other class is more probable. For example, it can be rational not to approve a large credit card transaction even if the transaction is mos t likely legitimate. 1.1 Cost matrix properties A cost matrixC always has the following structure when there are only two classes: actual negative actual positive predict negative C(0; 0) = 00 C(0; 1) = 01 predict positive C(1; 0) = 10 C(1; 1) = 11 Recent papers have followed the convention that cost matrix rows correspond to alternative predicted classes, whi le columns correspond to actual classes, i.e. row/column = i/j predicted/actual. In our notation, the cost of a false positive is 10 while the cost of a false negative is 01. Conceptually, the cost of labeling an example incorrectly should always be greater th an the cost of labeling it correctly. Mathematically, it shoul d always be the case that 10 > 00 and 01 > 11. We call these conditions the “reasonableness” conditions. Suppose that the first reasonableness condition is violated , so 00 10 but still 01 > 11. In this case the optimal policy is to label all examples positive. Similarly, if 10 > 00 but 11 01 then it is optimal to label all examples negative. We leave the case where both reasonableness conditions are violated for the reader to analyze. 
Margineantu[2000] has pointed out that for some cost matrices, some class labels are never predicted by the optimal policy as given by Equation (1). We can state a simple, intuitive criterion for when this happens. Say that row m dominates rown in a cost matrixC if for all j,C(m; j) C(n; j). In this case the cost of predictingis no greater than the cost of predictingm, regardless of what the true class j is. So it is optimal never to predict m. As a special case, the optimal prediction is alwaysn if row n is dominated by all other rows in a cost matrix. The two reasonableness conditions for a two-class cost matrix imply that neither row in the matrix dominates the other. Given a cost matrix, the decisions that are optimal are unchanged if each entry in the matrix is multiplied by a positiv e constant. This scaling corresponds to changing the unit of account for costs. Similarly, the decisions that are optima l are unchanged if a constant is added to each entry in the matrix. This shifting corresponds to changing the baseline aw ay from which costs are measured. By scaling and shifting entries, any two-class cost matrix that satisfies the reasonab leness conditions can be transformed into a simpler matrix tha t always leads to the same decisions:",
"title": ""
},
{
"docid": "neg:1840347_18",
"text": "Empirical methods in geoparsing have thus far lacked a standard evaluation framework describing the task, data and metrics used to establish state-of-the-art systems. Evaluation is further made inconsistent, even unrepresentative of real world usage, by the lack of distinction between the different types of toponyms, which necessitates new guidelines, a consolidation of metrics and a detailed toponym taxonomy with implications for Named Entity Recognition (NER). To address these deficiencies, our manuscript introduces such a framework in three parts. Part 1) Task Definition: clarified via corpus linguistic analysis proposing a fine-grained Pragmatic Taxonomy of Toponyms with new guidelines. Part 2) Evaluation Data: shared via a dataset called GeoWebNews to provide test/train data to enable immediate use of our contributions. In addition to fine-grained Geotagging and Toponym Resolution (Geocoding), this dataset is also suitable for prototyping machine learning NLP models. Part 3) Metrics: discussed and reviewed for a rigorous evaluation with appropriate recommendations for NER/Geoparsing practitioners. We gratefully acknowledge the funding support of the Natural Environment Research Council (NERC) PhD Studentship (Milan Gritta NE/M009009/1), EPSRC (Nigel Collier EP/M005089/1) and MRC (Mohammad Taher Pilehvar MR/M025160/1 for PheneBank). We also acknowledge Cambridge University linguists Mina Frost and Qianchu (Flora) Liu for providing expertise and verification (IAA) during dataset construction/annotation. Milan Gritta E-mail: mg711@cam.ac.uk Mohammad Taher Pilehvar E-mail: mp792@cam.ac.uk Nigel Collier E-mail: nhc30@cam.ac.uk Language Technology Lab (LTL) Department of Theoretical and Applied Linguistics (DTAL) University of Cambridge, 9 West Road, Cambridge CB3 9DP ar X iv :1 81 0. 12 36 8v 2 [ cs .C L ] 2 N ov 2 01 8 2 Milan Gritta et al.",
"title": ""
},
{
"docid": "neg:1840347_19",
"text": "This paper presents a scalable method to efficiently search for the most likely state trajectory leading to an event given only a simulator of a system. Our approach uses a reinforcement learning formulation and solves it using Monte Carlo Tree Search (MCTS). The approach places very few requirements on the underlying system, requiring only that the simulator provide some basic controls, the ability to evaluate certain conditions, and a mechanism to control the stochasticity in the system. Access to the system state is not required, allowing the method to support systems with hidden state. The method is applied to stress test a prototype aircraft collision avoidance system to identify trajectories that are likely to lead to near mid-air collisions. We present results for both single and multi-threat encounters and discuss their relevance. Compared with direct Monte Carlo search, this MCTS method performs significantly better both in finding events and in maximizing their likelihood.",
"title": ""
}
] |
1840348 | Discourse Marker Augmented Network with Reinforcement Learning for Natural Language Inference | [
{
"docid": "pos:1840348_0",
"text": "This paper presents the first use of a computational model of natural logic—a system of logical inference which operates over natural language—for textual inference. Most current approaches to the PASCAL RTE textual inference task achieve robustness by sacrificing semantic precision; while broadly effective, they are easily confounded by ubiquitous inferences involving monotonicity. At the other extreme, systems which rely on first-order logic and theorem proving are precise, but excessively brittle. This work aims at a middle way. Our system finds a low-cost edit sequence which transforms the premise into the hypothesis; learns to classify entailment relations across atomic edits; and composes atomic entailments into a top-level entailment judgment. We provide the first reported results for any system on the FraCaS test suite. We also evaluate on RTE3 data, and show that hybridizing an existing RTE system with our natural logic system yields significant performance gains.",
"title": ""
},
{
"docid": "pos:1840348_1",
"text": "Previous work combines word-level and character-level representations using concatenation or scalar weighting, which is suboptimal for high-level tasks like reading comprehension. We present a fine-grained gating mechanism to dynamically combine word-level and character-level representations based on properties of the words. We also extend the idea of fine-grained gating to modeling the interaction between questions and paragraphs for reading comprehension. Experiments show that our approach can improve the performance on reading comprehension tasks, achieving new state-of-the-art results on the Children’s Book Test and Who Did What datasets. To demonstrate the generality of our gating mechanism, we also show improved results on a social media tag prediction task.1",
"title": ""
},
{
"docid": "pos:1840348_2",
"text": "This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.",
"title": ""
}
] | [
{
"docid": "neg:1840348_0",
"text": "Male genital injuries, demand prompt management to prevent long-term sexual and psychological damage. Injuries to the scrotum and contents may produce impaired fertility.We report our experience in diagnosing and managing a case of a foreign body in the scrotum following a boat engine blast accident. This case report highlights the need for a good history and thorough general examination to establish the mechanism of injury in order to distinguish between an embedded penetrating projectile injury and an injury with an exit wound. Prompt surgical exploration with hematoma evacuation limits complications.",
"title": ""
},
{
"docid": "neg:1840348_1",
"text": "A model free auto tuning algorithm is developed by using simultaneous perturbation stochastic approximation (SPSA). For such a method, plant models are not required. A set of closed loop experiments are conducted to generate data for an online optimization procedure. The optimum of the parameters of the restricted structured controllers will be found via SPSA algorithm. Compared to the conventional gradient approximation methods, SPSA only needs the small number of measurement of the cost function. It will be beneficial to application with high dimensional parameters. In the paper, a cost function is formulated to directly reflect the control performances widely used in industry, like overshoot, settling time and integral of absolute error. Therefore, the proposed auto tuning method will naturally lead to the desired closed loop performance. A case study of auto tuning of spool position control in a twin spool two stage valve is conducted. Both simulation and experimental study in TI C2000 target demonstrate effectiveness of the algorithm.",
"title": ""
},
{
"docid": "neg:1840348_2",
"text": "OBJECTIVE\nThis study aims to compare how national guidelines approach the management of obesity in reproductive age women.\n\n\nSTUDY DESIGN\nWe conducted a search for national guidelines in the English language on the topic of obesity surrounding the time of a pregnancy. We identified six primary source documents and several secondary source documents from five countries. Each document was then reviewed to identify: (1) statements acknowledging increased health risks related to obesity and reproductive outcomes, (2) recommendations for the management of obesity before, during, or after pregnancy.\n\n\nRESULTS\nAll guidelines cited an increased risk for miscarriage, birth defects, gestational diabetes, hypertension, fetal growth abnormalities, cesarean sections, difficulty with anesthesia, postpartum hemorrhage, and obesity in offspring. Counseling on the risks of obesity and weight loss before pregnancy were universal recommendations. There were substantial differences in the recommendations pertaining to gestational weight gain goals, nutrient and vitamin supplements, screening for gestational diabetes, and thromboprophylaxis among the guidelines.\n\n\nCONCLUSION\nStronger evidence from randomized trials is needed to devise consistent recommendations for obese reproductive age women. This research may also assist clinicians in overcoming one of the many obstacles they encounter when providing care to obese women.",
"title": ""
},
{
"docid": "neg:1840348_3",
"text": "the paper presents a model integrating theories from collaboration research (i.e., social presence theory, channel expansion theory, and the task closure model) with a recent theory from technology adoption research (i.e., unified theory of acceptance and use of technology, abbreviated to utaut) to explain the adoption and use of collaboration technology. we theorize that collaboration technology characteristics, individual and group characteristics, task characteristics, and situational characteristics are predictors of performance expectancy, effort expectancy, social influence, and facilitating conditions in utaut. we further theorize that the utaut constructs, in concert with gender, age, and experience, predict intention to use a collaboration technology, which in turn predicts use. we conducted two field studies in Finland among (1) 349 short message service (SMS) users and (2) 447 employees who were potential users of a new collaboration technology in an organization. Our model was supported in both studies. the current work contributes to research by developing and testing a technology-specific model of adoption in the collaboration context. key worDS anD phraSeS: channel expansion theory, collaboration technologies, social presence theory, task closure model, technology acceptance, technology adoption, unified theory of acceptance and use of technology. technology aDoption iS one of the moSt mature StreamS in information systems (IS) research (see [65, 76, 77]). the benefit of such maturity is the availability of frameworks and models that can be applied to the study of interesting problems. while practical contributions are certain to accrue from such investigations, a key challenge for researchers is to ensure that studies yield meaningful scientific contributions. there have been several models explaining technology adoption and use, particularly since the late 1980s [76]. In addition to noting the maturity of this stream of research, Venkatesh et al. identified several important directions for future research and suggested that “one of the most important directions for future research is to tie this mature stream [technology adoption] of research into other established streams of work” [76, p. 470] (see also [70]). In research on technology adoption, the technology acceptance model (taM) [17] is the most widely employed theoretical model [76]. taM has been applied to a range of technologies and has been very predictive of individual technology adoption and use. the unified theory of acceptance and use of technology (utaut) [76] integrated eight distinct models of technology adoption and use, including taM. utaut extends taM by incorporating social influence and facilitating conditions. utaut is based in PrEDICtING COllaBOratION tEChNOlOGY uSE 11 the rich tradition of taM and provides a foundation for future research in technology adoption. utaut also incorporates four different moderators of key relationships. although utaut is more integrative, like taM, it still suffers from the limitation of being predictive but not particularly useful in providing explanations that can be used to design interventions that foster adoption (e.g., [72, 73]). there has been some research on general antecedents of perceived usefulness and perceived ease of use that are technology independent (e.g., [69, 73]). 
But far less attention has been paid to technology-specific antecedents that may provide significantly stronger guidance for the successful design and implementation of specific types of systems. Developing theory that is more focused and context specific—here, technology specific—is considered an important frontier for advances in IS research [53, 70]. Building on UTAUT to develop a model that will be more helpful will require a better understanding of how the UTAUT factors play out with different technologies [7, 76]. As a first step, it is important to extend UTAUT to a specific class of technologies [70, 76]. A model focused on a specific class of technology will be more explanatory compared to a general model that attempts to address many classes of technologies [70]. Such a focused model will also provide designers and managers with levers to augment adoption and use. One example is collaboration technology [20], a technology designed to assist two or more people to work together at the same place and time or at different places or different times [25, 26]. Technologies that facilitate collaboration via electronic means have become an important component of day-to-day life (both in and out of the workplace). Thus, it is not surprising that collaboration technologies have received considerable research attention over the past decades [24, 26, 77]. Several studies have examined the adoption of collaboration technologies, such as voice mail, e-mail, and group support systems (e.g., [3, 4, 44, 56, 63]). These studies focused on organizational factors leading to adoption (e.g., size, centralization) or on testing the boundary conditions of TAM (e.g., could TAM be applied to collaboration technologies). Given that adoption of collaboration technologies is not progressing as fast or as broadly as expected [20, 54], it seems a different approach is needed. It is possible that these two streams could inform each other to develop a more complete understanding of collaboration technology use, one in which we can begin to understand how collaboration factors influence adoption and use. A model that integrates knowledge from technology adoption and collaboration technology research is lacking, a void that this paper seeks to address. In doing so, we answer the call for research by Venkatesh et al. [76] to integrate the technology adoption stream with another dominant research stream, which in turn will move us toward a more cumulative and expansive nomological network (see [41, 70]). We also build on the work of Wixom and Todd [80] by examining the important role of technology characteristics leading to use. The current study will help us take a step toward alleviating one of the criticisms of IS research discussed by Benbasat and Zmud, especially in the context of technology adoption research: “we should neither focus our research on variables outside the nomological net nor exclusively on intermediate-level variables, such as ease of use, usefulness or behavioral intentions, without clarifying the IS nuances involved” [6, p. 193]. Specifically, our work accomplishes the goal of “developing conceptualizations and theories of IT [information technology] artifacts; and incorporating such conceptualizations and theories of IT artifacts” [53, p. 130] by extending UTAUT to incorporate the specific artifact of collaboration technology and its related characteristics.
In addition to the scientific value, such a model will provide greater value to practitioners who are attempting to foster successful use of a specific technology. Given this background, the primary objective of this paper is to develop and test a model to understand collaboration technology adoption that integrates UTAUT with key constructs from theories about collaboration technologies. We identify specific antecedents to UTAUT constructs by drawing from social presence theory [64], channel expansion theory [11] (a descendant of media richness theory [16]), and the task closure model [66], as well as a broad range of prior collaboration technology research. We test our model in two different studies conducted in Finland: the use of short message service (SMS) among working professionals and the use of a collaboration technology in an organization.",
"title": ""
},
{
"docid": "neg:1840348_4",
"text": "Lots of data from different domains is published as Linked Open Data (LOD). While there are quite a few browsers for such data, as well as intelligent tools for particular purposes, a versatile tool for deriving additional knowledge by mining the Web of Linked Data is still missing. In this system paper, we introduce the RapidMiner Linked Open Data extension. The extension hooks into the powerful data mining and analysis platform RapidMiner, and offers operators for accessing Linked Open Data in RapidMiner, allowing for using it in sophisticated data analysis workflows without the need for expert knowledge in SPARQL or RDF. The extension allows for autonomously exploring the Web of Data by following links, thereby discovering relevant datasets on the fly, as well as for integrating overlapping data found in different datasets. As an example, we show how statistical data from the World Bank on scientific publications, published as an RDF data cube, can be automatically linked to further datasets and analyzed using additional background knowledge from ten different LOD datasets.",
"title": ""
},
{
"docid": "neg:1840348_5",
"text": "This study examines the economic effect of information security breaches reported in newspapers on publicly traded US corporations. We find limited evidence of an overall negative stock market reaction to public announcements of information security breaches. However, further investigation reveals that the nature of the breach affects this result. We find a highly significant negative market reaction for information security breaches involving unauthorized access to confidential data, but no significant reaction when the breach does not involve confidential information. Thus, stock market participants appear to discriminate across types of breaches when assessing their economic impact on affected firms. These findings are consistent with the argument that the economic consequences of information security breaches vary according to the nature of the underlying assets affected by the breach.",
"title": ""
},
{
"docid": "neg:1840348_6",
"text": "We present a new end-to-end network architecture for facial expression recognition with an attention model. It focuses attention in the human face and uses a Gaussian space representation for expression recognition. We devise this architecture based on two fundamental complementary components: (1) facial image correction and attention and (2) facial expression representation and classification. The first component uses an encoder-decoder style network and a convolutional feature extractor that are pixel-wise multiplied to obtain a feature attention map. The second component is responsible for obtaining an embedded representation and classification of the facial expression. We propose a loss function that creates a Gaussian structure on the representation space. To demonstrate the proposed method, we create two larger and more comprehensive synthetic datasets using the traditional BU3DFE and CK+ facial datasets. We compared results with the PreActResNet18 baseline. Our experiments on these datasets have shown the superiority of our approach in recognizing facial expressions.",
"title": ""
},
{
"docid": "neg:1840348_7",
"text": "Scene text detection is challenging as the input may have different orientations, sizes, font styles, lighting conditions, perspective distortions and languages. This paper addresses the problem by designing a Rotational Region CNN (R2CNN). R2CNN includes a Text Region Proposal Network (Text-RPN) to estimate approximate text regions and a multitask refinement network to get the precise inclined box. Our work has the following features. First, we use a novel multi-task regression method to support arbitrarily-oriented scene text detection. Second, we introduce multiple ROIPoolings to address the scene text detection problem for the first time. Third, we use an inclined Non-Maximum Suppression (NMS) to post-process the detection candidates. Experiments show that our method outperforms the state-of-the-art on standard benchmarks: ICDAR 2013, ICDAR 2015, COCO-Text and MSRA-TD500.",
"title": ""
},
{
"docid": "neg:1840348_8",
"text": "Deep learning is quickly becoming the leading methodology for medical image analysis. Given a large medical archive, where each image is associated with a diagnosis, efficient pathology detectors or classifiers can be trained with virtually no expert knowledge about the target pathologies. However, deep learning algorithms, including the popular ConvNets, are black boxes: little is known about the local patterns analyzed by ConvNets to make a decision at the image level. A solution is proposed in this paper to create heatmaps showing which pixels in images play a role in the image-level predictions. In other words, a ConvNet trained for image-level classification can be used to detect lesions as well. A generalization of the backpropagation method is proposed in order to train ConvNets that produce high-quality heatmaps. The proposed solution is applied to diabetic retinopathy (DR) screening in a dataset of almost 90,000 fundus photographs from the 2015 Kaggle Diabetic Retinopathy competition and a private dataset of almost 110,000 photographs (e-ophtha). For the task of detecting referable DR, very good detection performance was achieved: Az=0.954 in Kaggle's dataset and Az=0.949 in e-ophtha. Performance was also evaluated at the image level and at the lesion level in the DiaretDB1 dataset, where four types of lesions are manually segmented: microaneurysms, hemorrhages, exudates and cotton-wool spots. For the task of detecting images containing these four lesion types, the proposed detector, which was trained to detect referable DR, outperforms recent algorithms trained to detect those lesions specifically, with pixel-level supervision. At the lesion level, the proposed detector outperforms heatmap generation algorithms for ConvNets. This detector is part of the Messidor® system for mobile eye pathology screening. Because it does not rely on expert knowledge or manual segmentation for detecting relevant patterns, the proposed solution is a promising image mining tool, which has the potential to discover new biomarkers in images.",
"title": ""
},
{
"docid": "neg:1840348_9",
"text": "Augmented reality (AR) is currently considered as having potential for pedagogical applications. However, in science education, research regarding AR-aided learning is in its infancy. To understand how AR could help science learning, this review paper firstly has identified two major approaches of utilizing AR technology in science education, which are named as image-based AR and locationbased AR. These approaches may result in different affordances for science learning. It is then found that students’ spatial ability, practical skills, and conceptual understanding are often afforded by image-based AR and location-based AR usually supports inquiry-based scientific activities. After examining what has been done in science learning with AR supports, several suggestions for future research are proposed. For example, more research is required to explore learning experience (e.g., motivation or cognitive load) and learner characteristics (e.g., spatial ability or perceived presence) involved in AR. Mixed methods of investigating learning process (e.g., a content analysis and a sequential analysis) and in-depth examination of user experience beyond usability (e.g., affective variables of esthetic pleasure or emotional fulfillment) should be considered. Combining image-based and location-based AR technology may bring new possibility for supporting science learning. Theories including mental models, spatial cognition, situated cognition, and social constructivist learning are suggested for the profitable uses of future AR research in science education.",
"title": ""
},
{
"docid": "neg:1840348_10",
"text": "This paper focuses on the problem of object detection when the annotation at training time is restricted to presence or absence of object instances at image level. We present a method based on features extracted from a Convolutional Neural Network and latent SVM that can represent and exploit the presence of multiple object instances in an image. Moreover, the detection of the object instances in the image is improved by incorporating in the learning procedure additional constraints that represent domain-specific knowledge such as symmetry and mutual exclusion. We show that the proposed method outperforms the state-of-the-art in weakly-supervised object detection and object classification on the Pascal VOC 2007 dataset.",
"title": ""
},
{
"docid": "neg:1840348_11",
"text": "Gated-Attention (GA) Reader has been effective for reading comprehension. GA Reader makes two assumptions: (1) a uni-directional attention that uses an input query to gate token encodings of a document; (2) encoding at the cloze position of an input query is considered for answer prediction. In this paper, we propose Collaborative Gating (CG) and Self-Belief Aggregation (SBA) to address the above assumptions respectively. In CG, we first use an input document to gate token encodings of an input query so that the influence of irrelevant query tokens may be reduced. Then the filtered query is used to gate token encodings of an document in a collaborative fashion. In SBA, we conjecture that query tokens other than the cloze token may be informative for answer prediction. We apply self-attention to link the cloze token with other tokens in a query so that the importance of query tokens with respect to the cloze position are weighted. Then their evidences are weighted, propagated and aggregated for better reading comprehension. Experiments show that our approaches advance the state-of-theart results in CNN, Daily Mail, and Who Did What public test sets.",
"title": ""
},
{
"docid": "neg:1840348_12",
"text": "This paper studies the problem of recognizing gender from full body images. This problem has not been addressed before, partly because of the variant nature of human bodies and clothing that can bring tough difficulties. However, gender recognition has high application potentials, e.g. security surveillance and customer statistics collection in restaurants, supermarkets, and even building entrances. In this paper, we build a system of recognizing gender from full body images, taken from frontal or back views. Our contributions are three-fold. First, to handle the variety of human body characteristics, we represent each image by a collection of patch features, which model different body parts and provide a set of clues for gender recognition. To combine the clues, we build an ensemble learning algorithm from those body parts to recognize gender from fixed view body images (frontal or back). Second, we relax the fixed view constraint and show the possibility to train a flexible classifier for mixed view images with the almost same accuracy as the fixed view case. At last, our approach is shown to be robust to small alignment errors, which is preferred in many applications.",
"title": ""
},
{
"docid": "neg:1840348_13",
"text": "LLC resonant converter is a nonlinear system, limiting the use of typical linear control methods. This paper proposed a new nonlinear control strategy, using load feedback linearization for an LLC resonant converter. Compared with the conventional PI controllers, the proposed feedback linearized control strategy can achieve better performance with elimination of the nonlinear characteristics. The LLC resonant converter's dynamic model is built based on fundamental harmonic approximation using extended describing function. By assuming the dynamics of resonant network is much faster than the output voltage and controller, the LLC resonant converter's model is simplified from seven-order state equations to two-order ones. Then, the feedback linearized control strategy is presented. A double loop PI controller is designed to regulate the modulation voltage. The switching frequency can be calculated as a function of the load, input voltage, and modulation voltage. Finally, a 200 W laboratory prototype is built to verify the proposed control scheme. The settling time of the LLC resonant converter is reduced from 38.8 to 20.4 ms under the positive load step using the proposed controller. Experimental results prove the superiority of the proposed feedback linearized controller over the conventional PI controller.",
"title": ""
},
{
"docid": "neg:1840348_14",
"text": "We propose a category-independent method to produce a bag of regions and rank them, such that top-ranked regions are likely to be good segmentations of different objects. Our key objectives are completeness and diversity: Every object should have at least one good proposed region, and a diverse set should be top-ranked. Our approach is to generate a set of segmentations by performing graph cuts based on a seed region and a learned affinity function. Then, the regions are ranked using structured learning based on various cues. Our experiments on the Berkeley Segmentation Data Set and Pascal VOC 2011 demonstrate our ability to find most objects within a small bag of proposed regions.",
"title": ""
},
{
"docid": "neg:1840348_15",
"text": "As they grapple with increasingly large data sets, biologists and computer scientists uncork new bottlenecks. B iologists are joining the big-data club. With the advent of high-throughput genomics, life scientists are starting to grapple with massive data sets, encountering challenges with handling, processing and moving information that were once the domain of astronomers and high-energy physicists 1. With every passing year, they turn more often to big data to probe everything from the regulation of genes and the evolution of genomes to why coastal algae bloom, what microbes dwell where in human body cavities and how the genetic make-up of different cancers influences how cancer patients fare 2. The European Bioinformatics Institute (EBI) in Hinxton, UK, part of the European Molecular Biology Laboratory and one of the world's largest biology-data repositories, currently stores 20 petabytes (1 petabyte is 10 15 bytes) of data and backups about genes, proteins and small molecules. Genomic data account for 2 peta-bytes of that, a number that more than doubles every year 3 (see 'Data explosion'). This data pile is just one-tenth the size of the data store at CERN, Europe's particle-physics laboratory near Geneva, Switzerland. Every year, particle-collision events in CERN's Large Hadron Collider generate around 15 petabytes of data — the equivalent of about 4 million high-definition feature-length films. But the EBI and institutes like it face similar data-wrangling challenges to those at CERN, says Ewan Birney, associate director of the EBI. He and his colleagues now regularly meet with organizations such as CERN and the European Space Agency (ESA) in Paris to swap lessons about data storage, analysis and sharing. All labs need to manipulate data to yield research answers. As prices drop for high-throughput instruments such as automated Extremely powerful computers are needed to help biologists to handle big-data traffic jams.",
"title": ""
},
{
"docid": "neg:1840348_16",
"text": "We present a neural semantic parser that translates natural language questions into executable SQL queries with two key ideas. First, we develop an encoder-decoder model, where the decoder uses a simple type system of SQL to constraint the output prediction, and propose a value-based loss when copying from input tokens. Second, we explore using the execution semantics of SQL to repair decoded programs that result in runtime error or return empty result. We propose two modelagnostics repair approaches, an ensemble model and a local program repair, and demonstrate their effectiveness over the original model. We evaluate our model on the WikiSQL dataset and show that our model achieves close to state-of-the-art results with lesser model complexity.",
"title": ""
},
{
"docid": "neg:1840348_17",
"text": "As the rapid growth of multi-modal data, hashing methods for cross-modal retrieval have received considerable attention. Deep-networks-based cross-modal hashing methods are appealing as they can integrate feature learning and hash coding into end-to-end trainable frameworks. However, it is still challenging to find content similarities between different modalities of data due to the heterogeneity gap. To further address this problem, we propose an adversarial hashing network with attention mechanism to enhance the measurement of content similarities by selectively focusing on informative parts of multi-modal data. The proposed new adversarial network, HashGAN, consists of three building blocks: 1) the feature learning module to obtain feature representations, 2) the generative attention module to generate an attention mask, which is used to obtain the attended (foreground) and the unattended (background) feature representations, 3) the discriminative hash coding module to learn hash functions that preserve the similarities between different modalities. In our framework, the generative module and the discriminative module are trained in an adversarial way: the generator is learned to make the discriminator cannot preserve the similarities of multi-modal data w.r.t. the background feature representations, while the discriminator aims to preserve the similarities of multimodal data w.r.t. both the foreground and the background feature representations. Extensive evaluations on several benchmark datasets demonstrate that the proposed HashGAN brings substantial improvements over other state-ofthe-art cross-modal hashing methods.",
"title": ""
},
{
"docid": "neg:1840348_18",
"text": "Underwater wireless sensor networks (UWSNs) will pave the way for a new era of underwater monitoring and actuation applications. The envisioned landscape of UWSN applications will help us learn more about our oceans, as well as about what lies beneath them. They are expected to change the current reality where no more than 5% of the volume of the oceans has been observed by humans. However, to enable large deployments of UWSNs, networking solutions toward efficient and reliable underwater data collection need to be investigated and proposed. In this context, the use of topology control algorithms for a suitable, autonomous, and on-the-fly organization of the UWSN topology might mitigate the undesired effects of underwater wireless communications and consequently improve the performance of networking services and protocols designed for UWSNs. This article presents and discusses the intrinsic properties, potentials, and current research challenges of topology control in underwater sensor networks. We propose to classify topology control algorithms based on the principal methodology used to change the network topology. They can be categorized in three major groups: power control, wireless interface mode management, and mobility assisted–based techniques. Using the proposed classification, we survey the current state of the art and present an in-depth discussion of topology control solutions designed for UWSNs.",
"title": ""
},
{
"docid": "neg:1840348_19",
"text": "We propose a novel approach for constructing effective treatment policies when the observed data is biased and lacks counterfactual information. Learning in settings where the observed data does not contain all possible outcomes for all treatments is difficult since the observed data is typically biased due to existing clinical guidelines. This is an important problem in the medical domain as collecting unbiased data is expensive and so learning from the wealth of existing biased data is a worthwhile task. Our approach separates the problem into two stages: first we reduce the bias by learning a representation map using a novel auto-encoder network – this allows us to control the trade-off between the bias-reduction and the information loss – and then we construct effective treatment policies on the transformed data using a novel feedforward network. Separation of the problem into these two stages creates an algorithm that can be adapted to the problem at hand – the bias-reduction step can be performed as a preprocessing step for other algorithms. We compare our algorithm against state-of-art algorithms on two semi-synthetic datasets and demonstrate that our algorithm achieves a significant improvement in performance.",
"title": ""
}
] |
1840349 | TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections | [
{
"docid": "pos:1840349_0",
"text": "Clustering plays an important role in many large-scale data analyses providing users with an overall understanding of their data. Nonetheless, clustering is not an easy task due to noisy features and outliers existing in the data, and thus the clustering results obtained from automatic algorithms often do not make clear sense. To remedy this problem, automatic clustering should be complemented with interactive visualization strategies. This paper proposes an interactive visual analytics system for document clustering, called iVisClustering, based on a widelyused topic modeling method, latent Dirichlet allocation (LDA). iVisClustering provides a summary of each cluster in terms of its most representative keywords and visualizes soft clustering results in parallel coordinates. The main view of the system provides a 2D plot that visualizes cluster similarities and the relation among data items with a graph-based representation. iVisClustering provides several other views, which contain useful interaction methods. With help of these visualization modules, we can interactively refine the clustering results in various ways.",
"title": ""
},
{
"docid": "pos:1840349_1",
"text": "Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely-used methods based on probabilistic modeling have drawbacks in terms of consistency from multiple runs and empirical convergence. Furthermore, due to the complicatedness in the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpuses such as InfoVis/VAST paper data set and product review data sets.",
"title": ""
}
] | [
{
"docid": "neg:1840349_0",
"text": "We investigate automatic classification of speculative language (‘hedging’), in biomedical text using weakly supervised machine learning. Our contributions include a precise description of the task with annotation guidelines, analysis and discussion, a probabilistic weakly supervised learning model, and experimental evaluation of the methods presented. We show that hedge classification is feasible using weakly supervised ML, and point toward avenues for future research.",
"title": ""
},
{
"docid": "neg:1840349_1",
"text": "In this paper, we evaluate the performance of Multicarrier-Low Density Spreading Multiple Access (MC-LDSMA) as a multiple access technique for mobile communication systems. The MC-LDSMA technique is compared with current multiple access techniques, OFDMA and SC-FDMA. The performance is evaluated in terms of cubic metric, block error rate, spectral efficiency and fairness. The aim is to investigate the expected gains of using MC-LDSMA in the uplink for next generation cellular systems. The simulation results of the link and system-level performance evaluation show that MC-LDSMA has significant performance improvements over SC-FDMA and OFDMA. It is shown that using MC-LDSMA can considerably reduce the required transmission power and increase the spectral efficiency and fairness among the users.",
"title": ""
},
{
"docid": "neg:1840349_2",
"text": "Average public feedback scores given to sellers have increased strongly over time in an online labor market. Changes in marketplace composition or improved seller performance cannot fully explain this trend. We propose that two factors inflated reputations: (1) it costs more to give bad feedback than good feedback and (2) this cost to raters is increasing in the cost to sellers from bad feedback. Together, (1) and (2) can lead to an equilibrium where feedback is always positive, regardless of performance. In response, the marketplace encouraged buyers to additionally give private feedback. This private feedback was substantially more candid and more predictive of future worker performance. When aggregates of private feedback about each job applicant were experimentally provided to employers as a private feedback score, employers used these scores when making screening and hiring decisions.",
"title": ""
},
{
"docid": "neg:1840349_3",
"text": "Although integrating multiple levels of data into an analysis can often yield better inferences about the phenomenon under study, traditional methodologies used to combine multiple levels of data are problematic. In this paper, we discuss several methodologies under the rubric of multil evel analysis. Multil evel methods, we argue, provide researchers, particularly researchers using comparative data, substantial leverage in overcoming the typical problems associated with either ignoring multiple levels of data, or problems associated with combining lower-level and higherlevel data (including overcoming implicit assumptions of fixed and constant effects). The paper discusses several variants of the multil evel model and provides an application of individual-level support for European integration using comparative politi cal data from Western Europe.",
"title": ""
},
{
"docid": "neg:1840349_4",
"text": "We explore story generation: creative systems that can build coherent and fluent passages of text about a topic. We collect a large dataset of 300K human-written stories paired with writing prompts from an online forum. Our dataset enables hierarchical story generation, where the model first generates a premise, and then transforms it into a passage of text. We gain further improvements with a novel form of model fusion that improves the relevance of the story to the prompt, and adding a new gated multi-scale self-attention mechanism to model long-range context. Experiments show large improvements over strong baselines on both automated and human evaluations. Human judges prefer stories generated by our approach to those from a strong non-hierarchical model by a factor of two to one.",
"title": ""
},
{
"docid": "neg:1840349_5",
"text": "The proposed system for the cloud based automatic system involves the automatic updating of the data to the lighting system. It also reads the data from the base station in case of emergencies. Zigbee devices are used for wireless transmission of the data from the base station to the light system thus enabling an efficient street lamp control system. Infrared sensor and dimming control circuit is used to track the movement of human in a specific range and dims/bright the street lights accordingly hence saving a large amount of power. In case of emergencies data is sent from the particular light or light system and effective measures are taken accordingly.",
"title": ""
},
{
"docid": "neg:1840349_6",
"text": "Cloud of Things (CoT) is a computing model that combines the widely popular cloud computing with Internet of Things (IoT). One of the major problems with CoT is the latency of accessing distant cloud resources from the devices, where the data is captured. To address this problem, paradigms such as fog computing and Cloudlets have been proposed to interpose another layer of computing between the clouds and devices. Such a three-layered cloud-fog-device computing architecture is touted as the most suitable approach for deploying many next generation ubiquitous computing applications. Programming applications to run on such a platform is quite challenging because disconnections between the different layers are bound to happen in a large-scale CoT system, where the devices can be mobile. This paper presents a programming language and system for a three-layered CoT system. We illustrate how our language and system addresses some of the key challenges in the three-layered CoT. A proof-of-concept prototype compiler and runtime have been implemented and several example applications are developed using it.",
"title": ""
},
{
"docid": "neg:1840349_7",
"text": "PURPOSE\nThis study aimed to prospectively analyze the outcomes of 304 feldspathic porcelain veneers prepared by the same operator, in 100 patients, that were in situ for up to 16 years.\n\n\nMATERIALS AND METHODS\nA total of 304 porcelain veneers on incisors, canines, and premolars in 100 patients completed by one prosthodontist between 1988 and 2003 were sequentially included. Preparations were designed with chamfer margins, incisal reduction, and palatal overlap. At least 80% of each preparation was in enamel. Feldspathic porcelain veneers from refractory dies were etched (hydrofluoric acid), silanated, and cemented (Vision 2, Mirage Dental Systems). Outcomes were expressed as percentages (success, survival, unknown, dead, repair, failure). The results were statistically analyzed using the chi-square test and Kaplan-Meier survival estimation. Statistical significance was set at P < .05.\n\n\nRESULTS\nThe cumulative survival for veneers was 96% +/- 1% at 5 to 6 years, 93% +/- 2% at 10 to 11 years, 91% +/- 3% at 12 to 13 years, and 73% +/- 16% at 15 to 16 years. The marked drop in survival between 13 and 16 years was the result of the death of 1 patient and the low number of veneers in that period. The cumulative survival was greater when different statistical methods were employed. Sixteen veneers in 14 patients failed. Failed veneers were associated with esthetics (31%), mechanical complications (31%), periodontal support (12.5%), loss of retention >2 (12.5%), caries (6%), and tooth fracture (6%). Statistically significantly fewer veneers survived as the time in situ increased.\n\n\nCONCLUSIONS\nFeldspathic porcelain veneers, when bonded to enamel substrate, offer a predictable long-term restoration with a low failure rate. The statistical methods used to calculate the cumulative survival can markedly affect the apparent outcome and thus should be clearly defined in outcome studies.",
"title": ""
},
{
"docid": "neg:1840349_8",
"text": "We present two methods for determining the sentiment expressed by a movie review. The semantic orientation of a review can be positive, negative, or neutral. We examine the effect of valence shifters on classifying the reviews. We examine three types of valence shifters: negations, intensifiers, and diminishers. Negations are used to reverse the semantic polarity of a particular term, while intensifiers and diminishers are used to increase and decrease, respectively, the degree to which a term is positive or negative. The first method classifies reviews based on the number of positive and negative terms they contain. We use the General Inquirer to identify positive and negative terms, as well as negation terms, intensifiers, and diminishers. We also use positive and negative terms from other sources, including a dictionary of synonym differences and a very large Web corpus. To compute corpus-based semantic orientation values of terms, we use their association scores with a small group of positive and negative terms. We show that extending the term-counting method with contextual valence shifters improves the accuracy of the classification. The second method uses a Machine Learning algorithm, Support Vector Machines. We start with unigram features and then add bigrams that consist of a valence shifter and another word. The accuracy of classification is very high, and the valence shifter bigrams slightly improve it. The features that contribute to the high accuracy are the words in the lists of positive and negative terms. Previous work focused on either the term-counting method or the Machine Learning method. We show that combining the two methods achieves better results than either method alone.",
"title": ""
},
{
"docid": "neg:1840349_9",
"text": "A new silicon controlled rectifier-based power-rail electrostatic discharge (ESD) clamp circuit was proposed with a novel trigger circuit that has very low leakage current in a small layout area for implementation. This circuit was successfully verified in a 40-nm CMOS process by using only low-voltage devices. The novel trigger circuit uses a diode-string based level-sensing ESD detection circuit, but not using MOS capacitor, which has very large leakage current. Moreover, the leakage current on the ESD detection circuit is further reduced, adding a diode in series with the trigger transistor. By combining these two techniques, the total silicon area of the power-rail ESD clamp circuit can be reduced three times, whereas the leakage current is three orders of magnitude smaller than that of the traditional design.",
"title": ""
},
{
"docid": "neg:1840349_10",
"text": "This paper introduces a method for optimizing the tiles of a quad-mesh. Given a quad-based surface, the goal is to generate a set of K quads whose instances can produce a tiled surface that approximates the input surface. A solution to the problem is a K-set tilable surface, which can lead to an effective cost reduction in the physical construction of the given surface. Rather than molding lots of different building blocks, a K-set tilable surface requires the construction of K prefabricated components only. To realize the K-set tilable surface, we use a cluster-optimize approach. First, we iteratively cluster and analyze: clusters of similar shapes are merged, while edge connections between the K quads on the target surface are analyzed to learn the induced flexibility of the K-set tilable surface. Then, we apply a non-linear optimization model with constraints that maintain the K quads connections and shapes, and show how quad-based surfaces are optimized into K-set tilable surfaces. Our algorithm is demonstrated on various surfaces, including some that mimic the exteriors of certain renowned building landmarks.",
"title": ""
},
{
"docid": "neg:1840349_11",
"text": "This paper presents a pure textile, capacitive pressure sensor designed for integration into clothing to measure pressure on human body. The applications fields cover all domains where a soft and bendable sensor with a high local resolution is needed, e.g. in rehabilitation, pressure-sore prevention or motion detection due to muscle activities. We developed several textile sensors with spatial resolution of 2 times 2 cm and an average error below 4 percent within the measurement range 0 to 10 N/cm2. Applied on the upper arm the textile pressure sensor determines the deflection of the forearm between 0 and 135 degrees due to the muscle bending.",
"title": ""
},
{
"docid": "neg:1840349_12",
"text": "Named entity recognition (NER) is a popular domain of natural language processing. For this reason, many tools exist to perform this task. Amongst other points, they differ in the processing method they rely upon, the entity types they can detect, the nature of the text they can handle, and their input/output formats. This makes it difficult for a user to select an appropriate NER tool for a specific situation. In this article, we try to answer this question in the context of biographic texts. For this matter, we first constitute a new corpus by annotating 247 Wikipedia articles. We then select 4 publicly available, well known and free for research NER tools for comparison: Stanford NER, Illinois NET, OpenCalais NER WS and Alias-i LingPipe. We apply them to our corpus, assess their performances and compare them. When considering overall performances, a clear hierarchy emerges: Stanford has the best results, followed by LingPipe, Illionois and OpenCalais. However, a more detailed evaluation performed relatively to entity types and article categories highlights the fact their performances are diversely influenced by those factors. This complementarity opens an interesting perspective regarding the combination of these individual tools in order to improve performance.",
"title": ""
},
{
"docid": "neg:1840349_13",
"text": "We present a piezoelectric-on-silicon Lorentz force magnetometer (LFM) based on a mechanically coupled array of clamped–clamped beam resonators for the detection of lateral ( $xy$ plane) magnetic fields with an extended operating bandwidth of 1.36 kHz. The proposed device exploits piezoelectric transduction to greatly enhance the electromechanical coupling efficiency, which benefits the device sensitivity. Coupling multiple clamped–clamped beams increases the area for piezoelectric transduction, which further increases the sensitivity. The reported device has the widest operating bandwidth among LFMs reported to date with comparable normalized sensitivity despite the quality factor being limited to 30 when operating at ambient pressure instead of vacuum as in most cases of existing LFMs.",
"title": ""
},
{
"docid": "neg:1840349_14",
"text": "Universal Dependencies (UD) provides a cross-linguistically uniform syntactic representation, with the aim of advancing multilingual applications of parsing and natural language understanding. Reddy et al. (2016) recently developed a semantic interface for (English) Stanford Dependencies, based on the lambda calculus. In this work, we introduce UDEPLAMBDA, a similar semantic interface for UD, which allows mapping natural language to logical forms in an almost language-independent framework. We evaluate our approach on semantic parsing for the task of question answering against Freebase. To facilitate multilingual evaluation, we provide German and Spanish translations of the WebQuestions and GraphQuestions datasets. Results show that UDEPLAMBDA outperforms strong baselines across languages and datasets. For English, it achieves the strongest result to date on GraphQuestions, with competitive results on WebQuestions.",
"title": ""
},
{
"docid": "neg:1840349_15",
"text": "This work discusses the regulation of the ball and plate system, the problemis to design a control laws which generates a voltage u for the servomotors to move the ball from the actual position to a desired one. The controllers are constructed by introducing nonlinear compensation terms into the traditional PD controller. In this paper, a complete physical system and controller design is explored from conception to modeling to testing and implementation. The stability of the control is presented. Experiment results are obtained via our prototype of the ball and plate system.",
"title": ""
},
{
"docid": "neg:1840349_16",
"text": "Thee KL divergence is the most commonly used measure for comparing query and document language models in the language modeling framework to ad hoc retrieval. Since KL is rank equivalent to a specific weighted geometric mean, we examine alternative weighted means for language-model comparison, as well as alternative divergence measures. The study includes analysis of the inverse document frequency (IDF) effect of the language-model comparison methods. Empirical evaluation, performed with different types of queries (short and verbose) and query-model induction approaches, shows that there are methods that often outperform the KL divergence in some settings.",
"title": ""
},
{
"docid": "neg:1840349_17",
"text": "Moore type II Entire Condyle fractures of the tibia plateau represent a rare and highly unstable fracture pattern that usually results from high impact traumas. Specific recommendations regarding the surgical treatment of these fractures are sparse. We present a series of Moore type II fractures treated by open reduction and internal fixation through a direct dorsal approach. Five patients (3 females, 2 males) with Entire Condyle fractures were retrospectively analyzed after a mean follow-up period of 39 months (range 12–61 months). Patient mean age at the time of operation was 36 years (range 26–43 years). Follow-up included clinical and radiological examination. Furthermore, all patient finished a SF36 and Lysholm knee score questionnaire. Average range of motion was 127/0/1° with all patients reaching full extension at the time of last follow up. Patients reached a mean Lysholm score of 81.2 points (range 61–100 points) and an average SF36 of 82.36 points (range 53.75–98.88 points). One patient sustained deep wound infection after elective implant removal 1 year after the initial surgery. Overall all patients were highly satisfied with the postoperative result. The direct dorsal approach to the tibial plateau represents an adequate method to enable direct fracture exposure, open reduction, and internal fixation in posterior shearing medial Entire Condyle fractures and is especially valuable when also the dorso-lateral plateau is depressed.",
"title": ""
},
{
"docid": "neg:1840349_18",
"text": "We report a unique MEMS magnetometer based on a disk shaped radial contour mode thin-film piezoelectric on silicon (TPoS) CMOS-compatible resonator. This is the first device of its kind that targets operation under atmospheric pressure conditions as opposed that existing Lorentz force MEMS magnetometers that depend on vacuum. We exploit the chosen vibration mode to enhance coupling to deliver a field sensitivity of 10.92 mV/T while operating at a resonant frequency of 6.27 MHz, despite of a sub-optimal mechanical quality (Q) factor of 697 under ambient conditions in air.",
"title": ""
},
{
"docid": "neg:1840349_19",
"text": "In this paper, we propose a novel representation learning framework, namely HIN2Vec, for heterogeneous information networks (HINs). The core of the proposed framework is a neural network model, also called HIN2Vec, designed to capture the rich semantics embedded in HINs by exploiting different types of relationships among nodes. Given a set of relationships specified in forms of meta-paths in an HIN, HIN2Vec carries out multiple prediction training tasks jointly based on a target set of relationships to learn latent vectors of nodes and meta-paths in the HIN. In addition to model design, several issues unique to HIN2Vec, including regularization of meta-path vectors, node type selection in negative sampling, and cycles in random walks, are examined. To validate our ideas, we learn latent vectors of nodes using four large-scale real HIN datasets, including Blogcatalog, Yelp, DBLP and U.S. Patents, and use them as features for multi-label node classification and link prediction applications on those networks. Empirical results show that HIN2Vec soundly outperforms the state-of-the-art representation learning models for network data, including DeepWalk, LINE, node2vec, PTE, HINE and ESim, by 6.6% to 23.8% of $micro$-$f_1$ in multi-label node classification and 5% to 70.8% of $MAP$ in link prediction.",
"title": ""
}
] |
1840350 | Blockchain And Its Applications | [
{
"docid": "pos:1840350_0",
"text": "The potential of blockchain technology has received attention in the area of FinTech — the combination of finance and technology. Blockchain technology was first introduced as the technology behind the Bitcoin decentralized virtual currency, but there is the expectation that its characteristics of accurate and irreversible data transfer in a decentralized P2P network could make other applications possible. Although a precise definition of blockchain technology has not yet been given, it is important to consider how to classify different blockchain systems in order to better understand their potential and limitations. The goal of this paper is to add to the discussion on blockchain technology by proposing a classification based on two dimensions external to the system: (1) existence of an authority (without an authority and under an authority) and (2) incentive to participate in the blockchain (market-based and non-market-based). The combination of these elements results in four types of blockchains. We define these dimensions and describe the characteristics of the blockchain systems belonging to each classification.",
"title": ""
},
{
"docid": "pos:1840350_1",
"text": "The purpose of this paper is to explore applications of blockchain technology related to the 4th Industrial Revolution (Industry 4.0) and to present an example where blockchain is employed to facilitate machine-to-machine (M2M) interactions and establish a M2M electricity market in the context of the chemical industry. The presented scenario includes two electricity producers and one electricity consumer trading with each other over a blockchain. The producers publish exchange offers of energy (in kWh) for currency (in USD) in a data stream. The consumer reads the offers, analyses them and attempts to satisfy its energy demand at a minimum cost. When an offer is accepted it is executed as an atomic exchange (multiple simultaneous transactions). Additionally, this paper describes and discusses the research and application landscape of blockchain technology in relation to the Industry 4.0. It concludes that this technology has significant under-researched potential to support and enhance the efficiency gains of the revolution and identifies areas for future research. Producer 2 • Issue energy • Post purchase offers (as atomic transactions) Consumer • Look through the posted offers • Choose cheapest and satisfy its own demand Blockchain Stream Published offers are visible here Offer sent",
"title": ""
}
] | [
{
"docid": "neg:1840350_0",
"text": "Imagery texts are usually organized as a hierarchy of several visual elements, i.e. characters, words, text lines and text blocks. Among these elements, character is the most basic one for various languages such as Western, Chinese, Japanese, mathematical expression and etc. It is natural and convenient to construct a common text detection engine based on character detectors. However, training character detectors requires a vast of location annotated characters, which are expensive to obtain. Actually, the existing real text datasets are mostly annotated in word or line level. To remedy this dilemma, we propose a weakly supervised framework that can utilize word annotations, either in tight quadrangles or the more loose bounding boxes, for character detector training. When applied in scene text detection, we are thus able to train a robust character detector by exploiting word annotations in the rich large-scale real scene text datasets, e.g. ICDAR15 [19] and COCO-text [39]. The character detector acts as a key role in the pipeline of our text detection engine. It achieves the state-of-the-art performance on several challenging scene text detection benchmarks. We also demonstrate the flexibility of our pipeline by various scenarios, including deformed text detection and math expression recognition.",
"title": ""
},
{
"docid": "neg:1840350_1",
"text": "Dirty data is a serious problem for businesses leading to incorrect decision making, inefficient daily operations, and ultimately wasting both time and money. Dirty data often arises when domain constraints and business rules, meant to preserve data consistency and accuracy, are enforced incompletely or not at all in application code. In this work, we propose a new data-driven tool that can be used within an organization’s data quality management process to suggest possible rules, and to identify conformant and non-conformant records. Data quality rules are known to be contextual, so we focus on the discovery of context-dependent rules. Specifically, we search for conditional functional dependencies (CFDs), that is, functional dependencies that hold only over a portion of the data. The output of our tool is a set of functional dependencies together with the context in which they hold (for example, a rule that states for CS graduate courses, the course number and term functionally determines the room and instructor). Since the input to our tool will likely be a dirty database, we also search for CFDs that almost hold. We return these rules together with the non-conformant records (as these are potentially dirty records). We present effective algorithms for discovering CFDs and dirty values in a data instance. Our discovery algorithm searches for minimal CFDs among the data values and prunes redundant candidates. No universal objective measures of data quality or data quality rules are known. Hence, to avoid returning an unnecessarily large number of CFDs and only those that are most interesting, we evaluate a set of interest metrics and present comparative results using real datasets. We also present an experimental study showing the scalability of our techniques.",
"title": ""
},
{
"docid": "neg:1840350_2",
"text": "The balance compensating techniques for asymmetric Marchand balun are presented in this letter. The amplitude and phase difference are characterized explicitly by S21 and S31, from which the factors responsible for the balance compensating are determined. Finally, two asymmetric Marchand baluns, which have normal and enhanced balance compensation, respectively, are designed and fabricated in a 0.18 μm CMOS technology for demonstration. The simulation and measurement results show that the proposed balance compensating techniques are valid in a very wide frequency range up to millimeter-wave (MMW) band.",
"title": ""
},
{
"docid": "neg:1840350_3",
"text": "Considering the rapid growth of China’s elderly rural population, establishing both an adequate and a financially sustainable rural pension system is a major challenge. Focusing on financial sustainability, this article defines this concept of financial sustainability before constructing sound actuarial models for China’s rural pension system. Based on these models and statistical data, the analysis finds that the rural pension funding gap should rise from 97.80 billion Yuan in 2014 to 3062.31 billion Yuan in 2049, which represents an annual growth rate of 10.34%. This implies that, as it stands, the rural pension system in China is not financially sustainable. Finally, the article explains how this problem could be fixed through policy recommendations based on recent international experiences.",
"title": ""
},
{
"docid": "neg:1840350_4",
"text": "This work intends to build a Game Mechanics Ontology based on the mechanics category presented in BoardGameGeek.com vis à vis the formal concepts from the MDA framework. The 51 concepts presented in BoardGameGeek (BGG) as game mechanics are analyzed and arranged in a systemic way in order to build a domain sub-ontology in which the root concept is the mechanics as defined in MDA. The relations between the terms were built from its available descriptions as well as from the authors’ previous experiences. Our purpose is to show that a set of terms commonly accepted by players can lead us to better understand how players perceive the games components that are closer to the designer. The ontology proposed in this paper is not exhaustive. The intent of this work is to supply a tool to game designers, scholars, and others that see game artifacts as study objects or are interested in creating games. However, although it can be used as a starting point for games construction or study, the proposed Game Mechanics Ontology should be seen as the seed of a domain ontology encompassing game mechanics in general.",
"title": ""
},
{
"docid": "neg:1840350_5",
"text": "Sorting is a key kernel in numerous big data application including database operations, graphs and text analytics. Due to low control overhead, parallel bitonic sorting networks are usually employed for hardware implementations to accelerate sorting. Although a typical implementation of merge sort network can lead to low latency and small memory usage, it suffers from low throughput due to the lack of parallelism in the final stage. We analyze a pipelined merge sort network, showing its theoretical limits in terms of latency, memory and, throughput. To increase the throughput, we propose a merge sort based hybrid design where the final few stages in the merge sort network are replaced with “folded” bitonic merge networks. In these “folded” networks, all the interconnection patterns are realized by streaming permutation networks (SPN). We present a theoretical analysis to quantify latency, memory and throughput of our proposed design. Performance evaluations are performed by experiments on Xilinx Virtex-7 FPGA with post place-androute results. We demonstrate that our implementation achieves a throughput close to 10 GBps, outperforming state-of-the-art implementation of sorting on the same hardware by 1.2x, while preserving lower latency and higher memory efficiency.",
"title": ""
},
{
"docid": "neg:1840350_6",
"text": "The lifetime of micro electro–thermo–mechanical actuators with complex electro–thermo–mechanical coupling mechanisms can be decreased significantly due to unexpected failure events. Even more serious is the fact that various failures are tightly coupled due to micro-size and multi-physics effects. Interrelation between performance and potential failures should be established to predict reliability of actuators and improve their design. Thus, a multiphysics modeling approach is proposed to evaluate such interactive effects of failure mechanisms on actuators, where potential failures are pre-analyzed via FMMEA (Failure Modes, Mechanisms, and Effects Analysis) tool for guiding the electro–thermo–mechanical-reliability modeling process. Peak values of temperature, thermal stresses/strains and tip deflection are estimated as indicators for various failure modes and factors (e.g. residual stresses, thermal fatigue, electrical overstress, plastic deformation and parameter variations). Compared with analytical solutions and experimental data, the obtained simulation results were found suitable for coupled performance and reliability analysis of micro actuators and assessment of their design.",
"title": ""
},
{
"docid": "neg:1840350_7",
"text": "This half-day hands-on studio will teach how to design and develop effective interfaces for head mounted and wrist worn wearable computers through the application of user-centered design principles. Attendees will learn gain the knowledge and tools needed to rapidly develop prototype applications, and also complete a hands-on design task. They will also learn good design guidelines for wearable systems and how to apply those guidelines. A variety of tools will be used that do not require any hardware or software experience, many of which are free and/or open source. Attendees will also be provided with material that they can use to continue their learning after the studio is over.",
"title": ""
},
{
"docid": "neg:1840350_8",
"text": "All humans will become presbyopic as part of the aging process where the eye losses the ability to focus at different depths. Progressive additive lenses (PALs) allow a person to focus on objects located at near versus far by combing lenses of different strengths within the same spectacle. However, it is unknown why some patients easily adapt to wearing these lenses while others struggle and complain of vertigo, swim, and nausea as well as experience difficulties with balance. Sixteen presbyopes (nine who adapted to PALs and seven who had tried but could not adapt) participated in this study. This research investigated vergence dynamics and its adaptation using a short-term motor learning experiment to asses the ability to adapt. Vergence dynamics were on average faster and the ability to change vergence dynamics was also greater for presbyopes who adapted to progressive lenses compared to those who could not. Data suggest that vergence dynamics and its adaptation may be used to predict which patients will easily adapt to progressive lenses and discern those who will have difficulty.",
"title": ""
},
{
"docid": "neg:1840350_9",
"text": "As more firms begin to collect (and seek value from) richer customer-level datasets, a focus on the emerging concept of customer-base analysis is becoming increasingly common and critical. Such analyses include forward-looking projections ranging from aggregate-level sales trajectories to individual-level conditional expectations (which, in turn, can be used to derive estimates of customer lifetime value). We provide an overview of a class of parsimonious models (called probability models) that are well-suited to meet these rising challenges. We first present a taxonomy that captures some of the key distinctions across different kinds of business settings and customer relationships, and identify some of the unique modeling and measurement issues that arise across them. We then provide deeper coverage of these modeling issues, first for noncontractual settings (i.e., situations in which customer “death” is unobservable), then contractual ones (i.e., situations in which customer “death” can be observed). We review recent literature in these areas, highlighting substantive insights that arise from the research as well as the methods used to capture them. We focus on practical applications that use appropriately chosen data summaries (such as recency and frequency) and rely on commonly available software packages (such as Microsoft Excel). n 2009 Direct Marketing Educational Foundation, Inc. Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840350_10",
"text": "The authors present full wave simulations and experimental results of propagation of electromagnetic waves in shallow seawaters. Transmitter and receiver antennas are ten-turns loops placed on the seabed. Some propagation frameworks are presented and simulated. Finally, simulation results are compared with experimental ones.",
"title": ""
},
{
"docid": "neg:1840350_11",
"text": "This paper presents an ultra-low-power event-driven analog-to-digital converter (ADC) with real-time QRS detection for wearable electrocardiogram (ECG) sensors in wireless body sensor network (WBSN) applications. Two QRS detection algorithms, pulse-triggered (PUT) and time-assisted PUT (t-PUT), are proposed based on the level-crossing events generated from the ADC. The PUT detector achieves 97.63% sensitivity and 97.33% positive prediction in simulation on the MIT-BIH Arrhythmia Database. The t-PUT improves the sensitivity and positive prediction to 97.76% and 98.59% respectively. Fabricated in 0.13 μm CMOS technology, the ADC with QRS detector consumes only 220 nW measured under 300 mV power supply, making it the first nanoWatt compact analog-to-information (A2I) converter with embedded QRS detector.",
"title": ""
},
{
"docid": "neg:1840350_12",
"text": "Traditional bullying has received considerable research but the emerging phenomenon of cyber-bullying much less so. Our study aims to investigate environmental and psychological factors associated with traditional and cyber-bullying. In a school-based 2-year prospective survey, information was collected on 1,344 children aged 10 including bullying behavior/experience, depression, anxiety, coping strategies, self-esteem, and psychopathology. Parents reported demographic data, general health, and attention-deficit hyperactivity disorder (ADHD) symptoms. These were investigated in relation to traditional and cyber-bullying perpetration and victimization at age 12. Male gender and depressive symptoms were associated with all types of bullying behavior and experience. Living with a single parent was associated with perpetration of traditional bullying while higher ADHD symptoms were associated with victimization from this. Lower academic achievement and lower self esteem were associated with cyber-bullying perpetration and victimization, and anxiety symptoms with cyber-bullying perpetration. After adjustment, previous bullying perpetration was associated with victimization from cyber-bullying but not other outcomes. Cyber-bullying has differences in predictors from traditional bullying and intervention programmes need to take these into consideration.",
"title": ""
},
{
"docid": "neg:1840350_13",
"text": "Warping is one of the basic image processing techniques. Directly applying existing monocular image warping techniques to stereoscopic images is problematic as it often introduces vertical disparities and damages the original disparity distribution. In this paper, we show that these problems can be solved by appropriately warping both the disparity map and the two images of a stereoscopic image. We accordingly develop a technique for extending existing image warping algorithms to stereoscopic images. This technique divides stereoscopic image warping into three steps. Our method first applies the user-specified warping to one of the two images. Our method then computes the target disparity map according to the user specified warping. The target disparity map is optimized to preserve the perceived 3D shape of image content after image warping. Our method finally warps the other image using a spatially-varying warping method guided by the target disparity map. Our experiments show that our technique enables existing warping methods to be effectively applied to stereoscopic images, ranging from parametric global warping to non-parametric spatially-varying warping.",
"title": ""
},
{
"docid": "neg:1840350_14",
"text": "Errors are prevalent in data sequences, such as GPS trajectories or sensor readings. Existing methods on cleaning sequential data employ a constraint on value changing speeds and perform constraint-based repairing. While such speed constraints are effective in identifying large spike errors, the small errors that do not significantly deviate from the truth and indeed satisfy the speed constraints can hardly be identified and repaired. To handle such small errors, in this paper, we propose a statistical based cleaning method. Rather than declaring a broad constraint of max/min speeds, we model the probability distribution of speed changes. The repairing problem is thus to maximize the likelihood of the sequence w.r.t. the probability of speed changes. We formalize the likelihood-based cleaning problem, show its NP-hardness, devise exact algorithms, and propose several approximate/heuristic methods to trade off effectiveness for efficiency. Experiments on real data sets (in various applications) demonstrate the superiority of our proposal.",
"title": ""
},
{
"docid": "neg:1840350_15",
"text": "In the recent past, several sampling-based algorithms have been proposed to compute trajectories that are collision-free and dynamically-feasible. However, the outputs of such algorithms are notoriously jagged. In this paper, by focusing on robots with car-like dynamics, we present a fast and simple heuristic algorithm, named Convex Elastic Smoothing (CES) algorithm, for trajectory smoothing and speed optimization. The CES algorithm is inspired by earlier work on elastic band planning and iteratively performs shape and speed optimization. The key feature of the algorithm is that both optimization problems can be solved via convex programming, making CES particularly fast. A range of numerical experiments show that the CES algorithm returns high-quality solutions in a matter of a few hundreds of milliseconds and hence appears amenable to a real-time implementation.",
"title": ""
},
{
"docid": "neg:1840350_16",
"text": "Customer retention is a major issue for various service-based organizations particularly telecom industry, wherein predictive models for observing the behavior of customers are one of the great instruments in customer retention process and inferring the future behavior of the customers. However, the performances of predictive models are greatly affected when the real-world data set is highly imbalanced. A data set is called imbalanced if the samples size from one class is very much smaller or larger than the other classes. The most commonly used technique is over/under sampling for handling the class-imbalance problem (CIP) in various domains. In this paper, we survey six well-known sampling techniques and compare the performances of these key techniques, i.e., mega-trend diffusion function (MTDF), synthetic minority oversampling technique, adaptive synthetic sampling approach, couples top-N reverse k-nearest neighbor, majority weighted minority oversampling technique, and immune centroids oversampling technique. Moreover, this paper also reveals the evaluation of four rules-generation algorithms (the learning from example module, version 2 (LEM2), covering, exhaustive, and genetic algorithms) using publicly available data sets. The empirical results demonstrate that the overall predictive performance of MTDF and rules-generation based on genetic algorithms performed the best as compared with the rest of the evaluated oversampling methods and rule-generation algorithms.",
"title": ""
},
{
"docid": "neg:1840350_17",
"text": "Increasing of head rise (HR) and decreasing of head loss (HL), simultaneously, are important purpose in the design of different types of fans. Therefore, multi-objective optimization process is more applicable for the design of such turbo machines. In the present study, multi-objective optimization of Forward-Curved (FC) blades centrifugal fans is performed at three steps. At the first step, Head rise (HR) and the Head loss (HL) in a set of FC centrifugal fan is numerically investigated using commercial software NUMECA. Two meta-models based on the evolved group method of data handling (GMDH) type neural networks are obtained, at the second step, for modeling of HR and HL with respect to geometrical design variables. Finally, using obtained polynomial neural networks, multi-objective genetic algorithms are used for Pareto based optimization of FC centrifugal fans considering two conflicting objectives, HR and HL. It is shown that some interesting and important relationships as useful optimal design principles involved in the performance of FC fans can be discovered by Pareto based multi-objective optimization of the obtained polynomial meta-models representing their HR and HL characteristics. Such important optimal principles would not have been obtained without the use of both GMDH type neural network modeling and the Pareto optimization approach.",
"title": ""
},
{
"docid": "neg:1840350_18",
"text": "The Hadoop Distributed File System (HDFS) is a distributed storage system that stores large-scale data sets reliably and streams those data sets to applications at high bandwidth. HDFS provides high performance, reliability and availability by replicating data, typically three copies of every data. The data in HDFS changes in popularity over time. To get better performance and higher disk utilization, the replication policy of HDFS should be elastic and adapt to data popularity. In this paper, we describe ERMS, an elastic replication management system for HDFS. ERMS provides an active/standby storage model for HDFS. It utilizes a complex event processing engine to distinguish real-time data types, and then dynamically increases extra replicas for hot data, cleans up these extra replicas when the data cool down, and uses erasure codes for cold data. ERMS also introduces a replica placement strategy for the extra replicas of hot data and erasure coding parities. The experiments show that ERMS effectively improves the reliability and performance of HDFS and reduce storage overhead.",
"title": ""
},
{
"docid": "neg:1840350_19",
"text": "One of the major challenges with electric shipboard power systems (SPS) is preserving the survivability of the system under fault situations. Some minor faults in SPS can result in catastrophic consequences. Therefore, it is essential to investigate available fault management techniques for SPS applications that can enhance SPS robustness and reliability. Many recent studies in this area take different approaches to address fault tolerance in SPSs. This paper provides an overview of the concepts and methodologies that are utilized to deal with faults in the electric SPS. First, a taxonomy of the types of faults and their sources in SPS is presented; then, the methods that are used to detect, identify, isolate, and manage faults are reviewed. Furthermore, common techniques for designing a fault management system in SPS are analyzed and compared. This paper also highlights several possible future research directions.",
"title": ""
}
] |
1840351 | A Knowledge-Grounded Neural Conversation Model | [
{
"docid": "pos:1840351_0",
"text": "In an end-to-end dialog system, the aim of dialog state tracking is to accurately estimate a compact representation of the current dialog status from a sequence of noisy observations produced by the speech recognition and the natural language understanding modules. This paper introduces a novel method of dialog state tracking based on the general paradigm of machine reading and proposes to solve it using an End-to-End Memory Network, MemN2N, a memory-enhanced neural network architecture. We evaluate the proposed approach on the second Dialog State Tracking Challenge (DSTC-2) dataset. The corpus has been converted for the occasion in order to frame the hidden state variable inference as a questionanswering task based on a sequence of utterances extracted from a dialog. We show that the proposed tracker gives encouraging results. Then, we propose to extend the DSTC-2 dataset and the definition of this dialog state task with specific reasoning capabilities like counting, list maintenance, yes-no question answering and indefinite knowledge management. Finally, we present encouraging results using our proposed MemN2N based tracking model.",
"title": ""
},
{
"docid": "pos:1840351_1",
"text": "Even when the role of a conversational agent is well known users persist in confronting them with Out-of-Domain input. This often results in inappropriate feedback, leaving the user unsatisfied. In this paper we explore the automatic creation/enrichment of conversational agents’ knowledge bases by taking advantage of natural language interactions present in the Web, such as movies subtitles. Thus, we introduce Filipe, a chatbot that answers users’ request by taking advantage of a corpus of turns obtained from movies subtitles (the Subtle corpus). Filipe is based on Say Something Smart, a tool responsible for indexing a corpus of turns and selecting the most appropriate answer, which we fully describe in this paper. Moreover, we show how this corpus of turns can help an existing conversational agent to answer Out-of-Domain interactions. A preliminary evaluation is also presented.",
"title": ""
},
{
"docid": "pos:1840351_2",
"text": "We describe a new class of learning models called memory networks. Memory networks reason with inference components combined with a long-term memory component; they learn how to use these jointly. The long-term memory can be read and written to, with the goal of using it for prediction. We investigate these models in the context of question answering (QA) where the long-term memory effectively acts as a (dynamic) knowledge base, and the output is a textual response. We evaluate them on a large-scale QA task, and a smaller, but more complex, toy task generated from a simulated world. In the latter, we show the reasoning power of such models by chaining multiple supporting sentences to answer questions that require understanding the intension of verbs.",
"title": ""
}
] | [
{
"docid": "neg:1840351_0",
"text": "We analyze the line simpli cation algorithm reported by Douglas and Peucker and show that its worst case is quadratic in n, the number of input points. Then we give a algorithm, based on path hulls, that uses the geometric structure of the problem to attain a worst-case running time proportional to n log 2 n, which is the best case of the Douglas algorithm. We give complete C code and compare the two algorithms theoretically, by operation counts, and practically, by machine timings.",
"title": ""
},
{
"docid": "neg:1840351_1",
"text": "For the last few years, the EC Commission has been reviewing its application of Article 82EC which prohibits the abuse of a dominant position on the Common Market. The review has resulted in a Communication from the EC Commission which for the first time sets out its enforcement priorities under Article 82EC. The review had been limited to the so-called ‘exclusionary’ abuses and excluded ‘exploitative’ abuses; the enforcement priorities of the EC Commission set out in the Guidance (2008) are also limited to ‘exclusionary’ abuses. This is, however, odd since the EC Commission expresses the objective of Article 82EC as enhancing consumer welfare: exploitative abuses can directly harm consumers unlike exclusionary abuses which can only indirectly harm consumers as the result of exclusion of competitors. This paper questions whether and under which circumstances exploitation can and/or should be found ‘abusive’. It argues that ‘exploitative’ abuse can and should be used as the test of anticompetitive effects on the market under an effects-based approach and thus conduct should only be found abusive if it is ‘exploitative’. Similarly, mere exploitation does not demonstrate harm to competition and without the latter, exploitation on its own should not be found abusive. December 2008",
"title": ""
},
{
"docid": "neg:1840351_2",
"text": "Single case studies led to the discovery and phenomenological description of Gelotophobia and its definition as the pathological fear of appearing to social partners as a ridiculous object (Titze 1995, 1996, 1997). The aim of the present study is to empirically examine the core assumptions about the fear of being laughed at in a sample comprising a total of 863 clinical and non-clinical participants. Discriminant function analysis yielded that gelotophobes can be separated from other shame-based neurotics, non-shamebased neurotics, and controls. Separation was best for statements specifically describing the gelotophobic symptomatology and less potent for more general questions describing socially avoidant behaviors. Factor analysis demonstrates that while Gelotophobia is composed of a set of correlated elements in homogenous samples, overall the concept is best conceptualized as unidimensional. Predicted and actual group membership converged well in a cross-classification (approximately 69% of correctly classified cases). Overall, it can be concluded that the fear of being laughed at varies tremendously among adults and might hold a key to understanding certain forms",
"title": ""
},
{
"docid": "neg:1840351_3",
"text": "Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios. We present Okutama-Action, a new video dataset for aerial view concurrent human action detection. It consists of 43 minute-long fully-annotated sequences with 12 action classes. Okutama-Action features many challenges missing in current datasets, including dynamic transition of actions, significant changes in scale and aspect ratio, abrupt camera movement, as well as multi-labeled actors. As a result, our dataset is more challenging than existing ones, and will help push the field forward to enable real-world applications.",
"title": ""
},
{
"docid": "neg:1840351_4",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstoLorg/aboutiterms.html. JSTOR's Terms and Conditions ofDse provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission. Operations Research is published by INFORMS. Please contact the publisher for further permissions regarding the use of this work. Publisher contact information may be obtained at http://www.jstor.org/jowllalslinforms.html.",
"title": ""
},
{
"docid": "neg:1840351_5",
"text": "Medical image fusion is the process of registering and combining multiple images from single or multiple imaging modalities to improve the imaging quality and reduce randomness and redundancy in order to increase the clinical applicability of medical images for diagnosis and assessment of medical problems. Multi-modal medical image fusion algorithms and devices have shown notable achievements in improving clinical accuracy of decisions based on medical images. This review article provides a factual listing of methods and summarizes the broad scientific challenges faced in the field of medical image fusion. We characterize the medical image fusion research based on (1) the widely used image fusion methods, (2) imaging modalities, and (3) imaging of organs that are under study. This review concludes that even though there exists several open ended technological and scientific challenges, the fusion of medical images has proved to be useful for advancing the clinical reliability of using medical imaging for medical diagnostics and analysis, and is a scientific discipline that has the potential to significantly grow in the coming years.",
"title": ""
},
{
"docid": "neg:1840351_6",
"text": "Numerous recommendation approaches are in use today. However, comparing their effectiveness is a challenging task because evaluation results are rarely reproducible. In this article, we examine the challenge of reproducibility in recommender-system research. We conduct experiments using Plista’s news recommender system, and Docear’s research-paper recommender system. The experiments show that there are large discrepancies in the effectiveness of identical recommendation approaches in only slightly different scenarios, as well as large discrepancies for slightly different approaches in identical scenarios. For example, in one news-recommendation scenario, the performance of a content-based filtering approach was twice as high as the second-best approach, while in another scenario the same content-based filtering approach was the worst performing approach. We found several determinants that may contribute to the large discrepancies observed in recommendation effectiveness. Determinants we examined include user characteristics (gender and age), datasets, weighting schemes, the time at which recommendations were shown, and user-model size. Some of the determinants have interdependencies. For instance, the optimal size of an algorithms’ user model depended on users’ age. Since minor variations in approaches and scenarios can lead to significant changes in a recommendation approach’s performance, ensuring reproducibility of experimental results is difficult. We discuss these findings and conclude that to ensure reproducibility, the recommender-system community needs to (1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments, (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research.",
"title": ""
},
{
"docid": "neg:1840351_7",
"text": "Many real-world datasets are comprised of different representations or views which often provide information complementary to each other. To integrate information from multiple views in the unsupervised setting, multiview clustering algorithms have been developed to cluster multiple views simultaneously to derive a solution which uncovers the common latent structure shared by multiple views. In this paper, we propose a novel NMFbased multi-view clustering algorithm by searching for a factorization that gives compatible clustering solutions across multiple views. The key idea is to formulate a joint matrix factorization process with the constraint that pushes clustering solution of each view towards a common consensus instead of fixing it directly. The main challenge is how to keep clustering solutions across different views meaningful and comparable. To tackle this challenge, we design a novel and effective normalization strategy inspired by the connection between NMF and PLSA. Experimental results on synthetic and several real datasets demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "neg:1840351_8",
"text": "Tasks like search-and-rescue and urban reconnaissance benefit from large numbers of robots working together, but high levels of autonomy are needed in order to reduce operator requirements to practical levels. Reducing the reliance of such systems on human operators presents a number of technical challenges including automatic task allocation, global state and map estimation, robot perception, path planning, communications, and human-robot interfaces. This paper describes our 14-robot team, designed to perform urban reconnaissance missions, that won the MAGIC 2010 competition. This paper describes a variety of autonomous systems which require minimal human effort to control a large number of autonomously exploring robots. Maintaining a consistent global map, essential for autonomous planning and for giving humans situational awareness, required the development of fast loop-closing, map optimization, and communications algorithms. Key to our approach was a decoupled centralized planning architecture that allowed individual robots to execute tasks myopically, but whose behavior was coordinated centrally. In this paper, we will describe technical contributions throughout our system that played a significant role in the performance of our system. We will also present results from our system both from the competition and from subsequent quantitative evaluations, pointing out areas in which the system performed well and where interesting research problems remain.",
"title": ""
},
{
"docid": "neg:1840351_9",
"text": "With the rapid development of the Internet, many types of websites have been developed. This variety of websites makes it necessary to adopt systemized evaluation criteria with a strong theoretical basis. This study proposes a set of evaluation criteria derived from an architectural perspective which has been used for over a 1000 years in the evaluation of buildings. The six evaluation criteria are internal reliability and external security for structural robustness, useful content and usable navigation for functional utility, and system interface and communication interface for aesthetic appeal. The impacts of the six criteria on user satisfaction and loyalty have been investigated through a large-scale survey. The study results indicate that the six criteria have different impacts on user satisfaction for different types of websites, which can be classified along two dimensions: users’ goals and users’ activity levels.",
"title": ""
},
{
"docid": "neg:1840351_10",
"text": "Big data is often mined using clustering algorithms. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a popular spatial clustering algorithm. However, it is computationally expensive and thus for clustering big data, parallel processing is required. The two prevalent paradigms for parallel processing are High-Performance Computing (HPC) based on Message Passing Interface (MPI) or Open Multi-Processing (OpenMP) and the newer big data frameworks such as Apache Spark or Hadoop. This report surveys for these two different paradigms publicly available implementations that aim at parallelizing DBSCAN and compares their performance. As a result, it is found that the big data implementations are not yet mature and in particular for skewed data, the implementation’s decomposition of the input data into parallel tasks has a huge influence on the performance in terms of running time.",
"title": ""
},
{
"docid": "neg:1840351_11",
"text": "BACKGROUND\nIncreasing evidence demonstrates that motor-skill memories improve across a night of sleep, and that non-rapid eye movement (NREM) sleep commonly plays a role in orchestrating these consolidation enhancements. Here we show the benefit of a daytime nap on motor memory consolidation and its relationship not simply with global sleep-stage measures, but unique characteristics of sleep spindles at regionally specific locations; mapping to the corresponding memory representation.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nTwo groups of subjects trained on a motor-skill task using their left hand - a paradigm known to result in overnight plastic changes in the contralateral, right motor cortex. Both groups trained in the morning and were tested 8 hr later, with one group obtaining a 60-90 minute intervening midday nap, while the other group remained awake. At testing, subjects that did not nap showed no significant performance improvement, yet those that did nap expressed a highly significant consolidation enhancement. Within the nap group, the amount of offline improvement showed a significant correlation with the global measure of stage-2 NREM sleep. However, topographical sleep spindle analysis revealed more precise correlations. Specifically, when spindle activity at the central electrode of the non-learning hemisphere (left) was subtracted from that in the learning hemisphere (right), representing the homeostatic difference following learning, strong positive relationships with offline memory improvement emerged-correlations that were not evident for either hemisphere alone.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThese results demonstrate that motor memories are dynamically facilitated across daytime naps, enhancements that are uniquely associated with electrophysiological events expressed at local, anatomically discrete locations of the brain.",
"title": ""
},
{
"docid": "neg:1840351_12",
"text": "This paper deals with mean-field Eshelby-based homogenization techniques for multi-phase composites and focuses on three subjects which in our opinion deserved more attention than they did in the existing literature. Firstly, for two-phase composites, that is when in a given representative volume element all the inclusions have the same material properties, aspect ratio and orientation, an interpolative double inclusion model gives perhaps the best predictions to date for a wide range of volume fractions and stiffness contrasts. Secondly, for multi-phase composites (including two-phase composites with non-aligned inclusions as a special case), direct homogenization schemes might lead to a non-symmetric overall stiffness tensor, while a two-step homogenization procedure gives physically acceptable results. Thirdly, a general procedure allows to formulate the thermo-elastic version of any homogenization model defined by its isothermal strain concentration tensors. For all three subjects, the theory is presented in detail and validated against experimental data or finite element results for numerous composite systems. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840351_13",
"text": "Big data storage and processing are considered as one of the main applications for cloud computing systems. Furthermore, the development of the Internet of Things (IoT) paradigm has advanced the research on Machine to Machine (M2M) communications and enabled novel tele-monitoring architectures for E-Health applications. However, there is a need for converging current decentralized cloud systems, general software for processing big data and IoT systems. The purpose of this paper is to analyze existing components and methods of securely integrating big data processing with cloud M2M systems based on Remote Telemetry Units (RTUs) and to propose a converged E-Health architecture built on Exalead CloudView, a search based application. Finally, we discuss the main findings of the proposed implementation and future directions.",
"title": ""
},
{
"docid": "neg:1840351_14",
"text": "BACKGROUND\nHypovitaminosis D and a low calcium intake contribute to increased parathyroid function in elderly persons. Calcium and vitamin D supplements reduce this secondary hyperparathyroidism, but whether such supplements reduce the risk of hip fractures among elderly people is not known.\n\n\nMETHODS\nWe studied the effects of supplementation with vitamin D3 (cholecalciferol) and calcium on the frequency of hip fractures and other nonvertebral fractures, identified radiologically, in 3270 healthy ambulatory women (mean [+/- SD] age, 84 +/- 6 years). Each day for 18 months, 1634 women received tricalcium phosphate (containing 1.2 g of elemental calcium) and 20 micrograms (800 IU) of vitamin D3, and 1636 women received a double placebo. We measured serial serum parathyroid hormone and 25-hydroxyvitamin D (25(OH)D) concentrations in 142 women and determined the femoral bone mineral density at base line and after 18 months in 56 women.\n\n\nRESULTS\nAmong the women who completed the 18-month study, the number of hip fractures was 43 percent lower (P = 0.043) and the total number of nonvertebral fractures was 32 percent lower (P = 0.015) among the women treated with vitamin D3 and calcium than among those who received placebo. The results of analyses according to active treatment and according to intention to treat were similar. In the vitamin D3-calcium group, the mean serum parathyroid hormone concentration had decreased by 44 percent from the base-line value at 18 months (P < 0.001) and the serum 25(OH)D concentration had increased by 162 percent over the base-line value (P < 0.001). The bone density of the proximal femur increased 2.7 percent in the vitamin D3-calcium group and decreased 4.6 percent in the placebo group (P < 0.001).\n\n\nCONCLUSIONS\nSupplementation with vitamin D3 and calcium reduces the risk of hip fractures and other nonvertebral fractures among elderly women.",
"title": ""
},
{
"docid": "neg:1840351_15",
"text": "Automated personality prediction from social media is gaining increasing attention in natural language processing and social sciences communities. However, due to high labeling costs and privacy issues, the few publicly available datasets are of limited size and low topic diversity. We address this problem by introducing a large-scale dataset derived from Reddit, a source so far overlooked for personality prediction. The dataset is labeled with Myers-Briggs Type Indicators (MBTI) and comes with a rich set of features for more than 9k users. We carry out a preliminary feature analysis, revealing marked differences between the MBTI dimensions and poles. Furthermore, we use the dataset to train and evaluate benchmark personality prediction models, achieving macro F1-scores between 67% and 82% on the individual dimensions and 82% accuracy for exact or one-off accurate type prediction. These results are encouraging and comparable with the reliability of standardized tests.",
"title": ""
},
{
"docid": "neg:1840351_16",
"text": "Optic ataxia is a high-order deficit in reaching to visual goals that occurs with posterior parietal cortex (PPC) lesions. It is a component of Balint's syndrome that also includes attentional and gaze disorders. Aspects of optic ataxia are misreaching in the contralesional visual field, difficulty preshaping the hand for grasping, and an inability to correct reaches online. Recent research in nonhuman primates (NHPs) suggests that many aspects of Balint's syndrome and optic ataxia are a result of damage to specific functional modules for reaching, saccades, grasp, attention, and state estimation. The deficits from large lesions in humans are probably composite effects from damage to combinations of these functional modules. Interactions between these modules, either within posterior parietal cortex or downstream within frontal cortex, may account for more complex behaviors such as hand-eye coordination and reach-to-grasp.",
"title": ""
},
{
"docid": "neg:1840351_17",
"text": "This paper presents a novel switch-mode power amplifier based on a multicell multilevel circuit topology. The total output voltage of the system is formed by series connection of several switching cells having a low dc-link voltage. Therefore, the cells can be realized using modern low-voltage high-current power MOSFET devices and the dc link can easily be buffered by rechargeable batteries or “super” capacitors to achieve very high amplifier peak output power levels (“flying-battery” concept). The cells are operated in a phase-shifted interleaved pulsewidth-modulation mode, which, in connection with the low partial voltage of each cell, reduces the filtering effort at the output of the total amplifier to a large extent and, consequently, improves the dynamic system behavior. The paper describes the operating principle of the system, analyzes the fundamental relationships being relevant for the circuit design, and gives guidelines for the dimensioning of the control circuit. Furthermore, simulation results as well as results of measurements taken from a laboratory setup are presented.",
"title": ""
},
{
"docid": "neg:1840351_18",
"text": "Convolution Neural Networks on Graphs are important generalization and extension of classical CNNs. While previous works generally assumed that the graph structures of samples are regular with unified dimensions, in many applications, they are highly diverse or even not well defined. Under some circumstances, e.g. chemical molecular data, clustering or coarsening for simplifying the graphs is hard to be justified chemically. In this paper, we propose a more general and flexible graph convolution network (EGCN) fed by batch of arbitrarily shaped data together with their evolving graph Laplacians trained in supervised fashion. Extensive experiments have been conducted to demonstrate the superior performance in terms of both the acceleration of parameter fitting and the significantly improved prediction accuracy on multiple graph-structured datasets.",
"title": ""
}
] |
1840352 | On the wafer/pad friction of chemical-mechanical planarization (CMP) processes - Part I: modeling and analysis | [
{
"docid": "pos:1840352_0",
"text": "A chemical-mechanical planarization (CMP) model based on lubrication theory is developed which accounts for pad compressibility, pad porosity and means of slurry delivery. Slurry ®lm thickness and velocity distributions between the pad and the wafer are predicted using the model. Two regimes of CMP operation are described: the lubrication regime (for ,40±70 mm slurry ®lm thickness) and the contact regime (for thinner ®lms). These regimes are identi®ed for two different pads using experimental copper CMP data and the predictions of the model. The removal rate correlation based on lubrication and mass transport theory agrees well with our experimental data in the lubrication regime. q 2000 Elsevier Science S.A. All rights reserved.",
"title": ""
},
{
"docid": "pos:1840352_1",
"text": "This paper presents the experimental validation and some application examples of the proposed wafer/pad friction models for linear chemical-mechanical planarization (CMP) processes in the companion paper. An experimental setup of a linear CMP polisher is first presented and some polishing processes are then designed for validation of the wafer/pad friction modeling and analysis. The friction torques of both the polisher spindle and roller systems are used to monitor variations of the friction coefficient in situ . Verification of the friction model under various process parameters is presented. Effects of pad conditioning and the wafer film topography on wafer/pad friction are experimentally demonstrated. Finally, several application examples are presented showing the use of the roller motor current measurement for real-time process monitoring and control.",
"title": ""
}
] | [
{
"docid": "neg:1840352_0",
"text": "This paper presents a new method to control multiple micro-scale magnetic agents operating in close proximity to each other for applications in microrobotics. Controlling multiple magnetic microrobots close to each other is difficult due to magnetic interactions between the agents, and here we seek to control those interactions for the creation of desired multi-agent formations. We use the fact that all magnetic agents orient to the global input magnetic field to modulate the local attraction-repulsion forces between nearby agents. Here we study these controlled interaction magnetic forces for agents at a water-air interface and devise two controllers to regulate the inter-agent spacing and heading of the set, for motion in two dimensions. Simulation and experimental demonstrations show the feasibility of the idea and its potential for the completion of complex tasks using teams of microrobots. Average tracking error of less than 73 μm and 14° is accomplished for the regulation of the inter-agent space and the pair heading angle, respectively, for identical disk-shape agents with nominal radius of 500 μm and thickness of 80 μm operating within several body-lengths of each other.",
"title": ""
},
{
"docid": "neg:1840352_1",
"text": "We present a method for object categorization in real-world scenes. Following a common consensus in the field, we do not assume that a figure-ground segmentation is available prior to recognition. However, in contrast to most standard approaches for object class recognition, our approach automatically segments the object as a result of the categorization. This combination of recognition and segmentation into one process is made possible by our use of an Implicit Shape Model, which integrates both capabilities into a common probabilistic framework. This model can be thought of as a non-parametric approach which can easily handle configurations of large numbers of object parts. In addition to the recognition and segmentation result, it also generates a per-pixel confidence measure specifying the area that supports a hypothesis and how much it can be trusted. We use this confidence to derive a natural extension of the approach to handle multiple objects in a scene and resolve ambiguities between overlapping hypotheses with an MDL-based criterion. In addition, we present an extensive evaluation of our method on a standard dataset for car detection and compare its performance to existing methods from the literature. Our results show that the proposed method outperforms previously published methods while needing one order of magnitude less training examples. Finally, we present results for articulated objects, which show that the proposed method can categorize and segment unfamiliar objects in different articulations and with widely varying texture patterns, even under significant partial occlusion.",
"title": ""
},
{
"docid": "neg:1840352_2",
"text": "Exploring and surveying the world has been an important goal of humankind for thousands of years. Entering the 21st century, the Earth has almost been fully digitally mapped. Widespread deployment of GIS (Geographic Information Systems) technology and a tremendous increase of both satellite and street-level mapping over the last decade enables the public to view large portions of the world using computer applications such as Bing Maps or Google Earth.",
"title": ""
},
{
"docid": "neg:1840352_3",
"text": "We address the highly challenging problem of real-time 3D hand tracking based on a monocular RGB-only sequence. Our tracking method combines a convolutional neural network with a kinematic 3D hand model, such that it generalizes well to unseen data, is robust to occlusions and varying camera viewpoints, and leads to anatomically plausible as well as temporally smooth hand motions. For training our CNN we propose a novel approach for the synthetic generation of training data that is based on a geometrically consistent image-to-image translation network. To be more specific, we use a neural network that translates synthetic images to \"real\" images, such that the so-generated images follow the same statistical distribution as real-world hand images. For training this translation network we combine an adversarial loss and a cycle-consistency loss with a geometric consistency loss in order to preserve geometric properties (such as hand pose) during translation. We demonstrate that our hand tracking system outperforms the current state-of-the-art on challenging RGB-only footage.",
"title": ""
},
{
"docid": "neg:1840352_4",
"text": "Mobile Edge Computing is an emerging technology that provides cloud and IT services within the close proximity of mobile subscribers. Traditional telecom network operators perform traffic control flow (forwarding and filtering of packets), but in Mobile Edge Computing, cloud servers are also deployed in each base station. Therefore, network operator has a great responsibility in serving mobile subscribers. Mobile Edge Computing platform reduces network latency by enabling computation and storage capacity at the edge network. It also enables application developers and content providers to serve context-aware services (such as collaborative computing) by using real time radio access network information. Mobile and Internet of Things devices perform computation offloading for compute intensive applications, such as image processing, mobile gaming, to leverage the Mobile Edge Computing services. In this paper, some of the promising real time Mobile Edge Computing application scenarios are discussed. Later on, a state-of-the-art research efforts on Mobile Edge Computing domain is presented. The paper also presents taxonomy of Mobile Edge Computing, describing key attributes. Finally, open research challenges in successful deployment of Mobile Edge Computing are identified and discussed.",
"title": ""
},
{
"docid": "neg:1840352_5",
"text": "This paper presents modeling approaches for step-up grid-connected photovoltaic systems intended to provide analytical tools for control design. The first approach is based on a voltage source representation of the bulk capacitor interacting with the grid-connected inverter, which is a common model for large DC buses and closed-loop inverters. The second approach considers the inverter of a double-stage PV system as a Norton equivalent, which is widely accepted for open-loop inverters. In addition, the paper considers both ideal and realistic models for the DC/DC converter that interacts with the PV module, providing four mathematical models to cover a wide range of applications. The models are expressed in state space representation to simplify its use in analysis and control design, and also to be easily implemented in simulation software, e.g., Matlab. The PV system was analyzed to demonstrate the non-minimum phase condition for all the models, which is an important aspect to select the control technique. Moreover, the system observability and controllability were studied to define design criteria. Finally, the analytical results are illustrated by means of detailed simulations, and the paper results are validated in an experimental test bench.",
"title": ""
},
{
"docid": "neg:1840352_6",
"text": "This article introduces the functional model of self-disclosure on social network sites by integrating a functional theory of self-disclosure and research on audience representations as situational cues for activating interpersonal goals. According to this model, people pursue strategic goals and disclose differently depending on social media affordances, and self-disclosure goals mediate between media affordances and disclosure intimacy. The results of the empirical study examining self-disclosure motivations and characteristics in Facebook status updates, wall posts, and private messaging lend support to this model and provide insights into the motivational drivers of self-disclosure on SNSs, helping to reconcile traditional views on self-disclosure and self-disclosing behaviors in new media contexts.",
"title": ""
},
{
"docid": "neg:1840352_7",
"text": "A wideband circularly polarized (CP) rectangular dielectric resonator antenna (DRA) is presented. An Archimedean spiral slot is used to excite the rectangular DRA for wideband CP radiation. The operating principle of the proposed antenna is based on using a broadband feeding structure to excite the DRA. A prototype of the proposed antenna is designed, fabricated, and measured. Good agreement between the simulated and measured results is attained, and a wide 3-dB axial-ratio (AR) bandwidth of 25.5% is achieved.",
"title": ""
},
{
"docid": "neg:1840352_8",
"text": "This communication investigates the application of metamaterial absorber (MA) to waveguide slot antenna to reduce its radar cross section (RCS). A novel ultra-thin MA is presented, and its absorbing characteristics and mechanism are analyzed. The PEC ground plane of waveguide slot antenna is covered by this MA. As compared with the slot antenna with a PEC ground plane, the simulation and experiment results demonstrate that the monostatic and bistatic RCS of waveguide slot antenna are reduced significantly, and the performance of antenna is preserved simultaneously.",
"title": ""
},
{
"docid": "neg:1840352_9",
"text": "The VISION (video indexing for searching over networks) digital video library system has been developed in our laboratory as a testbed for evaluating automatic and comprehensive mechanisms for video archive creation and content-based search, ®ltering and retrieval of video over local and wide area networks. In order to provide access to video footage within seconds of broadcast, we have developed a new pipelined digital video processing architecture which is capable of digitizing, processing, indexing and compressing video in real time on an inexpensive general purpose computer. These videos were automatically partitioned into short scenes using video, audio and closed-caption information. The resulting scenes are indexed based on their captions and stored in a multimedia database. A clientserver-based graphical user interface was developed to enable users to remotely search this archive and view selected video segments over networks of dierent bandwidths. Additionally, VISION classi®es the incoming videos with respect to a taxonomy of categories and will selectively send users videos which match their individual pro®les. # 1999 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840352_10",
"text": "This paper proposes a semantics for incorporation that does not require the incorporated nominal to form a syntactic or morphological unit with the verb. Such a semantics is needed for languages like Hindi where semantic intuitions suggest the existence of incorporation but the evidence for syntactic fusion is not compelling. A lexical alternation between regular transitive and incorporating transitive verbs is proposed to derive the particular features of Hindi incorporation. The proposed semantics derives existential force without positing existential closure over the incorporated nominal. It also builds in modality into the meaning of the incorporating verb. This proposal is compared to two other recent proposals for the interpretation of incorporated arguments. The cross-linguistic implications of the analysis developed on the basis of Hindi are also discussed. 1. Identifying Incorporation The primary identification of the phenomenon known as noun incorporation is based on morphological and syntactic evidence about the shape and position of the nominal element involved. Consider the Inuit example in (1a) as well as the more familiar example of English compounding in (1b): 1a. Angunguaq eqalut-tur-p-u-q West Greenlandic -Inuit A-ABS salmon-eat-IND-[-tr]-3S Van Geenhoven (1998) “Angunguaq ate salmon.” b. Mary went apple-picking. The thematic object in (1a) occurs inside the verbal complex, and this affects transitivity. The verb has intransitive marking and the subject has absolutive case instead of the expected ergative. The nominal itself is a bare stem. There is no determiner, case marking, plurality or modification. In other words, an incorporated nominal is an N, not a DP or an NP. Similar comments apply to the English compound in (1b), though it should be noted that English does not have [V N+V] compounds. Though the reasons for this are not particularly well-understood at this time, my purpose in introducing English compounds here is for expository purposes only. A somewhat less obvious case of noun incorporation is attested in Niuean, discussed by Massam (2001). Niuean is an SVO language with obligatory V fronting. Massam notes that in addition to expect VSO order, there also exist sentences with VOS order in Niuean: 1 There can be external modifiers with (a limited set of) determiners, case marking etc. in what is known as the phenomenon of ‘doubling’.",
"title": ""
},
{
"docid": "neg:1840352_11",
"text": "Plants are a tremendous source for the discovery of new products of medicinal value for drug development. Today several distinct chemicals derived from plants are important drugs currently used in one or more countries in the world. Many of the drugs sold today are simple synthetic modifications or copies of the naturally obtained substances. The evolving commercial importance of secondary metabolites has in recent years resulted in a great interest in secondary metabolism, particularly in the possibility of altering the production of bioactive plant metabolites by means of tissue culture technology. Plant cell culture technologies were introduced at the end of the 1960’s as a possible tool for both studying and producing plant secondary metabolites. Different strategies, using an in vitro system, have been extensively studied to improve the production of plant chemicals. The focus of the present review is the application of tissue culture technology for the production of some important plant pharmaceuticals. Also, we describe the results of in vitro cultures and production of some important secondary metabolites obtained in our laboratory.",
"title": ""
},
{
"docid": "neg:1840352_12",
"text": "CTCF and BORIS (CTCFL), two paralogous mammalian proteins sharing nearly identical DNA binding domains, are thought to function in a mutually exclusive manner in DNA binding and transcriptional regulation. Here we show that these two proteins co-occupy a specific subset of regulatory elements consisting of clustered CTCF binding motifs (termed 2xCTSes). BORIS occupancy at 2xCTSes is largely invariant in BORIS-positive cancer cells, with the genomic pattern recapitulating the germline-specific BORIS binding to chromatin. In contrast to the single-motif CTCF target sites (1xCTSes), the 2xCTS elements are preferentially found at active promoters and enhancers, both in cancer and germ cells. 2xCTSes are also enriched in genomic regions that escape histone to protamine replacement in human and mouse sperm. Depletion of the BORIS gene leads to altered transcription of a large number of genes and the differentiation of K562 cells, while the ectopic expression of this CTCF paralog leads to specific changes in transcription in MCF7 cells. We discover two functionally and structurally different classes of CTCF binding regions, 2xCTSes and 1xCTSes, revealed by their predisposition to bind BORIS. We propose that 2xCTSes play key roles in the transcriptional program of cancer and germ cells.",
"title": ""
},
{
"docid": "neg:1840352_13",
"text": "Mechanical valves used for aortic valve replacement (AVR) continue to be associated with bleeding risks because of anticoagulation therapy, while bioprosthetic valves are at risk of structural valve deterioration requiring reoperation. This risk/benefit ratio of mechanical and bioprosthetic valves has led American and European guidelines on valvular heart disease to be consistent in recommending the use of mechanical prostheses in patients younger than 60 years of age. Despite these recommendations, the use of bioprosthetic valves has significantly increased over the last decades in all age groups. A systematic review of manuscripts applying propensity-matching or multivariable analysis to compare the usage of mechanical vs. bioprosthetic valves found either similar outcomes between the two types of valves or favourable outcomes with mechanical prostheses, particularly in younger patients. The risk/benefit ratio and choice of valves will be impacted by developments in valve designs, anticoagulation therapy, reducing the required international normalized ratio, and transcatheter and minimally invasive procedures. However, there is currently no evidence to support lowering the age threshold for implanting a bioprosthesis. Physicians in the Heart Team and patients should be cautious in pursuing more bioprosthetic valve use until its benefit is clearly proven in middle-aged patients.",
"title": ""
},
{
"docid": "neg:1840352_14",
"text": "In this paper, we present a method for cup boundary detection from monocular colour fundus image to help quantify cup changes. The method is based on anatomical evidence such as vessel bends at cup boundary, considered relevant by glaucoma experts. Vessels are modeled and detected in a curvature space to better handle inter-image variations. Bends in a vessel are robustly detected using a region of support concept, which automatically selects the right scale for analysis. A reliable subset called r-bends is derived using a multi-stage strategy and a local splinetting is used to obtain the desired cup boundary. The method has been successfully tested on 133 images comprising 32 normal and 101 glaucomatous images against three glaucoma experts. The proposed method shows high sensitivity in cup to disk ratio-based glaucoma detection and local assessment of the detected cup boundary shows good consensus with the expert markings.",
"title": ""
},
{
"docid": "neg:1840352_15",
"text": "During natural disasters or crises, users on social media tend to easily believe contents of postings related to the events, and retweet the postings with hoping them to be reached to many other users. Unfortunately, there are malicious users who understand the tendency and post misinformation such as spam and fake messages with expecting wider propagation. To resolve the problem, in this paper we conduct a case study of 2013 Moore Tornado and Hurricane Sandy. Concretely, we (i) understand behaviors of these malicious users, (ii) analyze properties of spam, fake and legitimate messages, (iii) propose flat and hierarchical classification approaches, and (iv) detect both fake and spam messages with even distinguishing between them. Our experimental results show that our proposed approaches identify spam and fake messages with 96.43% accuracy and 0.961 F-measure.",
"title": ""
},
{
"docid": "neg:1840352_16",
"text": "Maxim Gumin's WaveFunctionCollapse (WFC) algorithm is an example-driven image generation algorithm emerging from the craft practice of procedural content generation. In WFC, new images are generated in the style of given examples by ensuring every local window of the output occurs somewhere in the input. Operationally, WFC implements a non-backtracking, greedy search method. This paper examines WFC as an instance of constraint solving methods. We trace WFC's explosive influence on the technical artist community, explain its operation in terms of ideas from the constraint solving literature, and probe its strengths by means of a surrogate implementation using answer set programming.",
"title": ""
},
{
"docid": "neg:1840352_17",
"text": "Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low-resolution of imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which sophisticatedly combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness.",
"title": ""
},
{
"docid": "neg:1840352_18",
"text": "BACKGROUND\nDo peripersonal space for acting on objects and interpersonal space for interacting with con-specifics share common mechanisms and reflect the social valence of stimuli? To answer this question, we investigated whether these spaces refer to a similar or different physical distance.\n\n\nMETHODOLOGY\nParticipants provided reachability-distance (for potential action) and comfort-distance (for social processing) judgments towards human and non-human virtual stimuli while standing still (passive) or walking toward stimuli (active).\n\n\nPRINCIPAL FINDINGS\nComfort-distance was larger than other conditions when participants were passive, but reachability and comfort distances were similar when participants were active. Both spaces were modulated by the social valence of stimuli (reduction with virtual females vs males, expansion with cylinder vs robot) and the gender of participants.\n\n\nCONCLUSIONS\nThese findings reveal that peripersonal reaching and interpersonal comfort spaces share a common motor nature and are sensitive, at different degrees, to social modulation. Therefore, social processing seems embodied and grounded in the body acting in space.",
"title": ""
},
{
"docid": "neg:1840352_19",
"text": "China’s New Silk Road initiative is a multistate commercial project as grandiose as it is ambitious. Comprised of an overland economic “belt” and a maritime transit component, it envisages the development of a trade network traversing numerous countries and continents. Major investments in infrastructure are to establish new commercial hubs along the route, linking regions together via railroads, ports, energy transit systems, and technology. A relatively novel concept introduced by China’s President Xi Jinping in 2013, several projects related to the New Silk Road initiative—also called “One Belt, One Road” (OBOR, or B&R)—are being planned, are under construction, or have been recently completed. The New Silk Road is a fluid concept in its formative stages: it encompasses a variety of projects and is all-inclusive in terms of countries welcomed to participate. For these reasons, it has been labeled an abstract or visionary project. However, those in the region can attest that the New Silk Road is a reality, backed by Chinese hard currency. Thus, while Washington continues to deliberate on an overarching policy toward Asia, Beijing is making inroads—literally and figuratively— across the region and beyond.",
"title": ""
}
] |
1840353 | Accurate Monocular Visual-inertial SLAM using a Map-assisted EKF Approach | [
{
"docid": "pos:1840353_0",
"text": "We propose a novel direct visual-inertial odometry method for stereo cameras. Camera pose, velocity and IMU biases are simultaneously estimated by minimizing a combined photometric and inertial energy functional. This allows us to exploit the complementary nature of vision and inertial data. At the same time, and in contrast to all existing visual-inertial methods, our approach is fully direct: geometry is estimated in the form of semi-dense depth maps instead of manually designed sparse keypoints. Depth information is obtained both from static stereo - relating the fixed-baseline images of the stereo camera - and temporal stereo - relating images from the same camera, taken at different points in time. We show that our method outperforms not only vision-only or loosely coupled approaches, but also can achieve more accurate results than state-of-the-art keypoint-based methods on different datasets, including rapid motion and significant illumination changes. In addition, our method provides high-fidelity semi-dense, metric reconstructions of the environment, and runs in real-time on a CPU.",
"title": ""
},
{
"docid": "pos:1840353_1",
"text": "Many popular problems in robotics and computer vision including various types of simultaneous localization and mapping (SLAM) or bundle adjustment (BA) can be phrased as least squares optimization of an error function that can be represented by a graph. This paper describes the general structure of such problems and presents g2o, an open-source C++ framework for optimizing graph-based nonlinear error functions. Our system has been designed to be easily extensible to a wide range of problems and a new problem typically can be specified in a few lines of code. The current implementation provides solutions to several variants of SLAM and BA. We provide evaluations on a wide range of real-world and simulated datasets. The results demonstrate that while being general g2o offers a performance comparable to implementations of state-of-the-art approaches for the specific problems.",
"title": ""
}
] | [
{
"docid": "neg:1840353_0",
"text": "The design and measured results of a 2 times 2 microstrip line fed U-slot rectangular antenna array are presented. The U-slot patches and the feeding network are placed on the same layer, resulting in a very simple structure. The advantage of the microstrip line fed U-slot patch is that it is easy to form the array. An impedance bandwidth (VSWR < 2) of 18% ranging from 5.65 GHz to 6.78 GHz is achieved. The radiation performance including radiation pattern, cross polarization, and gain is also satisfactory within this bandwidth. The measured peak gain of the array is 11.5 dBi. The agreement between simulated results and the measurement ones is good. The 2 times 2 array may be used as a module to form larger array.",
"title": ""
},
{
"docid": "neg:1840353_1",
"text": "The proliferation of distributed energy resources has prompted interest in the expansion of DC power systems. One critical technological limitation that hinders this expansion is the absence of high step-down and high step-up DC converters for interconnecting DC systems. This work attempts to address the latter of these limitations. This paper presents a new transformerless high boost DC-DC converter intended for use as an interconnect between DC systems. With a conversion ratio of 1:10, the converter offers significantly higher boost ratio than the conventional non-isolated boost converter. It is designed to operate at medium to high voltage (>; 1kV), and provides high voltage dc/dc gain (>;5). Based on a current fed resonant topology, the design is well matched to available IGBT switch technology that enables use of relatively high switching frequencies yet accommodates the IGBTs inability to provide reverse blocking functionality. An advanced steady state model suitable for analysis of this converter is presented together with an experimental evaluation of the converter.",
"title": ""
},
{
"docid": "neg:1840353_2",
"text": "AR systems pose potential security concerns that should be addressed before the systems become widespread.",
"title": ""
},
{
"docid": "neg:1840353_3",
"text": "Perfidy is the impersonation of civilians during armed conflict. It is generally outlawed by the laws of war such as the Geneva Conventions as its practice makes wars more dangerous for civilians. Cyber perfidy can be defined as malicious software or hardware masquerading as ordinary civilian software or hardware. We argue that it is also banned by the laws of war in cases where such cyber infrastructure is essential to normal civilian activity. This includes tampering with critical parts of operating systems and security software. We discuss possible targets of cyber perfidy, possible objections to the notion, and possible steps towards international agreements about it. This paper appeared in the Routledge Handbook of War and Ethics as chapter 29, ed. N. Evans, 2013.",
"title": ""
},
{
"docid": "neg:1840353_4",
"text": "Design of self-adaptive software-intensive Cyber-Physical Systems (siCPS) operating in dynamic environments is a significant challenge when a sufficient level of dependability is required. This stems partly from the fact that the concerns of selfadaptivity and dependability are to an extent contradictory. In this paper, we introduce IRM-SA (Invariant Refinement Method for Self-Adaptation) – a design method and associated formally grounded model targeting siCPS – that addresses self-adaptivity and supports dependability by providing traceability between system requirements, distinct situations in the environment, and predefined configurations of system architecture. Additionally, IRM-SA allows for architecture self-adaptation at runtime and integrates the mechanism of predictive monitoring that deals with operational uncertainty. As a proof of concept, it was implemented in DEECo, a component framework that is based on dynamic ensembles of components. Furthermore, its feasibility was evaluated in experimental settings assuming decentralized system operation.",
"title": ""
},
{
"docid": "neg:1840353_5",
"text": "Rearrangement of immunoglobulin heavy-chain variable (VH) gene segments has been suggested to be regulated by interleukin 7 signaling in pro–B cells. However, the genetic evidence for this recombination pathway has been challenged. Furthermore, no molecular components that directly control VH gene rearrangement have been elucidated. Using mice deficient in the interleukin 7–activated transcription factor STAT5, we demonstrate here that STAT5 regulated germline transcription, histone acetylation and DNA recombination of distal VH gene segments. STAT5 associated with VH gene segments in vivo and was recruited as a coactivator with the transcription factor Oct-1. STAT5 did not affect the nuclear repositioning or compaction of the immunoglobulin heavy-chain locus. Therefore, STAT5 functions at a distinct step in regulating distal VH recombination in relation to the transcription factor Pax5 and histone methyltransferase Ezh2.",
"title": ""
},
{
"docid": "neg:1840353_6",
"text": "Company disclosures greatly aid in the process of financial decision-making; therefore, they are consulted by financial investors and automated traders before exercising ownership in stocks. While humans are usually able to correctly interpret the content, the same is rarely true of computerized decision support systems, which struggle with the complexity and ambiguity of natural language. A possible remedy is represented by deep learning, which overcomes several shortcomings of traditional methods of text mining. For instance, recurrent neural networks, such as long shortterm memories, employ hierarchical structures, together with a large number of hidden layers, to automatically extract features from ordered sequences of words and capture highly non-linear relationships such as context-dependent meanings. However, deep learning has only recently started to receive traction, possibly because its performance is largely untested. Hence, this paper studies the use of deep neural networks for financial decision support. We additionally experiment with transfer learning, in which we pre-train the network on a different corpus with a length of 139.1 million words. Our results reveal a higher directional accuracy as compared to traditional machine learning when predicting stock price movements in response ∗Corresponding author. Mail: mathias.kraus@is.uni-freiburg.de; Tel: +49 761 203 2395; Fax: +49 761 203 2416. Email addresses: mathias.kraus@is.uni-freiburg.de (Mathias Kraus), sfeuerriegel@ethz.ch (Stefan Feuerriegel) Preprint submitted to Decision Support Systems July 6, 2018 ar X iv :1 71 0. 03 95 4v 1 [ cs .C L ] 1 1 O ct 2 01 7 to financial disclosures. Our work thereby helps to highlight the business value of deep learning and provides recommendations to practitioners and executives.",
"title": ""
},
{
"docid": "neg:1840353_7",
"text": "Recent research on deep convolutional neural networks (CNNs) has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple CNN architectures that achieve that accuracy level. With equivalent accuracy, smaller CNN architectures offer at least three advantages: (1) Smaller CNNs require less communication across servers during distributed training. (2) Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small CNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, we are able to compress SqueezeNet to less than 0.5MB (510× smaller than AlexNet). The SqueezeNet architecture is available for download here: https://github.com/DeepScale/SqueezeNet",
"title": ""
},
{
"docid": "neg:1840353_8",
"text": "This investigation compared the effect of high-volume (VOL) versus high-intensity (INT) resistance training on stimulating changes in muscle size and strength in resistance-trained men. Following a 2-week preparatory phase, participants were randomly assigned to either a high-volume (VOL; n = 14, 4 × 10-12 repetitions with ~70% of one repetition maximum [1RM], 1-min rest intervals) or a high-intensity (INT; n = 15, 4 × 3-5 repetitions with ~90% of 1RM, 3-min rest intervals) training group for 8 weeks. Pre- and posttraining assessments included lean tissue mass via dual energy x-ray absorptiometry, muscle cross-sectional area and thickness of the vastus lateralis (VL), rectus femoris (RF), pectoralis major, and triceps brachii muscles via ultrasound images, and 1RM strength in the back squat and bench press (BP) exercises. Blood samples were collected at baseline, immediately post, 30 min post, and 60 min postexercise at week 3 (WK3) and week 10 (WK10) to assess the serum testosterone, growth hormone (GH), insulin-like growth factor-1 (IGF1), cortisol, and insulin concentrations. Compared to VOL, greater improvements (P < 0.05) in lean arm mass (5.2 ± 2.9% vs. 2.2 ± 5.6%) and 1RM BP (14.8 ± 9.7% vs. 6.9 ± 9.0%) were observed for INT. Compared to INT, area under the curve analysis revealed greater (P < 0.05) GH and cortisol responses for VOL at WK3 and cortisol only at WK10. Compared to WK3, the GH and cortisol responses were attenuated (P < 0.05) for VOL at WK10, while the IGF1 response was reduced (P < 0.05) for INT. It appears that high-intensity resistance training stimulates greater improvements in some measures of strength and hypertrophy in resistance-trained men during a short-term training period.",
"title": ""
},
{
"docid": "neg:1840353_9",
"text": "This paper overviews the International Standards Organization Linguistic Annotation Framework (ISO LAF) developed in ISO TC37 SC4. We describe the XML serialization of ISO LAF, the Graph Annotation Format (GrAF) and discuss the rationale behind the various decisions that were made in determining the standard. We describe the structure of the GrAF headers in detail and provide multiple examples of GrAF representation for text and multi-media. Finally, we discuss the next steps for standardization of interchange formats for linguistic annotations.",
"title": ""
},
{
"docid": "neg:1840353_10",
"text": "Round robin arbiter (RRA) is a critical block in nowadays designs. It is widely found in System-on-chips and Network-on-chips. The need of an efficient RRA has increased extensively as it is a limiting performance block. In this paper, we deliver a comparative review between different RRA architectures found in literature. We also propose a novel efficient RRA architecture. The FPGA implementation results of the previous RRA architectures and our proposed one are given, that show the improvements of the proposed RRA.",
"title": ""
},
{
"docid": "neg:1840353_11",
"text": "Ali, M. A. 2014. Understanding Cancer Mutations by Genome Editing. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Medicine 1054. 37 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-554-9106-2. Mutational analyses of cancer genomes have identified novel candidate cancer genes with hitherto unknown function in cancer. To enable phenotyping of mutations in such genes, we have developed a scalable technology for gene knock-in and knock-out in human somatic cells based on recombination-mediated construct generation and a computational tool to design gene targeting constructs. Using this technology, we have generated somatic cell knock-outs of the putative cancer genes ZBED6 and DIP2C in human colorectal cancer cells. In ZBED6 cells complete loss of functional ZBED6 was validated and loss of ZBED6 induced the expression of IGF2. Whole transcriptome and ChIP-seq analyses revealed relative enrichment of ZBED6 binding sites at upregulated genes as compared to downregulated genes. The functional annotation of differentially expressed genes revealed enrichment of genes related to cell cycle and cell proliferation and the transcriptional modulator ZBED6 affected the cell growth and cell cycle of human colorectal cancer cells. In DIP2Ccells, transcriptome sequencing revealed 780 differentially expressed genes as compared to their parental cells including the tumour suppressor gene CDKN2A. The DIP2C regulated genes belonged to several cancer related processes such as angiogenesis, cell structure and motility. The DIP2Ccells were enlarged and grew slower than their parental cells. To be able to directly compare the phenotypes of mutant KRAS and BRAF in colorectal cancers, we have introduced a KRAS allele in RKO BRAF cells. The expression of the mutant KRAS allele was confirmed and anchorage independent growth was restored in KRAS cells. The differentially expressed genes both in BRAF and KRAS mutant cells included ERBB, TGFB and histone modification pathways. Together, the isogenic model systems presented here can provide insights to known and novel cancer pathways and can be used for drug discovery.",
"title": ""
},
{
"docid": "neg:1840353_12",
"text": "Over the years, several security measures have been employed to combat the menace of insecurity of lives and property. This is done by preventing unauthorized entrance into buildings through entrance doors using conventional and electronic locks, discrete access code, and biometric methods such as the finger prints, thumb prints, the iris and facial recognition. In this paper, a prototyped door security system is designed to allow a privileged user to access a secure keyless door where valid smart card authentication guarantees an entry. The model consists of hardware module and software which provides a functionality to allow the door to be controlled through the authentication of smart card by the microcontroller unit. (",
"title": ""
},
{
"docid": "neg:1840353_13",
"text": "This study is conducted with the collaboration of the Malaysian Atomic Energy Licensing Board (AELB) in order to establish dose reference level (DRL) for computed tomography (CT) examinations in Malaysia. 426 examinations for standard adult patients and 26 examinations for paediatric patients comprising different types of CT examinations were collected from 33 out of 109 (30.3%) hospitals that have CT scanner in Malaysia. Measurements of Computed Tomography Dose Index in air (CTDIair) were done at every CT scanner in the hospitals that were involved in this study to investigate the scanner-specific values comparable to the data published by the ImPACT. Effective doses for all CT examinations were calculated using ImPACT Dosimetry Calculator for both ImPACT CTDIair and measured CTDIair values as a comparison. This study found that 4% to 22% of deviations between both values and the deviations represent the dose influence factors contributed by the CT machines. Every protocol used at certain CT examinations were analysed and it was found that tube potential (kVp) was not the main contribution for effective doses deviation. Other scanning parameters such as tube current – time product (mAs), scan length and nonstandardisation in some of the procedures were significant contributors to the effective dose deviations in most of the CT examinations. Effective doses calculated using ImPACT CTDIair were used to compare with other studies to provide an overview of CT practice in Malaysia. Effective doses for examinations of routine head, routine chest and pelvis are within the same range with studies conducted for the European guidelines, the UK and Taiwan. For the routine abdomen examination, the effective dose is still within the range compared to the studies for European guidelines and Taiwan, but 55.1% higher than the value from the study conducted in the UK. Lastly, this study also provided the third quartile values of effective doses for every CT",
"title": ""
},
{
"docid": "neg:1840353_14",
"text": "Endophytes are fungi which infect plants without causing symptoms. Fungi belonging to this group are ubiquitous, and plant species not associated to fungal endophytes are not known. In addition, there is a large biological diversity among endophytes, and it is not rare for some plant species to be hosts of more than one hundred different endophytic species. Different mechanisms of transmission, as well as symbiotic lifestyles occur among endophytic species. Latent pathogens seem to represent a relatively small proportion of endophytic assemblages, also composed by latent saprophytes and mutualistic species. Some endophytes are generalists, being able to infect a wide range of hosts, while others are specialists, limited to one or a few hosts. Endophytes are gaining attention as a subject for research and applications in Plant Pathology. This is because in some cases plants associated to endophytes have shown increased resistance to plant pathogens, particularly fungi and nematodes. Several possible mechanisms by which endophytes may interact with pathogens are discussed in this review. Additional key words: biocontrol, biodiversity, symbiosis.",
"title": ""
},
{
"docid": "neg:1840353_15",
"text": "Recent advances in consumer-grade depth sensors have enable the collection of massive real-world 3D objects. Together with the rise of deep learning, it brings great potential for large-scale 3D object retrieval. In this challenge, we aim to study and evaluate the performance of 3D object retrieval algorithms with RGB-D data. To support the study, we expanded the previous ObjectNN dataset [HTT∗17] to include RGB-D objects from both SceneNN [HPN∗16] and ScanNet [DCS∗17], with the CAD models from ShapeNetSem [CFG∗15]. Evaluation results show that while the RGB-D to CAD retrieval problem is indeed challenging due to incomplete RGB-D reconstructions, it can be addressed to a certain extent using deep learning techniques trained on multi-view 2D images or 3D point clouds. The best method in this track has a 82% retrieval accuracy.",
"title": ""
},
{
"docid": "neg:1840353_16",
"text": "Metagenomic approaches are now commonly used in microbial ecology to study microbial communities in more detail, including many strains that cannot be cultivated in the laboratory. Bioinformatic analyses make it possible to mine huge metagenomic datasets and discover general patterns that govern microbial ecosystems. However, the findings of typical metagenomic and bioinformatic analyses still do not completely describe the ecology and evolution of microbes in their environments. Most analyses still depend on straightforward sequence similarity searches against reference databases. We herein review the current state of metagenomics and bioinformatics in microbial ecology and discuss future directions for the field. New techniques will allow us to go beyond routine analyses and broaden our knowledge of microbial ecosystems. We need to enrich reference databases, promote platforms that enable meta- or comprehensive analyses of diverse metagenomic datasets, devise methods that utilize long-read sequence information, and develop more powerful bioinformatic methods to analyze data from diverse perspectives.",
"title": ""
},
{
"docid": "neg:1840353_17",
"text": "We introduce here a new dual-layer multibeam antenna with a folded Rotman lens used as a compact beam forming network in SIW technology. The objective is to reduce the overall size of the antenna system by folding the Rotman lens on two layers along the array port contour and using a transition based on an exotic reflector and several coupling vias holes. To validate the proposed concepts, an antenna system has been designed at 24.15 GHz. The radiating structure is a SIW slotted waveguide array made of fifteen resonant waveguides. The simulated results show very good scanning performances over ±47°. It is also demonstrated that the proposed transition can lead to a size reduction of about 50% for the lens, and more than 33% for the overall size of the antenna.",
"title": ""
},
{
"docid": "neg:1840353_18",
"text": "The present paper continues our investigations in the field of Supercapacitors or Electrochemical Double Layer Capacitors, briefly named EDLCs. The series connection of EDLCs is usual in order to obtain higher voltage levels. The inherent uneven state of charge (SOC) and manufacturing dispersions determine during charging at constant current that one of the capacitors reaches first the rated voltage levels and could, by further charging, be damaged. The balancing circuit with resistors and transistors used to bypass the charging current can be improved using the proposed circuit. We present here a complex variant, based on integrated circuit acting similar to a microcontroller. The circuit is adapted from the circuits investigated in the last 7–8 years for the batteries, especially for Lithium-ion type. The test board built around the circuit is performant, energy efficient and can be further improved to ensure the balancing control for larger capacitances.",
"title": ""
},
{
"docid": "neg:1840353_19",
"text": "The sparsity of images in a fixed analytic transform domain or dictionary such as DCT or Wavelets has been exploited in many applications in image processing including image compression. Recently, synthesis sparsifying dictionaries that are directly adapted to the data have become popular in image processing. However, the idea of learning sparsifying transforms has received only little attention. We propose a novel problem formulation for learning doubly sparse transforms for signals or image patches. These transforms are a product of a fixed, fast analytic transform such as the DCT, and an adaptive matrix constrained to be sparse. Such transforms can be learnt, stored, and implemented efficiently. We show the superior promise of our approach as compared to analytical sparsifying transforms such as DCT for image representation.",
"title": ""
}
] |
1840354 | A Brief Review of Network Embedding | [
{
"docid": "pos:1840354_0",
"text": "Node embedding techniques have gained prominence since they produce continuous and low-dimensional features, which are effective for various tasks. Most existing approaches learn node embeddings by exploring the structure of networks and are mainly focused on static non-attributed graphs. However, many real-world applications, such as stock markets and public review websites, involve bipartite graphs with dynamic and attributed edges, called attributed interaction graphs. Different from conventional graph data, attributed interaction graphs involve two kinds of entities (e.g. investors/stocks and users/businesses) and edges of temporal interactions with attributes (e.g. transactions and reviews). In this paper, we study the problem of node embedding in attributed interaction graphs. Learning embeddings in interaction graphs is highly challenging due to the dynamics and heterogeneous attributes of edges. Different from conventional static graphs, in attributed interaction graphs, each edge can have totally different meanings when the interaction is at different times or associated with different attributes. We propose a deep node embedding method called IGE (Interaction Graph Embedding). IGE is composed of three neural networks: an encoding network is proposed to transform attributes into a fixed-length vector to deal with the heterogeneity of attributes; then encoded attribute vectors interact with nodes multiplicatively in two coupled prediction networks that investigate the temporal dependency by treating incident edges of a node as the analogy of a sentence in word embedding methods. The encoding network can be specifically designed for different datasets as long as it is differentiable, in which case it can be trained together with prediction networks by back-propagation. We evaluate our proposed method and various comparing methods on four real-world datasets. The experimental results prove the effectiveness of the learned embeddings by IGE on both node clustering and classification tasks.",
"title": ""
},
{
"docid": "pos:1840354_1",
"text": "Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks» Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.",
"title": ""
}
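The passage above states that DeepWalk-style embeddings admit a closed-form matrix factorization (NetMF). As an illustration under stated assumptions — the small-window form M = vol(G)/(bT) * sum_{r=1..T} (D^-1 A)^r D^-1 with an element-wise truncated logarithm, which is one common reading of that result rather than a verbatim reproduction of the paper's algorithm — a dense NumPy sketch could look like this; the window size T, negative-sample count b and embedding dimension are illustrative choices.

```python
# Hedged sketch of a NetMF-style embedding: build the closed-form matrix, take an
# element-wise truncated logarithm, and factorise it with an SVD. Dense linear algebra
# only; the paper's approximation algorithm for large graphs is not reproduced here.
import numpy as np

def netmf_embedding(A, T=3, b=1, dim=16):
    vol = A.sum()                       # volume of the graph (sum of degrees)
    d = A.sum(axis=1)
    D_inv = np.diag(1.0 / d)
    P = D_inv @ A                       # random-walk transition matrix
    S = np.zeros_like(A, dtype=float)
    P_r = np.eye(A.shape[0])
    for _ in range(T):                  # sum of the first T transition powers
        P_r = P_r @ P
        S += P_r
    M = (vol / (b * T)) * S @ D_inv
    logM = np.log(np.maximum(M, 1.0))   # element-wise truncated logarithm
    U, sigma, _ = np.linalg.svd(logM)
    return U[:, :dim] * np.sqrt(sigma[:dim])

# Toy usage on a 4-node cycle graph.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
print(netmf_embedding(A, dim=2).shape)  # (4, 2)
```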
] | [
{
"docid": "neg:1840354_0",
"text": "We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point.",
"title": ""
},
{
"docid": "neg:1840354_1",
"text": "Whilst studies on emotion recognition show that genderdependent analysis can improve emotion classification performance, the potential differences in the manifestation of depression between male and female speech have yet to be fully explored. This paper presents a qualitative analysis of phonetically aligned acoustic features to highlight differences in the manifestation of depression. Gender-dependent analysis with phonetically aligned gender-dependent features are used for speech-based depression recognition. The presented experimental study reveals gender differences in the effect of depression on vowel-level features. Considering the experimental study, we also show that a small set of knowledge-driven gender-dependent vowel-level features can outperform state-of-the-art turn-level acoustic features when performing a binary depressed speech recognition task. A combination of these preselected gender-dependent vowel-level features with turn-level standardised openSMILE features results in additional improvement for depression recognition.",
"title": ""
},
{
"docid": "neg:1840354_2",
"text": "Find loads of the research methods in the social sciences book catalogues in this site as the choice of you visiting this page. You can also join to the website book library that will show you numerous books from any types. Literature, science, politics, and many more catalogues are presented to offer you the best book to find. The book that really makes you feels satisfied. Or that's the book that will save you from your job deadline.",
"title": ""
},
{
"docid": "neg:1840354_3",
"text": "We present a method to produce free, enormous corpora to train taggers for Named Entity Recognition (NER), the task of identifying and classifying names in text, often solved by statistical learning systems. Our approach utilises the text of Wikipedia, a free online encyclopedia, transforming links between Wikipedia articles into entity annotations. Having derived a baseline corpus, we found that altering Wikipedia’s links and identifying classes of capitalised non-entity terms would enable the corpus to conform more closely to gold-standard annotations, increasing performance by up to 32% F score. The evaluation of our method is novel since the training corpus is not usually a variable in NER experimentation. We therefore develop a number of methods for analysing and comparing training corpora. Gold-standard training corpora for NER perform poorly (F score up to 32% lower) when evaluated on test data from a different gold-standard corpus. Our Wikipedia-derived data can outperform manually-annotated corpora on this cross-corpus evaluation task by up to 7% on held-out test data. These experimental results show that Wikipedia is viable as a source of automatically-annotated training corpora, which have wide domain coverage applicable to a broad range of NLP applications.",
"title": ""
},
{
"docid": "neg:1840354_4",
"text": "As humans we are a highly social species: in order to coordinate our joint actions and assure successful communication, we use language skills to explicitly convey information to each other, and social abilities such as empathy or perspective taking to infer another person's emotions and mental state. The human cognitive capacity to draw inferences about other peoples' beliefs, intentions and thoughts has been termed mentalizing, theory of mind or cognitive perspective taking. This capacity makes it possible, for instance, to understand that people may have views that differ from our own. Conversely, the capacity to share the feelings of others is called empathy. Empathy makes it possible to resonate with others' positive and negative feelings alike--we can thus feel happy when we vicariously share the joy of others and we can share the experience of suffering when we empathize with someone in pain. Importantly, in empathy one feels with someone, but one does not confuse oneself with the other; that is, one still knows that the emotion one resonates with is the emotion of another. If this self-other distinction is not present, we speak of emotion contagion, a precursor of empathy that is already present in babies.",
"title": ""
},
{
"docid": "neg:1840354_5",
"text": "An overview of the basics of metaphorical thought and language from the perspective of Neurocognition, the integrated interdisciplinary study of how conceptual thought and language work in the brain. The paper outlines a theory of metaphor circuitry and discusses how everyday reason makes use of embodied metaphor circuitry.",
"title": ""
},
{
"docid": "neg:1840354_6",
"text": "Vegetable quality is frequently referred to size, shape, mass, firmness, color and bruises from which fruits can be classified and sorted. However, technological by small and middle producers implementation to assess this quality is unfeasible, due to high costs of software, equipment as well as operational costs. Based on these considerations, the proposal of this research is to evaluate a new open software that enables the classification system by recognizing fruit shape, volume, color and possibly bruises at a unique glance. The software named ImageJ, compatible with Windows, Linux and MAC/OS, is quite popular in medical research and practices, and offers algorithms to obtain the above mentioned parameters. The software allows calculation of volume, area, averages, border detection, image improvement and morphological operations in a variety of image archive formats as well as extensions by means of “plugins” written in Java.",
"title": ""
},
{
"docid": "neg:1840354_7",
"text": "Practitioners in Europe and the U.S. recently have proposed two distinct approaches to address what they believe are shortcomings of traditional budgeting practices. One approach advocates improving the budgeting process and primarily focuses on the planning problems with budgeting. The other advocates abandoning the budget and primarily focuses on the performance evaluation problems with budgeting. This paper provides an overview and research perspective on these two recent developments. We discuss why practitioners have become dissatisfied with budgets, describe the two distinct approaches, place them in a research context, suggest insights that may aid the practitioners, and use the practitioner perspectives to identify fruitful areas for research. INTRODUCTION Budgeting is the cornerstone of the management control process in nearly all organizations, but despite its widespread use, it is far from perfect. Practitioners express concerns about using budgets for planning and performance evaluation. The practitioners argue that budgets impede the allocation of organizational resources to their best uses and encourage myopic decision making and other dysfunctional budget games. They attribute these problems, in part, to traditional budgeting’s financial, top-down, commandand-control orientation as embedded in annual budget planning and performance evaluation processes (e.g., Schmidt 1992; Bunce et al. 1995; Hope and Fraser 1997, 2000, 2003; Wallander 1999; Ekholm and Wallin 2000; Marcino 2000; Jensen 2001). We demonstrate practitioners’ concerns with budgets by describing two practice-led developments: one advocating improving the budgeting process, the other abandoning it. These developments illustrate two points. First, they show practitioners’ concerns with budgeting problems that the scholarly literature has largely ignored while focusing instead 1 For example, Comshare (2000) surveyed financial executives about their current experience with their organizations’ budgeting processes. One hundred thirty of the 154 participants (84 percent) identified 332 frustrations with their organizations’ budgeting processes, an average of 2.6 frustrations per person. We acknowledge the many helpful suggestions by the reviewers, Bjorn Jorgensen, Murray Lindsay, Ken Merchant, and Mark Young. 96 Hansen, Otley, and Van der Stede Journal of Management Accounting Research, 2003 on more traditional issues like participative budgeting. Second, the two conflicting developments illustrate that firms face a critical decision regarding budgeting: maintain it, improve it, or abandon it? Our discussion has two objectives. First, we demonstrate the level of concern with budgeting in practice, suggesting its potential for continued scholarly research. Second, we wish to raise academics’ awareness of apparent disconnects between budgeting practice and research. We identify areas where prior research may aid the practitioners and, conversely, use the practitioners’ insights to suggest areas for research. In the second section, we review some of the most common criticisms of budgets in practice. The third section describes and analyzes the main thrust of two recent practiceled developments in budgeting. In the fourth section, we place these two practice developments in a research context and suggest research that may be relevant to the practitioners. The fifth section turns the tables by using the practitioner insights to offer new perspectives for research. In the sixth section, we conclude. 
PROBLEMS WITH BUDGETING IN PRACTICE The ubiquitous use of budgetary control is largely due to its ability to weave together all the disparate threads of an organization into a comprehensive plan that serves many different purposes, particularly performance planning and ex post evaluation of actual performance vis-à-vis the plan. Despite performing this integrative function and laying the basis for performance evaluation, budgetary control has many limitations, such as its longestablished and oft-researched susceptibility to induce budget games or dysfunctional behaviors (Hofstede 1967; Onsi 1973; Merchant 1985b; Lukka 1988). A recent report by Neely et al. (2001), drawn primarily from the practitioner literature, lists the 12 most cited weaknesses of budgetary control as: 1. Budgets are time-consuming to put together; 2. Budgets constrain responsiveness and are often a barrier to change; 3. Budgets are rarely strategically focused and often contradictory; 4. Budgets add little value, especially given the time required to prepare them; 5. Budgets concentrate on cost reduction and not value creation; 6. Budgets strengthen vertical command-and-control; 7. Budgets do not reflect the emerging network structures that organizations are adopting; 8. Budgets encourage gaming and perverse behaviors; 9. Budgets are developed and updated too infrequently, usually annually; 10. Budgets are based on unsupported assumptions and guesswork; 11. Budgets reinforce departmental barriers rather than encourage knowledge sharing; and 12. Budgets make people feel undervalued. 2 For example, in their review of nearly 2,000 research and professional articles in management accounting in the 1996–2000 period, Selto and Widener (2001) document several areas of ‘‘fit’’ and ‘‘misfit’’ between practice and research. They document that more research than practice exists in the area of participative budgeting and state that ‘‘[this] topic appears to be of little current, practical interest, but continues to attract research efforts, perhaps because of the interesting theoretical issues it presents.’’ Selto and Widener (2001) also document virtually no research on activity-based budgeting (one of the practice-led developments we discuss in this paper) and planning and forecasting, although these areas have grown in practice coverage each year during the 1996– 2000 period. Practice Developments in Budgeting 97 Journal of Management Accounting Research, 2003 While not all would agree with these criticisms, other recent critiques (e.g., Schmidt 1992; Hope and Fraser 1997, 2000, 2003; Ekholm and Wallin 2000; Marcino 2000; Jensen 2001) also support the perception of widespread dissatisfaction with budgeting in practice. We synthesize the sources of dissatisfaction as follows. Claims 1, 4, 9, and 10 relate to the recurring criticism that by the time budgets are used, their assumptions are typically outdated, reducing the value of the budgeting process. A more radical version of this criticism is that conventional budgets can never be valid because they cannot capture the uncertainty involved in rapidly changing environments (Wallender 1999). In more conceptual terms, the operation of a useful budgetary control system requires two related elements. First, there must be a high degree of operational stability so that the budget provides a valid plan for a reasonable period of time (typically the next year). 
Second, managers must have good predictive models so that the budget provides a reasonable performance standard against which to hold managers accountable (Berry and Otley 1980). Where these criteria hold, budgetary control is a useful control mechanism, but for organizations that operate in more turbulent environments, it becomes less useful (Samuelson 2000). Claims 2, 3, 5, 6, and 8 relate to another common criticism that budgetary controls impose a vertical command-and-control structure, centralize decision making, stifle initiative, and focus on cost reductions rather than value creation. As such, budgetary controls often impede the pursuit of strategic goals by supporting such mechanical practices as lastyear-plus budget setting and across-the-board cuts. Moreover, the budget’s exclusive focus on annual financial performance causes a mismatch with operational and strategic decisions that emphasize nonfinancial goals and cut across the annual planning cycle, leading to budget games involving skillful timing of revenues, expenditures, and investments (Merchant 1985a). Finally, claims 7, 11, and 12 reflect organizational and people-related budgeting issues. The critics argue that vertical, command-and-control, responsibility center-focused budgetary controls are incompatible with flat, network, or value chain-based organizational designs and impede empowered employees from making the best decisions (Hope and Fraser 2003). Given such a long list of problems and many calls for improvement, it seems odd that the vast majority of U.S. firms retain a formal budgeting process (97 percent of the respondents in Umapathy [1987]). One reason that budgets may be retained in most firms is because they are so deeply ingrained in an organization’s fabric (Scapens and Roberts 1993). ‘‘They remain a centrally coordinated activity (often the only one) within the business’’ (Neely et al. 2001, 9) and constitute ‘‘the only process that covers all areas of organizational activity’’ (Otley 1999). However, a more recent survey of Finnish firms found that although 25 percent are retaining their traditional budgeting system, 61 percent are actively upgrading their system, and 14 percent are either abandoning budgets or at least considering it (Ekholm and Wallin 2000). We discuss two practice-led developments that illustrate proposals to improve budgeting or to abandon it. Although the two developments reach different conclusions, both originated in the same organization, the Consortium for Advanced Manufacturing-International (CAM-I); one in 3 We note that there are several factors that inevitably contribute to the seemingly negative evaluation of budgetary controls. First, given information asymmetries, budgets operate under second-best conditions in most organizations. Second, information is costly. Finally, unlike the costs, the benefits of budgeting are indirect, and thus, less salient. 98 Hansen, Otley, and Van der Stede Journal of Management Accounting Research, 2003 the U.S. and the other in Europe. The U",
"title": ""
},
{
"docid": "neg:1840354_8",
"text": "We propose a novel and robust hashing paradigm that uses iterative geometric techniques and relies on observations that main geometric features within an image would approximately stay invariant under small perturbations. A key goal of this algorithm is to produce sufficiently randomized outputs which are unpredictable, thereby yielding properties akin to cryptographic MACs. This is a key component for robust multimedia identification and watermarking (for synchronization as well as content dependent key generation). Our algorithm withstands standard benchmark (e.g Stirmark) attacks provided they do not cause severe perceptually significant distortions. As verified by our detailed experiments, the approach is relatively media independent and works for",
"title": ""
},
{
"docid": "neg:1840354_9",
"text": "Content popularity prediction finds application in many areas, including media advertising, content caching, movie revenue estimation, traffic management and macro-economic trends forecasting, to name a few. However, predicting this popularity is difficult due to, among others, the effects of external phenomena, the influence of context such as locality and relevance to users,and the difficulty of forecasting information cascades.\n In this paper we identify patterns of temporal evolution that are generalisable to distinct types of data, and show that we can (1) accurately classify content based on the evolution of its popularity over time and (2) predict the value of the content's future popularity. We verify the generality of our method by testing it on YouTube, Digg and Vimeo data sets and find our results to outperform the K-Means baseline when classifying the behaviour of content and the linear regression baseline when predicting its popularity.",
"title": ""
},
{
"docid": "neg:1840354_10",
"text": "During natural disasters or crises, users on social media tend to easily believe contents of postings related to the events, and retweet the postings with hoping them to be reached to many other users. Unfortunately, there are malicious users who understand the tendency and post misinformation such as spam and fake messages with expecting wider propagation. To resolve the problem, in this paper we conduct a case study of 2013 Moore Tornado and Hurricane Sandy. Concretely, we (i) understand behaviors of these malicious users, (ii) analyze properties of spam, fake and legitimate messages, (iii) propose flat and hierarchical classification approaches, and (iv) detect both fake and spam messages with even distinguishing between them. Our experimental results show that our proposed approaches identify spam and fake messages with 96.43% accuracy and 0.961 F-measure.",
"title": ""
},
{
"docid": "neg:1840354_11",
"text": "Cloud balancing provides an organization with the ability to distribute application requests across any number of application deployments located in different data centers and through Cloud-computing providers. In this paper, we propose a load balancing methodMinsd (Minimize standard deviation of Cloud load method) and apply it on three levels control: PEs (Processing Elements), Hosts and Data Centers. Simulations on CloudSim are used to check its performance and its influence on makespan, communication overhead and throughput. A true log of a cluster also is used to test our method. Results indicate that our method not only gives good Cloud balancing but also ensures reducing makespan and communication overhead and enhancing throughput of the whole the system.",
"title": ""
},
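The Minsd passage above only names the objective (minimizing the standard deviation of Cloud load); the sketch below is a generic greedy reading of that objective, not the authors' CloudSim implementation: each incoming task is assigned to the node whose updated load leaves the smallest spread of loads.

```python
# Illustrative greedy placement under a "minimise the standard deviation of load" rule.
# Task lengths and node counts are toy values; real schedulers would also account for
# node capacity, network cost and task arrival order.
import statistics

def assign_tasks(task_lengths, num_nodes):
    loads = [0.0] * num_nodes
    placement = []
    for length in task_lengths:
        # Pick the node whose tentative assignment yields the lowest load spread.
        best_node = min(
            range(num_nodes),
            key=lambda n: statistics.pstdev(
                loads[:n] + [loads[n] + length] + loads[n + 1:]
            ),
        )
        loads[best_node] += length
        placement.append(best_node)
    return placement, loads

# Toy usage: ten tasks of varying length spread over three hosts.
tasks = [5, 3, 8, 2, 7, 4, 6, 1, 9, 2]
placement, loads = assign_tasks(tasks, num_nodes=3)
print(placement, loads)
```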
{
"docid": "neg:1840354_12",
"text": "This study examines the role of online daters’ physical attractiveness in their profile selfpresentation and, in particular, their use of deception. Sixty-nine online daters identified the deceptions in their online dating profiles and had their photograph taken in the lab. Independent judges rated the online daters’ physical attractiveness. Results show that the lower online daters’ attractiveness, the more likely they were to enhance their profile photographs and lie about their physical descriptors (height, weight, age). The association between attractiveness and deception did not extend to profile elements unrelated to their physical appearance (e.g., income, occupation), suggesting that their deceptions were limited and strategic. Results are discussed in terms of (a) evolutionary theories about the importance of physical attractiveness in the dating realm and (b) the technological affordances that allow online daters to engage in selective self-presentation.",
"title": ""
},
{
"docid": "neg:1840354_13",
"text": "Agricultural monitoring, especially in developing countries, can help prevent famine and support humanitarian efforts. A central challenge is yield estimation, i.e., predicting crop yields before harvest. We introduce a scalable, accurate, and inexpensive method to predict crop yields using publicly available remote sensing data. Our approach improves existing techniques in three ways. First, we forego hand-crafted features traditionally used in the remote sensing community and propose an approach based on modern representation learning ideas. We also introduce a novel dimensionality reduction technique that allows us to train a Convolutional Neural Network or Long-short Term Memory network and automatically learn useful features even when labeled training data are scarce. Finally, we incorporate a Gaussian Process component to explicitly model the spatio-temporal structure of the data and further improve accuracy. We evaluate our approach on county-level soybean yield prediction in the U.S. and show that it outperforms competing techniques.",
"title": ""
},
{
"docid": "neg:1840354_14",
"text": "BACKGROUND\nPatients on surveillance for clinical stage I (CSI) testicular cancer are counseled regarding their baseline risk of relapse. The conditional risk of relapse (cRR), which provides prognostic information on patients who have survived for a period of time without relapse, have not been determined for CSI testicular cancer.\n\n\nOBJECTIVE\nTo determine cRR in CSI testicular cancer.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nWe reviewed 1239 patients with CSI testicular cancer managed with surveillance at a tertiary academic centre between 1980 and 2014. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: cRR estimates were calculated using the Kaplan-Meier method. We stratified patients according to validated risk factors for relapse. We used linear regression to determine cRR trends over time.\n\n\nRESULTS AND LIMITATIONS\nAt orchiectomy, the risk of relapse within 5 yr was 42.4%, 17.3%, 20.3%, and 12.2% among patients with high-risk nonseminomatous germ cell tumor (NSGCT), low-risk NSGCT, seminoma with tumor size ≥3cm, and seminoma with tumor size <3cm, respectively. However, for patients without relapse within the first 2 yr of follow-up, the corresponding risk of relapse within the next 5 yr in the groups was 0.0%, 1.0% (95% confidence interval [CI] 0.3-1.7%), 5.6% (95% CI 3.1-8.2%), and 3.9% (95% CI 1.4-6.4%). Over time, cRR decreased (p≤0.021) in all models. Limitations include changes to surveillance protocols over time and few late relapses.\n\n\nCONCLUSIONS\nAfter 2 yr, the risk of relapse on surveillance for CSI testicular cancer is very low. Consideration should be given to adapting surveillance protocols to individualized risk of relapse based on cRR as opposed to static protocols based on baseline factors. This strategy could reduce the intensity of follow-up for the majority of patients.\n\n\nPATIENT SUMMARY\nOur study is the first to provide data on the future risk of relapse during surveillance for clinical stage I testicular cancer, given a patient has been without relapse for a specified period of time.",
"title": ""
},
{
"docid": "neg:1840354_15",
"text": "Slowly but surely, Alzheimer's disease (AD) patients lose their memory and their cognitive abilities, and even their personalities may change dramatically. These changes are due to the progressive dysfunction and death of nerve cells that are responsible for the storage and processing of information. Although drugs can temporarily improve memory, at present there are no treatments that can stop or reverse the inexorable neurodegenerative process. But rapid progress towards understanding the cellular and molecular alterations that are responsible for the neuron's demise may soon help in developing effective preventative and therapeutic strategies.",
"title": ""
},
{
"docid": "neg:1840354_16",
"text": "To present a summary of current scientific evidence about the cannabinoid, cannabidiol (CBD) with regard to its relevance to epilepsy and other selected neuropsychiatric disorders. We summarize the presentations from a conference in which invited participants reviewed relevant aspects of the physiology, mechanisms of action, pharmacology, and data from studies with animal models and human subjects. Cannabis has been used to treat disease since ancient times. Δ(9) -Tetrahydrocannabinol (Δ(9) -THC) is the major psychoactive ingredient and CBD is the major nonpsychoactive ingredient in cannabis. Cannabis and Δ(9) -THC are anticonvulsant in most animal models but can be proconvulsant in some healthy animals. The psychotropic effects of Δ(9) -THC limit tolerability. CBD is anticonvulsant in many acute animal models, but there are limited data in chronic models. The antiepileptic mechanisms of CBD are not known, but may include effects on the equilibrative nucleoside transporter; the orphan G-protein-coupled receptor GPR55; the transient receptor potential of vanilloid type-1 channel; the 5-HT1a receptor; and the α3 and α1 glycine receptors. CBD has neuroprotective and antiinflammatory effects, and it appears to be well tolerated in humans, but small and methodologically limited studies of CBD in human epilepsy have been inconclusive. More recent anecdotal reports of high-ratio CBD:Δ(9) -THC medical marijuana have claimed efficacy, but studies were not controlled. CBD bears investigation in epilepsy and other neuropsychiatric disorders, including anxiety, schizophrenia, addiction, and neonatal hypoxic-ischemic encephalopathy. However, we lack data from well-powered double-blind randomized, controlled studies on the efficacy of pure CBD for any disorder. Initial dose-tolerability and double-blind randomized, controlled studies focusing on target intractable epilepsy populations such as patients with Dravet and Lennox-Gastaut syndromes are being planned. Trials in other treatment-resistant epilepsies may also be warranted. A PowerPoint slide summarizing this article is available for download in the Supporting Information section here.",
"title": ""
},
{
"docid": "neg:1840354_17",
"text": "As video games become increasingly popular pastimes, it becomes more important to understand how different individuals behave when they play these games. Previous research has focused mainly on behavior in massively multiplayer online role-playing games; therefore, in the current study we sought to extend on this research by examining the connections between personality traits and behaviors in video games more generally. Two hundred and nineteen university students completed measures of personality traits, psychopathic traits, and a questionnaire regarding frequency of different behaviors during video game play. A principal components analysis of the video game behavior questionnaire revealed four factors: Aggressing, Winning, Creating, and Helping. Each behavior subscale was significantly correlated with at least one personality trait. Men reported significantly more Aggressing, Winning, and Helping behavior than women. Controlling for participant sex, Aggressing was negatively correlated with Honesty–Humility, Helping was positively correlated with Agreeableness, and Creating was negatively correlated with Conscientiousness. Aggressing was also positively correlated with all psychopathic traits, while Winning and Creating were correlated with one psychopathic trait each. Frequency of playing video games online was positively correlated with the Aggressing, Winning, and Helping scales, but not with the Creating scale. The results of the current study provide support for previous research on personality and behavior in massively multiplayer online role-playing games. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840354_18",
"text": "Motivation\nText mining has become an important tool for biomedical research. The most fundamental text-mining task is the recognition of biomedical named entities (NER), such as genes, chemicals and diseases. Current NER methods rely on pre-defined features which try to capture the specific surface properties of entity types, properties of the typical local context, background knowledge, and linguistic information. State-of-the-art tools are entity-specific, as dictionaries and empirically optimal feature sets differ between entity types, which makes their development costly. Furthermore, features are often optimized for a specific gold standard corpus, which makes extrapolation of quality measures difficult.\n\n\nResults\nWe show that a completely generic method based on deep learning and statistical word embeddings [called long short-term memory network-conditional random field (LSTM-CRF)] outperforms state-of-the-art entity-specific NER tools, and often by a large margin. To this end, we compared the performance of LSTM-CRF on 33 data sets covering five different entity classes with that of best-of-class NER tools and an entity-agnostic CRF implementation. On average, F1-score of LSTM-CRF is 5% above that of the baselines, mostly due to a sharp increase in recall.\n\n\nAvailability and implementation\nThe source code for LSTM-CRF is available at https://github.com/glample/tagger and the links to the corpora are available at https://corposaurus.github.io/corpora/ .\n\n\nContact\nhabibima@informatik.hu-berlin.de.",
"title": ""
},
{
"docid": "neg:1840354_19",
"text": "We present a real-time system that renders antialiased hard shadows using irregular z-buffers (IZBs). For subpixel accuracy, we use 32 samples per pixel at roughly twice the cost of a single sample. Our system remains interactive on a variety of game assets and CAD models while running at 1080p and 2160p and imposes no constraints on light, camera or geometry, allowing fully dynamic scenes without precomputation. Unlike shadow maps we introduce no spatial or temporal aliasing, smoothly animating even subpixel shadows from grass or wires.\n Prior irregular z-buffer work relies heavily on GPU compute. Instead we leverage the graphics pipeline, including hardware conservative raster and early-z culling. We observe a duality between irregular z-buffer performance and shadow map quality; this allows common shadow map algorithms to reduce our cost. Compared to state-of-the-art ray tracers, we spawn similar numbers of triangle intersections per pixel yet completely rebuild our data structure in under 2 ms per frame.",
"title": ""
}
] |
1840355 | A fuzzy spatial coherence-based approach to background/foreground separation for moving object detection | [
{
"docid": "pos:1840355_0",
"text": "Running average method and its modified version are two simple and fast methods for background modeling. In this paper, some weaknesses of running average method and standard background subtraction are mentioned. Then, a fuzzy approach for background modeling and background subtraction is proposed. For fuzzy background modeling, fuzzy running average is suggested. Background modeling and background subtraction algorithms are very commonly used in vehicle detection systems. To demonstrate the advantages of fuzzy running average and fuzzy background subtraction, these methods and their standard versions are compared in vehicle detection application. Experimental results show that fuzzy approach is relatively more accurate than classical approach.",
"title": ""
}
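The positive passage above describes fuzzy running average background modeling. A minimal sketch is given below, assuming a triangular membership over the frame/background difference that scales the per-pixel learning rate (the exact membership function and thresholds used in the paper may differ); pixels that differ strongly from the background, and are therefore likely foreground, update the model slowly.

```python
# Hedged sketch of fuzzy running-average background subtraction on grayscale frames.
import numpy as np

def fuzzy_membership(diff, max_diff=60.0):
    # 1 for identical pixels, falling linearly to 0 at max_diff (triangular shape).
    return np.clip(1.0 - diff / max_diff, 0.0, 1.0)

def update_background(background, frame, alpha=0.05):
    diff = np.abs(frame - background)
    rate = alpha * fuzzy_membership(diff)          # per-pixel fuzzy learning rate
    return (1.0 - rate) * background + rate * frame

def foreground_mask(background, frame, threshold=30.0):
    return np.abs(frame - background) > threshold

# Toy usage with synthetic frames; in practice the frames come from a video stream.
rng = np.random.default_rng(0)
background = rng.uniform(0, 255, (120, 160))
for _ in range(10):
    frame = background + rng.normal(0, 5, background.shape)  # stand-in for a new frame
    mask = foreground_mask(background, frame)
    background = update_background(background, frame)
print(mask.mean())  # fraction of pixels flagged as foreground in the last frame
```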
] | [
{
"docid": "neg:1840355_0",
"text": "This thesis deals with interaction design for a class of upcoming computer technologies for human use characterized by being different from traditional desktop computers in their physical appearance and the contexts in which they are used. These are typically referred to as emerging technologies. Emerging technologies often imply interaction dissimilar from how computers are usually operated. This challenges the scope and applicability of existing knowledge about human-computer interaction design. The thesis focuses on three specific technologies: virtual reality, augmented reality and mobile computer systems. For these technologies, five themes are addressed: current focus of research, concepts, interaction styles, methods and tools. These themes inform three research questions, which guide the conducted research. The thesis consists of five published research papers and a summary. In the summary, current focus of research is addressed from the perspective of research methods and research purpose. Furthermore, the notions of human-computer interaction design and emerging technologies are discussed and two central distinctions are introduced. Firstly, interaction design is divided into two categories with focus on systems and processes respectively. Secondly, the three studied emerging technologies are viewed in relation to immersion into virtual space and mobility in physical space. These distinctions are used to relate the five paper contributions, each addressing one of the three studied technologies with focus on properties of systems or the process of creating them respectively. Three empirical sources contribute to the results. Experiments with interaction design inform the development of concepts and interaction styles suitable for virtual reality, augmented reality and mobile computer systems. Experiments with designing interaction inform understanding of how methods and tools support design processes for these technologies. Finally, a literature survey informs a review of existing research, and identifies current focus, limitations and opportunities for future research. The primary results of the thesis are: 1) Current research within human-computer interaction design for the studied emerging technologies focuses on building systems ad-hoc and evaluating them in artificial settings. This limits the generation of cumulative theoretical knowledge. 2) Interaction design for the emerging technologies studied requires the development of new suitable concepts and interaction styles. Suitable concepts describe unique properties and challenges of a technology. Suitable interaction styles respond to these challenges by exploiting the technology’s unique properties. 3) Designing interaction for the studied emerging technologies involves new use situations, a distance between development and target platforms and complex programming. Elements of methods exist, which are useful for supporting the design of interaction, but they are fragmented and do not support the process as a whole. The studied tools do not support the design process as a whole either but support aspects of interaction design by bridging the gulf between development and target platforms and providing advanced programming environments. Menneske-maskine interaktionsdesign for opkommende teknologier Virtual Reality, Augmented Reality og Mobile Computersystemer",
"title": ""
},
{
"docid": "neg:1840355_1",
"text": "Due to the growing number of vehicles on the roads worldwide, road traffic accidents are currently recognized as a major public safety problem. In this context, connected vehicles are considered as the key enabling technology to improve road safety and to foster the emergence of next generation cooperative intelligent transport systems (ITS). Through the use of wireless communication technologies, the deployment of ITS will enable vehicles to autonomously communicate with other nearby vehicles and roadside infrastructures and will open the door for a wide range of novel road safety and driver assistive applications. However, connecting wireless-enabled vehicles to external entities can make ITS applications vulnerable to various security threats, thus impacting the safety of drivers. This article reviews the current research challenges and opportunities related to the development of secure and safe ITS applications. It first explores the architecture and main characteristics of ITS systems and surveys the key enabling standards and projects. Then, various ITS security threats are analyzed and classified, along with their corresponding cryptographic countermeasures. Finally, a detailed ITS safety application case study is analyzed and evaluated in light of the European ETSI TC ITS standard. An experimental test-bed is presented, and several elliptic curve digital signature algorithms (ECDSA) are benchmarked for signing and verifying ITS safety messages. To conclude, lessons learned, open research challenges and opportunities are discussed. Electronics 2015, 4 381",
"title": ""
},
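The ITS passage above benchmarks ECDSA for signing and verifying safety messages. Purely as an illustration of that primitive — not the paper's test-bed, and with a made-up message payload — signing and verifying with the Python cryptography package (version 3.1 or later, where the backend argument is optional) looks like this.

```python
# Illustrative ECDSA sign/verify of an ITS-style safety message over NIST P-256.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

message = b"CAM: vehicle 42, lat 50.77, lon 6.08, speed 13.9 m/s"  # made-up payload

private_key = ec.generate_private_key(ec.SECP256R1())
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

public_key = private_key.public_key()
try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```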
{
"docid": "neg:1840355_2",
"text": "The development of social networks has led the public in general to find easy accessibility for communication with respect to rapid communication to each other at any time. Such services provide the quick transmission of information which is its positive side but its negative side needs to be kept in mind thereby misinformation can spread. Nowadays, in this era of digitalization, the validation of such information has become a real challenge, due to lack of information authentication method. In this paper, we design a framework for the rumors detection from the Facebook events data, which is based on inquiry comments. The proposed Inquiry Comments Detection Model (ICDM) identifies inquiry comments utilizing a rule-based approach which entails regular expressions to categorize the sentences as an inquiry into those starting with an intransitive verb (like is, am, was, will, would and so on) and also those sentences ending with a question mark. We set the threshold value to compare with the ratio of Inquiry to English comments and identify the rumors. We verified the proposed ICDM on labeled data, collected from snopes.com. Our experiments revealed that the proposed method achieved considerably well in comparison to the existing machine learning techniques. The proposed ICDM approach attained better results of 89% precision, 77% recall, and 82% F-measure. We are of the opinion that our experimental findings of this study will be useful for the worldwide adoption. Keywords—Social networks; rumors; inquiry comments; question identification",
"title": ""
},
{
"docid": "neg:1840355_3",
"text": "This paper presents a Fuzzy Neural Network (FNN) control system for a traveling-wave ultrasonic motor (TWUSM) driven by a dual mode modulation non-resonant driving circuit. First, the motor configuration and the proposed driving circuit of a TWUSM are introduced. To drive a TWUSM effectively, a novel driving circuit, that simultaneously employs both the driving frequency and phase modulation control scheme, is proposed to provide two-phase balance voltage for a TWUSM. Since the dynamic characteristics and motor parameters of the TWUSM are highly nonlinear and time-varying, a FNN control system is therefore investigated to achieve high-precision speed control. The proposed FNN control system incorporates neuro-fuzzy control and the driving frequency and phase modulation to solve the problem of nonlinearities and variations. The proposed control system is digitally implemented by a low-cost digital signal processor based microcontroller, hence reducing the system hardware size and cost. The effectiveness of the proposed driving circuit and control system is verified with hardware experiments under the occurrence of uncertainties. In addition, the advantages of the proposed control scheme are indicated in comparison with a conventional proportional-integral control system.",
"title": ""
},
{
"docid": "neg:1840355_4",
"text": "A statistical analysis of full text downloads of articles in Elsevier's ScienceDirect covering all disciplines reveals large differences in download frequencies, their skewness, and their correlation with Scopus-based citation counts, between disciplines, journals, and document types. Download counts tend to be two orders of magnitude higher and less skewedly distributed than citations. A mathematical model based on the sum of two exponentials does not adequately capture monthly download counts. The degree of correlation at the article level within a journal is similar to that at the journal level in the discipline covered by that journal, suggesting that the differences between journals are to a large extent discipline-specific. Despite the fact that in all study journals download and citation counts per article positively correlate, little overlap may exist between the set of articles appearing in the top of the citation distribution and that with the most frequently downloaded ones. Usage and citation leaks, bulk downloading, differences between reader and author populations in a subject field, the type of document or its content, differences in obsolescence patterns between downloads and citations, different functions of reading and citing in the research process, all provide possible explanations of differences between download and citation distributions.",
"title": ""
},
{
"docid": "neg:1840355_5",
"text": "The security research community has invested significant effort in improving the security of Android applications over the past half decade. This effort has addressed a wide range of problems and resulted in the creation of many tools for application analysis. In this article, we perform the first systematization of Android security research that analyzes applications, characterizing the work published in more than 17 top venues since 2010. We categorize each paper by the types of problems they solve, highlight areas that have received the most attention, and note whether tools were ever publicly released for each effort. Of the released tools, we then evaluate a representative sample to determine how well application developers can apply the results of our community’s efforts to improve their products. We find not only that significant work remains to be done in terms of research coverage but also that the tools suffer from significant issues ranging from lack of maintenance to the inability to produce functional output for applications with known vulnerabilities. We close by offering suggestions on how the community can more successfully move forward.",
"title": ""
},
{
"docid": "neg:1840355_6",
"text": "The state-of-the-art techniques for aspect-level sentiment analysis focus on feature modeling using a variety of deep neural networks (DNN). Unfortunately, their practical performance may fall short of expectations due to semantic complexity of natural languages. Motivated by the observation that linguistic hints (e.g. explicit sentiment words and shift words) can be strong indicators of sentiment, we present a joint framework, SenHint, which integrates the output of deep neural networks and the implication of linguistic hints into a coherent reasoning model based on Markov Logic Network (MLN). In SenHint, linguistic hints are used in two ways: (1) to identify easy instances, whose sentiment can be automatically determined by machine with high accuracy; (2) to capture implicit relations between aspect polarities. We also empirically evaluate the performance of SenHint on both English and Chinese benchmark datasets. Our experimental results show that SenHint can effectively improve accuracy compared with the state-of-the-art alternatives.",
"title": ""
},
{
"docid": "neg:1840355_7",
"text": "The majority of online display ads are served through real-time bidding (RTB) --- each ad display impression is auctioned off in real-time when it is just being generated from a user visit. To place an ad automatically and optimally, it is critical for advertisers to devise a learning algorithm to cleverly bid an ad impression in real-time. Most previous works consider the bid decision as a static optimization problem of either treating the value of each impression independently or setting a bid price to each segment of ad volume. However, the bidding for a given ad campaign would repeatedly happen during its life span before the budget runs out. As such, each bid is strategically correlated by the constrained budget and the overall effectiveness of the campaign (e.g., the rewards from generated clicks), which is only observed after the campaign has completed. Thus, it is of great interest to devise an optimal bidding strategy sequentially so that the campaign budget can be dynamically allocated across all the available impressions on the basis of both the immediate and future rewards. In this paper, we formulate the bid decision process as a reinforcement learning problem, where the state space is represented by the auction information and the campaign's real-time parameters, while an action is the bid price to set. By modeling the state transition via auction competition, we build a Markov Decision Process framework for learning the optimal bidding policy to optimize the advertising performance in the dynamic real-time bidding environment. Furthermore, the scalability problem from the large real-world auction volume and campaign budget is well handled by state value approximation using neural networks. The empirical study on two large-scale real-world datasets and the live A/B testing on a commercial platform have demonstrated the superior performance and high efficiency compared to state-of-the-art methods.",
"title": ""
},
{
"docid": "neg:1840355_8",
"text": "The deep learning revolution brought us an extensive array of neural network architectures that achieve state-of-the-art performance in a wide variety of Computer Vision tasks including among others classification, detection and segmentation. In parallel, we have also been observing an unprecedented demand in computational and memory requirements, rendering the efficient use of neural networks in low-powered devices virtually unattainable. Towards this end, we propose a threestage compression and acceleration pipeline that sparsifies, quantizes and entropy encodes activation maps of Convolutional Neural Networks. Sparsification increases the representational power of activation maps leading to both acceleration of inference and higher model accuracy. Inception-V3 and MobileNet-V1 can be accelerated by as much as 1.6× with an increase in accuracy of 0.38% and 0.54% on the ImageNet and CIFAR-10 datasets respectively. Quantizing and entropy coding the sparser activation maps lead to higher compression over the baseline, reducing the memory cost of the network execution. Inception-V3 and MobileNet-V1 activation maps, quantized to 16 bits, are compressed by as much as 6× with an increase in accuracy of 0.36% and 0.55% respectively.",
"title": ""
},
{
"docid": "neg:1840355_9",
"text": "The establishment of an endosymbiotic relationship typically seems to be driven through complementation of the host's limited metabolic capabilities by the biochemical versatility of the endosymbiont. The most significant examples of endosymbiosis are represented by the endosymbiotic acquisition of plastids and mitochondria, introducing photosynthesis and respiration to eukaryotes. However, there are numerous other endosymbioses that evolved more recently and repeatedly across the tree of life. Recent advances in genome sequencing technology have led to a better understanding of the physiological basis of many endosymbiotic associations. This review focuses on endosymbionts in protists (unicellular eukaryotes). Selected examples illustrate the incorporation of various new biochemical functions, such as photosynthesis, nitrogen fixation and recycling, and methanogenesis, into protist hosts by prokaryotic endosymbionts. Furthermore, photosynthetic eukaryotic endosymbionts display a great diversity of modes of integration into different protist hosts. In conclusion, endosymbiosis seems to represent a general evolutionary strategy of protists to acquire novel biochemical functions and is thus an important source of genetic innovation.",
"title": ""
},
{
"docid": "neg:1840355_10",
"text": "Using a novel replacement gate SOI FinFET device structure, we have fabricated FinFETs with fin width (D<inf>Fin</inf>) of 4nm, fin pitch (FP) of 40nm, and gate length (L<inf>G</inf>) of 20nm. With this structure, we have achieved arrays of thousands of fins for D<inf>Fin</inf> down to 4nm with robust yield and structural integrity. We observe performance degradation, increased variability, and V<inf>T</inf> shift as D<inf>Fin</inf> is reduced. Capacitance measurements agree with quantum confinement behavior which has been predicted to pose a fundamental limit to scaling FinFETs below 10nm L<inf>G</inf>.",
"title": ""
},
{
"docid": "neg:1840355_11",
"text": "Regularization of Deep Neural Networks (DNNs) for the sake of improving their generalization capability is important and challenging. The development in this line benefits theoretical foundation of DNNs and promotes their usability in different areas of artificial intelligence. In this paper, we investigate the role of Rademacher complexity in improving generalization of DNNs and propose a novel regularizer rooted in Local Rademacher Complexity (LRC). While Rademacher complexity is well known as a distribution-free complexity measure of function class that help boost generalization of statistical learning methods, extensive study shows that LRC, its counterpart focusing on a restricted function class, leads to sharper convergence rates and potential better generalization given finite training sample. Our LRC based regularizer is developed by estimating the complexity of the function class centered at the minimizer of the empirical loss of DNNs. Experiments on various types of network architecture demonstrate the effectiveness of LRC regularization in improving generalization. Moreover, our method features the state-of-the-art result on the CIFAR-10 dataset with network architecture found by neural architecture search.",
"title": ""
},
{
"docid": "neg:1840355_12",
"text": "Alzheimer's disease (AD) is a major neurodegenerative disease and is one of the most common cause of dementia in older adults. Among several factors, neuroinflammation is known to play a critical role in the pathogenesis of chronic neurodegenerative diseases. In particular, studies of brains affected by AD show a clear involvement of several inflammatory pathways. Furthermore, depending on the brain regions affected by the disease, the nature and the effect of inflammation can vary. Here, in order to shed more light on distinct and common features of inflammation in different brain regions affected by AD, we employed a computational approach to analyze gene expression data of six site-specific neuronal populations from AD patients. Our network based computational approach is driven by the concept that a sustained inflammatory environment could result in neurotoxicity leading to the disease. Thus, our method aims to infer intracellular signaling pathways/networks that are likely to be constantly activated or inhibited due to persistent inflammatory conditions. The computational analysis identified several inflammatory mediators, such as tumor necrosis factor alpha (TNF-a)-associated pathway, as key upstream receptors/ligands that are likely to transmit sustained inflammatory signals. Further, the analysis revealed that several inflammatory mediators were mainly region specific with few commonalities across different brain regions. Taken together, our results show that our integrative approach aids identification of inflammation-related signaling pathways that could be responsible for the onset or the progression of AD and can be applied to study other neurodegenerative diseases. Furthermore, such computational approaches can enable the translation of clinical omics data toward the development of novel therapeutic strategies for neurodegenerative diseases.",
"title": ""
},
{
"docid": "neg:1840355_13",
"text": "Quorum-sensing bacteria communicate with extracellular signal molecules called autoinducers. This process allows community-wide synchronization of gene expression. A screen for additional components of the Vibrio harveyi and Vibrio cholerae quorum-sensing circuits revealed the protein Hfq. Hfq mediates interactions between small, regulatory RNAs (sRNAs) and specific messenger RNA (mRNA) targets. These interactions typically alter the stability of the target transcripts. We show that Hfq mediates the destabilization of the mRNA encoding the quorum-sensing master regulators LuxR (V. harveyi) and HapR (V. cholerae), implicating an sRNA in the circuit. Using a bioinformatics approach to identify putative sRNAs, we identified four candidate sRNAs in V. cholerae. The simultaneous deletion of all four sRNAs is required to stabilize hapR mRNA. We propose that Hfq, together with these sRNAs, creates an ultrasensitive regulatory switch that controls the critical transition into the high cell density, quorum-sensing mode.",
"title": ""
},
{
"docid": "neg:1840355_14",
"text": "The purpose was to investigate the effect of 25 weeks heavy strength training in young elite cyclists. Nine cyclists performed endurance training and heavy strength training (ES) while seven cyclists performed endurance training only (E). ES, but not E, resulted in increases in isometric half squat performance, lean lower body mass, peak power output during Wingate test, peak aerobic power output (W(max)), power output at 4 mmol L(-1)[la(-)], mean power output during 40-min all-out trial, and earlier occurrence of peak torque during the pedal stroke (P < 0.05). ES achieved superior improvements in W(max) and mean power output during 40-min all-out trial compared with E (P < 0.05). The improvement in 40-min all-out performance was associated with the change toward achieving peak torque earlier in the pedal stroke (r = 0.66, P < 0.01). Neither of the groups displayed alterations in VO2max or cycling economy. In conclusion, heavy strength training leads to improved cycling performance in elite cyclists as evidenced by a superior effect size of ES training vs E training on relative improvements in power output at 4 mmol L(-1)[la(-)], peak power output during 30-s Wingate test, W(max), and mean power output during 40-min all-out trial.",
"title": ""
},
{
"docid": "neg:1840355_15",
"text": "This paper presents the design of an X-band active antenna self-oscillating down-converter mixer in substrate integrated waveguide technology (SIW). Electromagnetic analysis is used to design a SIW cavity backed patch antenna with resonance at 9.9 GHz used as the receiving antenna, and subsequently harmonic balance analysis combined with optimization techniques are used to synthesize a self-oscillating mixer with oscillating frequency of 6.525 GHz. The conversion gain is optimized for the mixing product involving the second harmonic of the oscillator and the RF input signal, generating an IF frequency of 3.15 GHz to have conversion gain in at least 600 MHz bandwidth around the IF frequency. The active antenna circuit finds application in compact receiver front-end modules as well as active self-oscillating mixer arrays.",
"title": ""
},
{
"docid": "neg:1840355_16",
"text": "Recent years have observed a significant progress in information retrieval and natural language processing with deep learning technologies being successfully applied into almost all of their major tasks. The key to the success of deep learning is its capability of accurately learning distributed representations (vector representations or structured arrangement of them) of natural language expressions such as sentences, and effectively utilizing the representations in the tasks. This tutorial aims at summarizing and introducing the results of recent research on deep learning for information retrieval, in order to stimulate and foster more significant research and development work on the topic in the future.\n The tutorial mainly consists of three parts. In the first part, we introduce the fundamental techniques of deep learning for natural language processing and information retrieval, such as word embedding, recurrent neural networks, and convolutional neural networks. In the second part, we explain how deep learning, particularly representation learning techniques, can be utilized in fundamental NLP and IR problems, including matching, translation, classification, and structured prediction. In the third part, we describe how deep learning can be used in specific application tasks in details. The tasks are search, question answering (from either documents, database, or knowledge base), and image retrieval.",
"title": ""
},
{
"docid": "neg:1840355_17",
"text": "The paper describes our approach for SemEval-2018 Task 1: Affect Detection in Tweets. We perform experiments with manually compelled sentiment lexicons and word embeddings. We test their performance on twitter affect detection task to determine which features produce the most informative representation of a sentence. We demonstrate that general-purpose word embeddings produces more informative sentence representation than lexicon features. However, combining lexicon features with embeddings yields higher performance than embeddings alone.",
"title": ""
},
{
"docid": "neg:1840355_18",
"text": "The energy consumption of DRAM is a critical concern in modern computing systems. Improvements in manufacturing process technology have allowed DRAM vendors to lower the DRAM supply voltage conservatively, which reduces some of the DRAM energy consumption. We would like to reduce the DRAM supply voltage more aggressively, to further reduce energy. Aggressive supply voltage reduction requires a thorough understanding of the effect voltage scaling has on DRAM access latency and DRAM reliability.\n In this paper, we take a comprehensive approach to understanding and exploiting the latency and reliability characteristics of modern DRAM when the supply voltage is lowered below the nominal voltage level specified by DRAM standards. Using an FPGA-based testing platform, we perform an experimental study of 124 real DDR3L (low-voltage) DRAM chips manufactured recently by three major DRAM vendors. We find that reducing the supply voltage below a certain point introduces bit errors in the data, and we comprehensively characterize the behavior of these errors. We discover that these errors can be avoided by increasing the latency of three major DRAM operations (activation, restoration, and precharge). We perform detailed DRAM circuit simulations to validate and explain our experimental findings. We also characterize the various relationships between reduced supply voltage and error locations, stored data patterns, DRAM temperature, and data retention.\n Based on our observations, we propose a new DRAM energy reduction mechanism, called Voltron. The key idea of Voltron is to use a performance model to determine by how much we can reduce the supply voltage without introducing errors and without exceeding a user-specified threshold for performance loss. Our evaluations show that Voltron reduces the average DRAM and system energy consumption by 10.5% and 7.3%, respectively, while limiting the average system performance loss to only 1.8%, for a variety of memory-intensive quad-core workloads. We also show that Voltron significantly outperforms prior dynamic voltage and frequency scaling mechanisms for DRAM.",
"title": ""
}
] |
1840356 | A novel Bayesian network-based fault prognostic method for semiconductor manufacturing process | [
{
"docid": "pos:1840356_0",
"text": "Pattern recognition encompasses two fundamental tasks: description and classification. Given an object to analyze, a pattern recognition system first generates a description of it (i.e., the pattern) and then classifies the object based on that description (i.e., the recognition). Two general approaches for implementing pattern recognition systems, statistical and structural, employ different techniques for description and classification. Statistical approaches to pattern recognition use decision-theoretic concepts to discriminate among objects belonging to different groups based upon their quantitative features. Structural approaches to pattern recognition use syntactic grammars to discriminate among objects belonging to different groups based upon the arrangement of their morphological (i.e., shape-based or structural) features. Hybrid approaches to pattern recognition combine aspects of both statistical and structural pattern recognition. Structural pattern recognition systems are difficult to apply to new domains because implementation of both the description and classification tasks requires domain knowledge. Knowledge acquisition techniques necessary to obtain domain knowledge from experts are tedious and often fail to produce a complete and accurate knowledge base. Consequently, applications of structural pattern recognition have been primarily restricted to domains in which the set of useful morphological features has been established in the literature (e.g., speech recognition and character recognition) and the syntactic grammars can be composed by hand (e.g., electrocardiogram diagnosis). To overcome this limitation, a domain-independent approach to structural pattern recognition is needed that is capable of extracting morphological features and performing classification without relying on domain knowledge. A hybrid system that employs a statistical classification technique to perform discrimination based on structural features is a natural solution. While a statistical classifier is inherently domain independent, the domain knowledge necessary to support the description task can be eliminated with a set of generally-useful morphological features. Such a set of morphological features is suggested as the foundation for the development of a suite of structure detectors to perform generalized feature extraction for structural pattern recognition in time-series data. The ability of the suite of structure detectors to generate features useful for structural pattern recognition is evaluated by comparing the classification accuracies achieved when using the structure detectors versus commonly-used statistical feature extractors. Two real-world databases with markedly different characteristics and established ground truth serve as sources of data for the evaluation. The classification accuracies achieved using the features extracted by the structure detectors were consistently as good as or better than the classification accuracies achieved when using the features generated by the statistical feature extractors, thus demonstrating that the suite of structure detectors effectively performs generalized feature extraction for structural pattern recognition in time-series data.",
"title": ""
}
] | [
{
"docid": "neg:1840356_0",
"text": "The biological properties of dietary polyphenols are greatly dependent on their bioavailability that, in turn, is largely influenced by their degree of polymerization. The gut microbiota play a key role in modulating the production, bioavailability and, thus, the biological activities of phenolic metabolites, particularly after the intake of food containing high-molecular-weight polyphenols. In addition, evidence is emerging on the activity of dietary polyphenols on the modulation of the colonic microbial population composition or activity. However, although the great range of health-promoting activities of dietary polyphenols has been widely investigated, their effect on the modulation of the gut ecology and the two-way relationship \"polyphenols ↔ microbiota\" are still poorly understood. Only a few studies have examined the impact of dietary polyphenols on the human gut microbiota, and most were focused on single polyphenol molecules and selected bacterial populations. This review focuses on the reciprocal interactions between the gut microbiota and polyphenols, the mechanisms of action and the consequences of these interactions on human health.",
"title": ""
},
{
"docid": "neg:1840356_1",
"text": "Many studies of digital communication, in particular of Twitter, use natural language processing (NLP) to find topics, assess sentiment, and describe user behaviour. In finding topics often the relationships between users who participate in the topic are neglected. We propose a novel method of describing and classifying online conversations using only the structure of the underlying temporal network and not the content of individual messages. This method utilises all available information in the temporal network (no aggregation), combining both topological and temporal structure using temporal motifs and inter-event times. This allows us create an embedding of the temporal network in order to describe the behaviour of individuals and collectives over time and examine the structure of conversation over multiple timescales.",
"title": ""
},
{
"docid": "neg:1840356_2",
"text": "Surgery and other invasive therapies are complex interventions, the assessment of which is challenged by factors that depend on operator, team, and setting, such as learning curves, quality variations, and perception of equipoise. We propose recommendations for the assessment of surgery based on a five-stage description of the surgical development process. We also encourage the widespread use of prospective databases and registries. Reports of new techniques should be registered as a professional duty, anonymously if necessary when outcomes are adverse. Case series studies should be replaced by prospective development studies for early technical modifications and by prospective research databases for later pre-trial evaluation. Protocols for these studies should be registered publicly. Statistical process control techniques can be useful in both early and late assessment. Randomised trials should be used whenever possible to investigate efficacy, but adequate pre-trial data are essential to allow power calculations, clarify the definition and indications of the intervention, and develop quality measures. Difficulties in doing randomised clinical trials should be addressed by measures to evaluate learning curves and alleviate equipoise problems. Alternative prospective designs, such as interrupted time series studies, should be used when randomised trials are not feasible. Established procedures should be monitored with prospective databases to analyse outcome variations and to identify late and rare events. Achievement of improved design, conduct, and reporting of surgical research will need concerted action by editors, funders of health care and research, regulatory bodies, and professional societies.",
"title": ""
},
{
"docid": "neg:1840356_3",
"text": "Significant research and development of algorithms in intelligent transportation has grabbed more attention in recent years. An automated, fast, accurate and robust vehicle plate recognition system has become need for traffic control and law enforcement of traffic regulations; and the solution is ANPR. This paper is dedicated on an improved technique of OCR based license plate recognition using neural network trained dataset of object features. A blended algorithm for recognition of license plate is proposed and is compared with existing methods for improve accuracy. The whole system can be categorized under three major modules, namely License Plate Localization, Plate Character Segmentation, and Plate Character Recognition. The system is simulated on 300 national and international motor vehicle LP images and results obtained justifies the main requirement.",
"title": ""
},
{
"docid": "neg:1840356_4",
"text": "Background: A considerable amount of research has been done to explore the key factors that affect learning a second language. Among these factors are students' learning strategies, motivation, attitude, learning environment, and the age at which they are exposed to a second language. Such issues have not been explored extensively in Saudi Arabia, even though English is used as the medium of teaching and learning for medical studies. Objectives: First, to explore the learning strategies used to study English as a second language. Second, to identify students' motivations for studying English. Third, to assess students' perceptions toward their learning environment. Fourth, to investigate students' attitude towards the speakers of English. Fifth, to explore any possible relationships among English language proficiency grades of students and the following: demographic variables, grades for their general medical courses, learning strategies, motivational variables, attitudes, and environmental variables. It is also the aim of this study to explore the relationships between English language learning strategies and motivational variables. Methods: A cross-sectional descriptive study was conducted in May, 2008. The Attitudinal Measure of Learners of English as a Second Language (AMLESL) questionnaire was used to explore the learning strategies used by students to study English as a second language, their motivation to study English, their attitude toward English speaking people, and perceptions toward the environment where the learning is taking place. Results: A total of 110 out of 120 questionnaires were completed by Applied Medical Science undergraduates, yielding a response rate of 92%. Students utilize all types of learning strategies. Students were motivated 'integratively' and 'instrumentally'. There were significant correlations between the achievement in English and performance in general medical courses, learning strategies, motivation, age, and the formal level at which the student started to learn English. Conclusion: The study showed that students utilize all types of language learning strategies. However, cognitive strategies were the most frequently utilized. Students considered their learning environment as more positive than negative. Students were happy with their teacher, and with their English courses. Students held a positive attitude toward English speaking people. Achievement in English was associated positively with performance in the general medical courses, motivation, and social learning strategies. Relationship between English Language, Learning Strategies, Attitudes, Motivation, and Students’ Academic Achievement",
"title": ""
},
{
"docid": "neg:1840356_5",
"text": "Detecting representative frames in videos based on human actions is quite challenging because of the combined factors of human pose in action and the background. This paper addresses this problem and formulates the key frame detection as one of finding the video frames that optimally maximally contribute to differentiating the underlying action category from all other categories. To this end, we introduce a deep two-stream ConvNet for key frame detection in videos that learns to directly predict the location of key frames. Our key idea is to automatically generate labeled data for the CNN learning using a supervised linear discriminant method. While the training data is generated taking many different human action videos into account, the trained CNN can predict the importance of frames from a single video. We specify a new ConvNet framework, consisting of a summarizer and discriminator. The summarizer is a two-stream ConvNet aimed at, first, capturing the appearance and motion features of video frames, and then encoding the obtained appearance and motion features for video representation. The discriminator is a fitting function aimed at distinguishing between the key frames and others in the video. We conduct experiments on a challenging human action dataset UCF101 and show that our method can detect key frames with high accuracy.",
"title": ""
},
{
"docid": "neg:1840356_6",
"text": "Article history: Received 4 February 2009 Received in revised form 14 April 2010 Accepted 15 June 2010 Available online xxxx",
"title": ""
},
{
"docid": "neg:1840356_7",
"text": "This paper reports on the design, implementation and characterization of wafer-level packaging technology for a wide range of microelectromechanical system (MEMS) devices. The encapsulation technique is based on thermal decomposition of a sacrificial polymer through a polymer overcoat to form a released thin-film organic membrane with scalable height on top of the active part of the MEMS. Hermiticity and vacuum operation are obtained by thin-film deposition of a metal such as chromium, aluminum or gold. The thickness of the overcoat can be optimized according to the size of the device and differential pressure to package a wide variety of MEMS such as resonators, accelerometers and gyroscopes. The key performance metrics of several batches of packaged devices do not degrade as a result of residues from the sacrificial polymer. A Q factor of 5000 at a resonant frequency of 2.5 MHz for the packaged resonator, and a static sensitivity of 2 pF g−1 for the packaged accelerometer were obtained. Cavities as small as 0.000 15 mm3 for the resonator and as large as 1 mm3 for the accelerometer have been made by this method. (Some figures in this article are in colour only in the electronic version)",
"title": ""
},
{
"docid": "neg:1840356_8",
"text": "As interest in cryptocurrency has increased, problems have arisen with Proof-of-Work (PoW) and Proof-of-Stake (PoS) methods, the most representative methods of acquiring cryptocurrency in a blockchain. The PoW method is uneconomical and the PoS method can be easily monopolized by a few people. To cope with this issue, this paper introduces a Proof-of-Probability (PoP) method. The PoP is a method where each node sorts the encrypted actual hash as well as a number of fake hash, and then the first node to decrypt actual hash creates block. In addition, a wait time is used when decrypting one hash and then decrypting the next hash for restricting the excessive computing power competition. In addition, the centralization by validaters with many stakes can be avoided in the proposed PoP method.",
"title": ""
},
{
"docid": "neg:1840356_9",
"text": "We describe the implementation and use of a reverse compiler from Analog Devices 21xx assembler source to ANSI-C (with optional use of the language extensions for the TMS320C6x processors) which has been used to port substantial applications. The main results of this work are that reverse compilation is feasible and that some of the features that make small DSP's hard to compile for actually assist the process of reverse compilation compared to that of a general purpose processor. We present statistics on the occurrence of non-statically visible features of hand-written assembler code and look at the quality of the code generated by an optimising ANSI-C compiler from our reverse compiled source and compare it to code generated from conventionally authored ANSI-C programs.",
"title": ""
},
{
"docid": "neg:1840356_10",
"text": "Information extraction from microblog posts is an important task, as today microblogs capture an unprecedented amount of information and provide a view into the pulse of the world. As the core component of information extraction, we consider the task of Twitter entity linking in this paper. In the current entity linking literature, mention detection and entity disambiguation are frequently cast as equally important but distinct problems. However, in our task, we find that mention detection is often the performance bottleneck. The reason is that messages on micro-blogs are short, noisy and informal texts with little context, and often contain phrases with ambiguous meanings. To rigorously address the Twitter entity linking problem, we propose a structural SVM algorithm for entity linking that jointly optimizes mention detection and entity disambiguation as a single end-to-end task. By combining structural learning and a variety of firstorder, second-order, and context-sensitive features, our system is able to outperform existing state-of-the art entity linking systems by 15% F1.",
"title": ""
},
{
"docid": "neg:1840356_11",
"text": "Historically, social scientists have sought out explanations of human and social phenomena that provide interpretable causal mechanisms, while often ignoring their predictive accuracy. We argue that the increasingly computational nature of social science is beginning to reverse this traditional bias against prediction; however, it has also highlighted three important issues that require resolution. First, current practices for evaluating predictions must be better standardized. Second, theoretical limits to predictive accuracy in complex social systems must be better characterized, thereby setting expectations for what can be predicted or explained. Third, predictive accuracy and interpretability must be recognized as complements, not substitutes, when evaluating explanations. Resolving these three issues will lead to better, more replicable, and more useful social science.",
"title": ""
},
{
"docid": "neg:1840356_12",
"text": "In different applications like Complex document image processing, Advertisement and Intelligent transportation logo recognition is an important issue. Logo Recognition is an essential sub process although there are many approaches to study logos in these fields. In this paper a robust method for recognition of a logo is proposed, which involves K-nearest neighbors distance classifier and Support Vector Machine classifier to evaluate the similarity between images under test and trained images. For test images eight set of logo image with a rotation angle of 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315° are considered. A Dual Tree Complex Wavelet Transform features were used for determining features. Final result is obtained by measuring the similarity obtained from the feature vectors of the trained image and image under test. Total of 31 classes of logo images of different organizations are considered for experimental results. An accuracy of 87.49% is obtained using KNN classifier and 92.33% from SVM classifier.",
"title": ""
},
{
"docid": "neg:1840356_13",
"text": "We describe a web browser fingerprinting technique based on measuring the onscreen dimensions of font glyphs. Font rendering in web browsers is affected by many factors—browser version, what fonts are installed, and hinting and antialiasing settings, to name a few— that are sources of fingerprintable variation in end-user systems. We show that even the relatively crude tool of measuring glyph bounding boxes can yield a strong fingerprint, and is a threat to users’ privacy. Through a user experiment involving over 1,000 web browsers and an exhaustive survey of the allocated space of Unicode, we find that font metrics are more diverse than User-Agent strings, uniquely identifying 34% of participants, and putting others into smaller anonymity sets. Fingerprinting is easy and takes only milliseconds. We show that of the over 125,000 code points examined, it suffices to test only 43 in order to account for all the variation seen in our experiment. Font metrics, being orthogonal to many other fingerprinting techniques, can augment and sharpen those other techniques. We seek ways for privacy-oriented web browsers to reduce the effectiveness of font metric–based fingerprinting, without unduly harming usability. As part of the same user experiment of 1,000 web browsers, we find that whitelisting a set of standard font files has the potential to more than quadruple the size of anonymity sets on average, and reduce the fraction of users with a unique font fingerprint below 10%. We discuss other potential countermeasures.",
"title": ""
},
{
"docid": "neg:1840356_14",
"text": "BACKGROUND\nA new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference form murmurs.\n\n\nMETHOD\nEqual number of cardiac cycles were extracted from heart sounds with different heart rates using information from envelopes of autocorrelation functions without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using auto-correlation of envelope signals, features extraction using discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors.\n\n\nRESULT\nThe proposed method was tested on a set of heart sounds obtained from several on-line databases and recorded with an electronic stethoscope. Geometric mean was used as performance index. Average classification performance using ten-fold cross-validation was 0.92 for noise free case, 0.90 under white noise with 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise up to 0.3 s duration.\n\n\nCONCLUSION\nThe proposed method showed promising results and high noise robustness to a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by different sources of heart sounds in the current training set, and to concretely validate the method. Further work include building a new training set recorded from actual patients, then further evaluate the method based on this new training set.",
"title": ""
},
{
"docid": "neg:1840356_15",
"text": "Progress in signal processing continues to enable welcome advances in high-frequency (HF) radio performance and efficiency. The latest data waveforms use channels wider than 3 kHz to boost data throughput and robustness. This has driven the need for a more capable Automatic Link Establishment (ALE) system that links faster and adapts the wideband HF (WBHF) waveform to efficiently use available spectrum. In this paper, we investigate the possibility and advantages of using various non-scanning ALE techniques with the new wideband ALE (WALE) to further improve spectrum awareness and linking speed.",
"title": ""
},
{
"docid": "neg:1840356_16",
"text": "In this paper, we investigate safe and efficient map-building strategies for a mobile robot with imperfect control and sensing. In the implementation, a robot equipped with a range sensor builds a polygonal map (layout) of a previously unknown indoor environment. The robot explores the environment and builds the map concurrently by patching together the local models acquired by the sensor into a global map. A well-studied and related problem is the simultaneous localization and mapping (SLAM) problem, where the goal is to integrate the information collected during navigation into the most accurate map possible. However, SLAM does not address the sensorplacement portion of the map-building task. That is, given the map built so far, where should the robot go next? This is the main question addressed in this paper. Concretely, an algorithm is proposed to guide the robot through a series of “good” positions, where “good” refers to the expected amount and quality of the information that will be revealed at each new location. This is similar to the nextbest-view (NBV) problem studied in computer vision and graphics. However, in mobile robotics the problem is complicated by several issues, two of which are particularly crucial. One is to achieve safe navigation despite an incomplete knowledge of the environment and sensor limitations (e.g., in range and incidence). The other issue is the need to ensure sufficient overlap between each new local model and the current map, in order to allow registration of successive views under positioning uncertainties inherent to mobile robots. To address both issues in a coherent framework, in this paper we introduce the concept of a safe region, defined as the largest region that is guaranteed to be free of obstacles given the sensor readings made so far. The construction of a safe region takes sensor limitations into account. In this paper we also describe an NBV algorithm that uses the safe-region concept to select the next robot position at each step. The International Journal of Robotics Research Vol. 21, No. 10–11, October-November 2002, pp. 829-848, ©2002 Sage Publications The new position is chosen within the safe region in order to maximize the expected gain of information under the constraint that the local model at this new position must have a minimal overlap with the current global map. In the future, NBV and SLAM algorithms should reinforce each other. While a SLAM algorithm builds a map by making the best use of the available sensory data, an NBV algorithm, such as that proposed here, guides the navigation of the robot through positions selected to provide the best sensory inputs. KEY WORDS—next-best view, safe region, online exploration, incidence constraints, map building",
"title": ""
},
{
"docid": "neg:1840356_17",
"text": "This paper describes several optimization techniques used to create an adequate route network graph for autonomous cars as a map reference for driving on German autobahn or similar highway tracks. We have taken the Route Network Definition File Format (RNDF) specified by DARPA and identified multiple flaws of the RNDF for creating digital maps for autonomous vehicles. Thus, we introduce various enhancements to it to form a digital map graph called RND-FGraph, which is well suited to map almost any urban transportation infrastructure. We will also outline and show results of fast optimizations to reduce the graph size. The RNDFGraph has been used for path-planning and trajectory evaluation by the behavior module of our two autonomous cars “Spirit of Berlin” and “MadeInGermany”. We have especially tuned the graph to map structured high speed environments such as autobahns where we have tested autonomously hundreds of kilometers under real traffic conditions.",
"title": ""
},
{
"docid": "neg:1840356_18",
"text": "In this paper, a new task scheduling algorithm called RASA, considering the distribution and scalability characteristics of grid resources, is proposed. The algorithm is built through a comprehensive study and analysis of two well known task scheduling algorithms, Min-min and Max-min. RASA uses the advantages of the both algorithms and covers their disadvantages. To achieve this, RASA firstly estimates the completion time of the tasks on each of the available grid resources and then applies the Max-min and Min-min algorithms, alternatively. In this respect, RASA uses the Min-min strategy to execute small tasks before the large ones and applies the Max-min strategy to avoid delays in the execution of large tasks and to support concurrency in the execution of large and small tasks. Our experimental results of applying RASA on scheduling independent tasks within grid environments demonstrate the applicability of RASA in achieving schedules with comparatively lower makespan.",
"title": ""
},
{
"docid": "neg:1840356_19",
"text": "In this paper, the design and development of a portable classroom attendance system based on fingerprint biometric is presented. Among the salient aims of implementing a biometric feature into a portable attendance system is security and portability. The circuit of this device is strategically constructed to have an independent source of energy to be operated, as well as its miniature design which made it more efficient in term of its portable capability. Rather than recording the attendance in writing or queuing in front of class equipped with fixed fingerprint or smart card reader. This paper introduces a portable fingerprint based biometric attendance system which addresses the weaknesses of the existing paper based attendance method or long time queuing. In addition, our biometric fingerprint based system is encrypted which preserves data integrity.",
"title": ""
}
] |
1840357 | Double Embeddings and CNN-based Sequence Labeling for Aspect Extraction | [
{
"docid": "pos:1840357_0",
"text": "We propose a novel LSTM-based deep multi-task learning framework for aspect term extraction from user review sentences. Two LSTMs equipped with extended memories and neural memory operations are designed for jointly handling the extraction tasks of aspects and opinions via memory interactions. Sentimental sentence constraint is also added for more accurate prediction via another LSTM. Experiment results over two benchmark datasets demonstrate the effectiveness of our framework.",
"title": ""
}
] | [
{
"docid": "neg:1840357_0",
"text": "Agile software development methodologies have been greeted with enthusiasm by many software developers, yet their widespread adoption has also resulted in closer examination of their strengths and weaknesses. While analyses and evaluations abound, the need still remains for an objective and systematic appraisal of Agile processes specifically aimed at defining strategies for their improvement. We provide a review of the strengths and weaknesses identified in Agile processes, based on which a strengths- weaknesses-opportunities-threats (SWOT) analysis of the processes is performed. We suggest this type of analysis as a useful tool for highlighting and addressing the problem issues in Agile processes, since the results can be used as improvement strategies.",
"title": ""
},
{
"docid": "neg:1840357_1",
"text": "Semi-automatic parking system is a driver convenience system automating steering control required during parking operation. This paper proposes novel monocular-vision based target parking-slot recognition by recognizing parking-slot markings when driver designates a seed-point inside the target parking-slot with touch screen. Proposed method compensates the distortion of fisheye lens and constructs a bird’s eye view image using homography. Because adjacent vehicles are projected along the outward direction from camera in the bird’s eye view image, if marking line-segment distinguishing parking-slots from roadway and front-ends of marking linesegments dividing parking-slots are observed, proposed method successfully recognizes the target parking-slot marking. Directional intensity gradient, utilizing the width of marking line-segment and the direction of seed-point with respect to camera position as a prior knowledge, can detect marking linesegments irrespective of noise and illumination variation. Making efficient use of the structure of parking-slot markings in the bird’s eye view image, proposed method simply recognizes the target parking-slot marking. It is validated by experiments that proposed method can successfully recognize target parkingslot under various situations and illumination conditions.",
"title": ""
},
{
"docid": "neg:1840357_2",
"text": "FinFET devices have been proposed as a promising substitute for conventional bulk CMOS-based devices at the nanoscale due to their extraordinary properties such as improved channel controllability, a high on/off current ratio, reduced short-channel effects, and relative immunity to gate line-edge roughness. This brief builds standard cell libraries for the advanced 7-nm FinFET technology, supporting multiple threshold voltages and supply voltages. The circuit synthesis results of various combinational and sequential circuits based on the presented 7-nm FinFET standard cell libraries forecast 10× and 1000× energy reductions on average in a superthreshold regime and 16× and 3000× energy reductions on average in a near-threshold regime as compared with the results of the 14-nm and 45-nm bulk CMOS technology nodes, respectively.",
"title": ""
},
{
"docid": "neg:1840357_3",
"text": "Event extraction has been well studied for more than two decades, through both the lens of document-level and sentence-level event extraction. However, event extraction methods to date do not yet offer a satisfactory solution to providing concise, structured, document-level summaries of events in news articles. Prior work on document-level event extraction methods have focused on highly specific domains, often with great reliance on handcrafted rules. Such approaches do not generalize well to new domains. In contrast, sentence-level event extraction methods have applied to a much wider variety of domains, but generate output at such fine-grained details that they cannot offer good document-level summaries of events. In this thesis, we propose a new framework for extracting document-level event summaries called macro-events, unifying together aspects of both information extraction and text summarization. The goal of this work is to extract concise, structured representations of documents that can clearly outline the main event of interest and all the necessary argument fillers to describe the event. Unlike work in abstractive and extractive summarization, we seek to create template-based, structured summaries, rather than plain text summaries. We propose three novel methods to address the macro-event extraction task. First, we introduce a structured prediction model based on the Learning to Search framework for jointly learning argument fillers both across and within event argument slots. Second, we propose a multi-layer neural network that is trained directly on macro-event annotated data. Finally, we propose a deep learning method that treats the problem as machine comprehension, which does not require training with any on-domain macro-event labeled data. Our experimental results on a variety of domains show that such algorithms can achieve stronger performance on this task compared to existing baseline approaches. On average across all datasets, neural networks can achieve a 1.76% and 3.96% improvement on micro-averaged and macro-averaged F1 respectively over baseline approaches, while Learning to Search achieves a 3.87% and 5.10% improvement over baseline approaches on the same metrics. Furthermore, under scenarios of limited training data, we find that machine comprehension models can offer very strong performance compared to directly supervised algorithms, while requiring very little human effort to adapt to new domains.",
"title": ""
},
{
"docid": "neg:1840357_4",
"text": "The dominant neural architectures in question answer retrieval are based on recurrent or convolutional encoders configured with complex word matching layers. Given that recent architectural innovations are mostly new word interaction layers or attention-based matching mechanisms, it seems to be a well-established fact that these components are mandatory for good performance. Unfortunately, the memory and computation cost incurred by these complex mechanisms are undesirable for practical applications. As such, this paper tackles the question of whether it is possible to achieve competitive performance with simple neural architectures. We propose a simple but novel deep learning architecture for fast and efficient question-answer ranking and retrieval. More specifically, our proposed model, HyperQA, is a parameter efficient neural network that outperforms other parameter intensive models such as Attentive Pooling BiLSTMs and Multi-Perspective CNNs on multiple QA benchmarks. The novelty behind HyperQA is a pairwise ranking objective that models the relationship between question and answer embeddings in Hyperbolic space instead of Euclidean space. This empowers our model with a self-organizing ability and enables automatic discovery of latent hierarchies while learning embeddings of questions and answers. Our model requires no feature engineering, no similarity matrix matching, no complicated attention mechanisms nor over-parameterized layers and yet outperforms and remains competitive to many models that have these functionalities on multiple benchmarks.",
"title": ""
},
{
"docid": "neg:1840357_5",
"text": "This paper presents the artificial neural network approach namely Back propagation network (BPNs) and probabilistic neural network (PNN). It is used to classify the type of tumor in MRI images of different patients with Astrocytoma type of brain tumor. The image processing techniques have been developed for detection of the tumor in the MRI images. Gray Level Co-occurrence Matrix (GLCM) is used to achieve the feature extraction. The whole system worked in two modes firstly Training/Learning mode and secondly Testing/Recognition mode.",
"title": ""
},
{
"docid": "neg:1840357_6",
"text": "This paper presents security of Internet of things. In the Internet of Things vision, every physical object has a virtual component that can produce and consume services Such extreme interconnection will bring unprecedented convenience and economy, but it will also require novel approaches to ensure its safe and ethical use. The Internet and its users are already under continual attack, and a growing economy-replete with business models that undermine the Internet's ethical use-is fully focused on exploiting the current version's foundational weaknesses.",
"title": ""
},
{
"docid": "neg:1840357_7",
"text": "—Social media and Social Network Analysis (SNA) acquired a huge popularity and represent one of the most important social and computer science phenomena of recent years. One of the most studied problems in this research area is influence and information propagation. The aim of this paper is to analyze the information diffusion process and predict the influence (represented by the rate of infected nodes at the end of the diffusion process) of an initial set of nodes in two networks: Flickr user's contacts and YouTube videos users commenting these videos. These networks are dissimilar in their structure (size, type, diameter, density, components), and the type of the relationships (explicit relationship represented by the contacts links, and implicit relationship created by commenting on videos), they are extracted using NodeXL tool. Three models are used for modeling the dissemination process: Linear Threshold Model (LTM), Independent Cascade Model (ICM) and an extension of this last called Weighted Cascade Model (WCM). Networks metrics and visualization were manipulated by NodeXL as well. Experiments results show that the structure of the network affect the diffusion process directly. Unlike results given in the blog world networks, the information can spread farther through explicit connections than through implicit relations.",
"title": ""
},
{
"docid": "neg:1840357_8",
"text": "We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on the CoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10% relative error reduction over the previous state of the art. Extensive empirical analysis of these gains show that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) that there is still room for syntactic parsers to improve these results.",
"title": ""
},
{
"docid": "neg:1840357_9",
"text": "A hard real-time system is usually subject to stringent reliability and timing constraints since failure to produce correct results in a timely manner may lead to a disaster. One way to avoid missing deadlines is to trade the quality of computation results for timeliness, and software fault-tolerance is often achieved with the use of redundant programs. A deadline mechanism which combines these two methods is proposed to provide software faulttolerance in hard real-time periodic task systems. Specifically, we consider the problem of scheduling a set of realtime periodic tasks each of which has two versions:primary and alternate. The primary version contains more functions (thus more complex) and produces good quality results but its correctness is more difficult to verify because of its high level of complexity and resource usage. By contrast, the alternate version contains only the minimum required functions (thus simpler) and produces less precise but acceptable results, and its correctness is easy to verify. We propose a scheduling algorithm which (i) guarantees either the primary or alternate version of each critical task to be completed in time and (ii) attempts to complete as many primaries as possible. Our basic algorithm uses a fixed priority-driven preemptive scheduling scheme to pre-allocate time intervals to the alternates, and at run-time, attempts to execute primaries first. An alternate will be executed only (1) if its primary fails due to lack of time or manifestation of bugs, or (2) when the latest time to start execution of the alternate without missing the corresponding task deadline is reached. This algorithm is shown to be effective and easy to implement. This algorithm is enhanced further to prevent early failures in executing primaries from triggering failures in the subsequent job executions, thus improving efficiency of processor usage.",
"title": ""
},
{
"docid": "neg:1840357_10",
"text": "Zab is a crash-recovery atomic broadcast algorithm we designed for the ZooKeeper coordination service. ZooKeeper implements a primary-backup scheme in which a primary process executes clients operations and uses Zab to propagate the corresponding incremental state changes to backup processes1. Due the dependence of an incremental state change on the sequence of changes previously generated, Zab must guarantee that if it delivers a given state change, then all other changes it depends upon must be delivered first. Since primaries may crash, Zab must satisfy this requirement despite crashes of primaries.",
"title": ""
},
{
"docid": "neg:1840357_11",
"text": "Discovering the author's interest over time from documents has important applications in recommendation systems, authorship identification and opinion extraction. In this paper, we propose an interest drift model (IDM), which monitors the evolution of author interests in time-stamped documents. The model further uses the discovered author interest information to help finding better topics. Unlike traditional topic models, our model is sensitive to the ordering of words, thus it extracts more information from the semantic meaning of the context. The experiment results show that the IDM model learns better topics than state-of-the-art topic models.",
"title": ""
},
{
"docid": "neg:1840357_12",
"text": "Advanced sensing and measurement techniques are key technologies to realize a smart grid. The giant magnetoresistance (GMR) effect has revolutionized the fields of data storage and magnetic measurement. In this work, a design of a GMR current sensor based on a commercial analog GMR chip for applications in a smart grid is presented and discussed. Static, dynamic and thermal properties of the sensor were characterized. The characterizations showed that in the operation range from 0 to ±5 A, the sensor had a sensitivity of 28 mV·A(-1), linearity of 99.97%, maximum deviation of 2.717%, frequency response of −1.5 dB at 10 kHz current measurement, and maximum change of the amplitude response of 0.0335%·°C(-1) with thermal compensation. In the distributed real-time measurement and monitoring of a smart grid system, the GMR current sensor shows excellent performance and is cost effective, making it suitable for applications such as steady-state and transient-state monitoring. With the advantages of having a high sensitivity, high linearity, small volume, low cost, and simple structure, the GMR current sensor is promising for the measurement and monitoring of smart grids.",
"title": ""
},
{
"docid": "neg:1840357_13",
"text": "Literary genres are commonly viewed as being defined in terms of content and style. In this paper, we focus on one particular type of content feature, namely lexical expressions of emotion, and investigate the hypothesis that emotion-related information correlates with particular genres. Using genre classification as a testbed, we compare a model that computes lexiconbased emotion scores globally for complete stories with a model that tracks emotion arcs through stories on a subset of Project Gutenberg with five genres. Our main findings are: (a), the global emotion model is competitive with a largevocabulary bag-of-words genre classifier (80 % F1); (b), the emotion arc model shows a lower performance (59 % F1) but shows complementary behavior to the global model, as indicated by a very good performance of an oracle model (94 % F1) and an improved performance of an ensemble model (84 % F1); (c), genres differ in the extent to which stories follow the same emotional arcs, with particularly uniform behavior for anger (mystery) and fear (adventures, romance, humor, science fiction).",
"title": ""
},
{
"docid": "neg:1840357_14",
"text": "In a previous article, we presented a systematic computational study of the extraction of semantic representations from the word-word co-occurrence statistics of large text corpora. The conclusion was that semantic vectors of pointwise mutual information values from very small co-occurrence windows, together with a cosine distance measure, consistently resulted in the best representations across a range of psychologically relevant semantic tasks. This article extends that study by investigating the use of three further factors--namely, the application of stop-lists, word stemming, and dimensionality reduction using singular value decomposition (SVD)--that have been used to provide improved performance elsewhere. It also introduces an additional semantic task and explores the advantages of using a much larger corpus. This leads to the discovery and analysis of improved SVD-based methods for generating semantic representations (that provide new state-of-the-art performance on a standard TOEFL task) and the identification and discussion of problems and misleading results that can arise without a full systematic study.",
"title": ""
},
{
"docid": "neg:1840357_15",
"text": "As the capabilities of artificial intelligence (AI) systems improve, it becomes important to constrain their actions to ensure their behaviour remains beneficial to humanity. A variety of ethical, legal and safety-based frameworks have been proposed as a basis for designing these constraints. Despite their variations, these frameworks share the common characteristic that decision-making must consider multiple potentially conflicting factors. We demonstrate that these alignment frameworks can be represented as utility functions, but that the widely used Maximum Expected Utility (MEU) paradigm provides insufficient support for such multiobjective decision-making. We show that a Multiobjective Maximum Expected Utility paradigm based on the combination of vector utilities and non-linear action–selection can overcome many of the issues which limit MEU’s effectiveness in implementing aligned AI. We examine existing approaches to multiobjective AI, and identify how these can contribute to the development of human-aligned intelligent agents.",
"title": ""
},
{
"docid": "neg:1840357_16",
"text": "Although great progress has been made in automatic speech recognition, significant performance degradation still exists in noisy environments. Recently, very deep convolutional neural networks CNNs have been successfully applied to computer vision and speech recognition tasks. Based on our previous work on very deep CNNs, in this paper this architecture is further developed to improve recognition accuracy for noise robust speech recognition. In the proposed very deep CNN architecture, we study the best configuration for the sizes of filters, pooling, and input feature maps: the sizes of filters and poolings are reduced and dimensions of input features are extended to allow for adding more convolutional layers. Then the appropriate pooling, padding, and input feature map selection strategies are investigated and applied to the very deep CNN to make it more robust for speech recognition. In addition, an in-depth analysis of the architecture reveals key characteristics, such as compact model scale, fast convergence speed, and noise robustness. The proposed new model is evaluated on two tasks: Aurora4 task with multiple additive noise types and channel mismatch, and the AMI meeting transcription task with significant reverberation. Experiments on both tasks show that the proposed very deep CNNs can significantly reduce word error rate WER for noise robust speech recognition. The best architecture obtains a 10.0% relative reduction over the traditional CNN on AMI, competitive with the long short-term memory recurrent neural networks LSTM-RNN acoustic model. On Aurora4, even without feature enhancement, model adaptation, and sequence training, it achieves a WER of 8.81%, a 17.0% relative improvement over the LSTM-RNN. To our knowledge, this is the best published result on Aurora4.",
"title": ""
},
{
"docid": "neg:1840357_17",
"text": "Artificially structured metamaterials have enabled unprecedented flexibility in manipulating electromagnetic waves and producing new functionalities, including the cloak of invisibility based on coordinate transformation. Unlike other cloaking approaches4–6, which are typically limited to subwavelength objects, the transformation method allows the design of cloaking devices that render a macroscopic object invisible. In addition, the design is not sensitive to the object that is being cloaked. The first experimental demonstration of such a cloak at microwave frequencies was recently reported7. We note, however, that that design cannot be implemented for an optical cloak, which is certainly of particular interest because optical frequencies are where the word ‘invisibility’ is conventionally defined. Here we present the design of a non-magnetic cloak operating at optical frequencies. The principle and structure of the proposed cylindrical cloak are analysed, and the general recipe for the implementation of such a device is provided. The coordinate transformation used in the proposed nonmagnetic optical cloak of cylindrical geometry is similar to that in ref. 7, by which a cylindrical region r , b is compressed into a concentric cylindrical shell a , r , b as shown in Fig. 1a. This transformation results in the following requirements for anisotropic permittivity and permeability in the cloaking shell:",
"title": ""
},
{
"docid": "neg:1840357_18",
"text": "Sentiment analysis has been a major area of interest, for which the existence of highquality resources is crucial. In Arabic, there is a reasonable number of sentiment lexicons but with major deficiencies. The paper presents a large-scale Standard Arabic Sentiment Lexicon (SLSA) that is publicly available for free and avoids the deficiencies in the current resources. SLSA has the highest up-to-date reported coverage. The construction of SLSA is based on linking the lexicon of AraMorph with SentiWordNet along with a few heuristics and powerful back-off. SLSA shows a relative improvement of 37.8% over a state-of-theart lexicon when tested for accuracy. It also outperforms it by an absolute 3.5% of F1-score when tested for sentiment analysis.",
"title": ""
}
] |
1840358 | 3D Point Cloud Semantic Modelling: Integrated Framework for Indoor Spaces and Furniture | [
{
"docid": "pos:1840358_0",
"text": "Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new goals, and (2) data inefficiency, i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to be applied to real-world scenarios. In this paper, we address these two issues and apply our model to target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows better generalization. To address the second issue, we propose the AI2-THOR framework, which provides an environment with high-quality 3D scenes and a physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently. We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), (4) is end-to-end trainable and does not need feature engineering, feature matching between frames or 3D reconstruction of the environment.",
"title": ""
}
] | [
{
"docid": "neg:1840358_0",
"text": "This paper reports our recent result in designing a function for autonomous APs to estimate throughput and delay of its clients in 2.4GHz WiFi channels to support those APs' dynamic channel selection. Our function takes as inputs the traffic volume and strength of signals emitted from nearby interference APs as well as the target AP's traffic volume. By this function, the target AP can estimate throughput and delay of its clients without actually moving to each channel, it is just required to monitor IEEE802.11 MAC frames sent or received by the interference APs. The function is composed of an SVM-based classifier to estimate capacity saturation and a regression function to estimate both throughput and delay in case of saturation in the target channel. The training dataset for the machine learning is created by a highly-precise network simulator. We have conducted over 10,000 simulations to train the model, and evaluated using additional 2,000 simulation results. The result shows that the estimated throughput error is less than 10%.",
"title": ""
},
{
"docid": "neg:1840358_1",
"text": "A Software Defined Network (SDN) is a new network architecture that provides central control over the network. Although central control is the major advantage of SDN, it is also a single point of failure if it is made unreachable by a Distributed Denial of Service (DDoS) Attack. To mitigate this threat, this paper proposes to use the central control of SDN for attack detection and introduces a solution that is effective and lightweight in terms of the resources that it uses. More precisely, this paper shows how DDoS attacks can exhaust controller resources and provides a solution to detect such attacks based on the entropy variation of the destination IP address. This method is able to detect DDoS within the first five hundred packets of the attack traffic.",
"title": ""
},
{
"docid": "neg:1840358_2",
"text": "In this paper, we first introduce the RF performance of Globalfoundries 45RFSOI process. NFET Ft > 290GHz and Fmax >380GHz. Then we present several mm-Wave circuit block designs, i.e., Switch, Power Amplifier, and LNA, based on 45RFSOI process for 5G Front End Module (FEM) applications. For the SPDT switch, insertion loss (IL) < 1dB at 30GHz with 32dBm P1dB and > 25dBm Pmax. For the PA, with a 2.9V power supply, the PA achieves 13.1dB power gain and a saturated output power (Psat) of 16.2dBm with maximum power-added efficiency (PAE) of 41.5% at 24Ghz continuous-wave (CW). With 960Mb/s 64QAM signal, 22.5% average PAE, −29.6dB EVM, and −30.5dBc ACLR are achieved with 9.5dBm average output power.",
"title": ""
},
{
"docid": "neg:1840358_3",
"text": "We introduce the type theory ¿µv, a call-by-value variant of Parigot's ¿µ-calculus, as a Curry-Howard representation theory of classical propositional proofs. The associated rewrite system is Church-Rosser and strongly normalizing, and definitional equality of the type theory is consistent, compatible with cut, congruent and decidable. The attendant call-by-value programming language µPCFv is obtained from ¿µv by augmenting it by basic arithmetic, conditionals and fixpoints. We study the behavioural properties of µPCFv and show that, though simple, it is a very general language for functional computation with control: it can express all the main control constructs such as exceptions and first-class continuations. Proof-theoretically the dual ¿µv-constructs of naming and µ-abstraction witness the introduction and elimination rules of absurdity respectively. Computationally they give succinct expression to a kind of generic (forward) \"jump\" operator, which may be regarded as a unifying control construct for functional computation. Our goal is that ¿µv and µPCFv respectively should be to functional computation with first-class access to the flow of control what ¿-calculus and PCF respectively are to pure functional programming: ¿µv gives the logical basis via the Curry-Howard correspondence, and µPCFv is a prototypical language albeit in purified form.",
"title": ""
},
{
"docid": "neg:1840358_4",
"text": "A typical method to obtain valuable information is to extract the sentiment or opinion from a message. Machine learning technologies are widely used in sentiment classification because of their ability to “learn” from the training dataset to predict or support decision making with relatively high accuracy. However, when the dataset is large, some algorithms might not scale up well. In this paper, we aim to evaluate the scalability of Naïve Bayes classifier (NBC) in large datasets. Instead of using a standard library (e.g., Mahout), we implemented NBC to achieve fine-grain control of the analysis procedure. A Big Data analyzing system is also design for this study. The result is encouraging in that the accuracy of NBC is improved and approaches 82% when the dataset size increases. We have demonstrated that NBC is able to scale up to analyze the sentiment of millions movie reviews with increasing throughput.",
"title": ""
},
{
"docid": "neg:1840358_5",
"text": "Methods based on kernel density estimation have been successfully applied for various data mining tasks. Their natural interpretation together with suitable properties make them an attractive tool among others in clustering problems. In this paper, the Complete Gradient Clustering Algorithm has been used to investigate a real data set of grains. The wheat varieties, Kama, Rosa and Canadian, characterized by measurements of main grain geometric features obtained by X-ray technique, have been analyzed. The proposed algorithm is expected to be an effective tool for recognizing wheat varieties. A comparison between the clustering results obtained from this method and the classical k-means clustering algorithm shows positive practical features of the Complete Gradient Clustering Algorithm.",
"title": ""
},
{
"docid": "neg:1840358_6",
"text": "Advances in deep reinforcement learning have allowed autonomous agents to perform well on Atari games, often outperforming humans, using only raw pixels to make their decisions. However, most of these games take place in 2D environments that are fully observable to the agent. In this paper, we present Arnold, a completely autonomous agent to play First-Person Shooter Games using only screen pixel data and demonstrate its effectiveness on Doom, a classical firstperson shooter game. Arnold is trained with deep reinforcement learning using a recent Action-Navigation architecture, which uses separate deep neural networks for exploring the map and fighting enemies. Furthermore, it utilizes a lot of techniques such as augmenting high-level game features, reward shaping and sequential updates for efficient training and effective performance. Arnold outperforms average humans as well as in-built game bots on different variations of the deathmatch. It also obtained the highest kill-to-death ratio in both the tracks of the Visual Doom AI Competition and placed second in terms of the number of frags.",
"title": ""
},
{
"docid": "neg:1840358_7",
"text": "Social media have become dominant in everyday life during the last few years where users share their thoughts and experiences about their enjoyable events in posts. Most of these posts are related to different categories related to: activities, such as dancing, landscapes, such as beach, people, such as a selfie, and animals such as pets. While some of these posts become popular and get more attention, others are completely ignored. In order to address the desire of users to create popular posts, several researches have studied post popularity prediction. Existing works focus on predicting the popularity without considering the category type of the post. In this paper we propose category specific post popularity prediction using visual and textual content for action, scene, people and animal categories. In this way we aim to answer the question What makes a post belonging to a specific action, scene, people or animal category popular? To answer to this question we perform several experiments on a collection of 65K posts crawled from Instagram.",
"title": ""
},
{
"docid": "neg:1840358_8",
"text": "Robots exhibit life-like behavior by performing intelligent actions. To enhance human-robot interaction it is necessary to investigate and understand how end-users perceive such animate behavior. In this paper, we report an experiment to investigate how people perceived different robot embodiments in terms of animacy and intelligence. iCat and Robovie II were used as the two embodiments in this experiment. We conducted a between-subject experiment where robot type was the independent variable, and perceived animacy and intelligence of the robot were the dependent variables. Our findings suggest that a robots perceived intelligence is significantly correlated with animacy. The correlation between the intelligence and the animacy of a robot was observed to be stronger in the case of the iCat embodiment. Our results also indicate that the more animated the face of the robot, the more likely it is to attract the attention of a user. We also discuss the possible and probable explanations of the results obtained.",
"title": ""
},
{
"docid": "neg:1840358_9",
"text": "The diffusion model for 2-choice decisions (R. Ratcliff, 1978) was applied to data from lexical decision experiments in which word frequency, proportion of high- versus low-frequency words, and type of nonword were manipulated. The model gave a good account of all of the dependent variables--accuracy, correct and error response times, and their distributions--and provided a description of how the component processes involved in the lexical decision task were affected by experimental variables. All of the variables investigated affected the rate at which information was accumulated from the stimuli--called drift rate in the model. The different drift rates observed for the various classes of stimuli can all be explained by a 2-dimensional signal-detection representation of stimulus information. The authors discuss how this representation and the diffusion model's decision process might be integrated with current models of lexical access.",
"title": ""
},
{
"docid": "neg:1840358_10",
"text": "The area under the ROC (Receiver Operating Characteristic) curve, or simply AUC, has been widely used to measure model performance for binary classification tasks. It can be estimated under parametric, semiparametric and nonparametric assumptions. The non-parametric estimate of the AUC, which is calculated from the ranks of predicted scores of instances, does not always sufficiently take advantage of the predicted scores. This problem is tackled in this paper. On the basis of the ranks and the original values of the predicted scores, we introduce a new metric, called a scored AUC or sAUC. Experimental results on 20 UCI data sets empirically demonstrate the validity of the new metric for classifier evaluation and selection.",
"title": ""
},
{
"docid": "neg:1840358_11",
"text": "In Vehicular Ad Hoc Networks (VANETs), anonymity of the nodes sending messages should be preserved, while at the same time the law enforcement agencies should be able to trace the messages to the senders when necessary. It is also necessary that the messages sent are authenticated and delivered to the vehicles in the relevant areas quickly. In this paper, we present an efficient protocol for fast dissemination of authenticated messages in VANETs. It ensures the anonymity of the senders and also provides mechanism for law enforcement agencies to trace the messages to their senders, when necessary.",
"title": ""
},
{
"docid": "neg:1840358_12",
"text": "For large state-space Markovian Decision Problems MonteCarlo planning is one of the few viable approaches to find near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent and finite sample bounds are derived on the estimation error due to sampling. Experimental results show that in several domains, UCT is significantly more efficient than its alternatives.",
"title": ""
},
{
"docid": "neg:1840358_13",
"text": "Marketing has been criticised from all spheres today since the real worth of all the marketing efforts can hardly be precisely determined. Today consumers are better informed and also misinformed at times due to the bombardment of various pieces of information through a new type of interactive media, i.e., social media (SM). In SM, communication is through dialogue channels wherein consumers pay more attention to SM buzz rather than promotions of marketers. The various forms of SM create a complex set of online social networks (OSN), through which word-of-mouth (WOM) propagates and influence consumer decisions. With the growth of OSN and user generated contents (UGC), WOM metamorphoses to electronic word-of-mouth (eWOM), which spreads in astronomical proportions. Previous works study the effect of external and internal influences in affecting consumer behaviour. However, today the need is to resort to multidisciplinary approaches to find out how SM influence consumers with eWOM and online reviews. This paper reviews the emerging trend of how multiple disciplines viz. Statistics, Data Mining techniques, Network Analysis, etc. are being integrated by marketers today to analyse eWOM and derive actionable intelligence.",
"title": ""
},
{
"docid": "neg:1840358_14",
"text": "In this paper a morphological tagging approach for document image invoice analysis is described. Tokens close by their morphology and confirmed in their location within different similar contexts make apparent some parts of speech representative of the structure elements. This bottom up approach avoids the use of an priori knowledge provided that there are redundant and frequent contexts in the text. The approach is applied on the invoice body text roughly recognized by OCR and automatically segmented. The method makes possible the detection of the invoice articles and their different fields. The regularity of the article composition and its redundancy in the invoice is a good help for its structure. The recognition rate of 276 invoices and 1704 articles, is over than 91.02% for articles and 92.56% for fields.",
"title": ""
},
{
"docid": "neg:1840358_15",
"text": "Human activity recognition using mobile device sensors is an active area of research in pervasive computing. In our work, we aim at implementing activity recognition approaches that are suitable for real life situations. This paper focuses on the problem of recognizing the on-body position of the mobile device which in a real world setting is not known a priori. We present a new real world data set that has been collected from 15 participants for 8 common activities were they carried 7 wearable devices in different positions. Further, we introduce a device localization method that uses random forest classifiers to predict the device position based on acceleration data. We perform the most complete experiment in on-body device location that includes all relevant device positions for the recognition of a variety of different activities. We show that the method outperforms other approaches achieving an F-Measure of 89% across different positions. We also show that the detection of the device position consistently improves the result of activity recognition for common activities.",
"title": ""
},
{
"docid": "neg:1840358_16",
"text": "We propose a fast, parallel, maximum clique algorithm for large, sparse graphs that is designed to exploit characteristics of social and information networks. We observe roughly linear runtime scaling over graphs between 1000 vertices and 100M vertices. In a test with a 1.8 billion-edge social network, the algorithm finds the largest clique in about 20 minutes. For social networks, in particular, we found that using the core number of a vertex in combination with a good heuristic clique finder efficiently removes the vast majority of the search space. In addition, we parallelize the exploration of the search tree. In the algorithm, processes immediately communicate changes to upper and lower bounds on the size of maximum clique, which occasionally results in a super-linear speedup because vertices with especially large search spaces can be pruned by other processes. We use this clique finder to investigate the size of the largest temporal strong components in dynamic networks, which requires finding the largest clique in a particular temporal reachability graph.",
"title": ""
},
{
"docid": "neg:1840358_17",
"text": "Conversation systems are of growing importance since they enable an easy interaction interface between humans and computers: using natural languages. To build a conversation system with adequate intelligence is challenging, and requires abundant resources including an acquisition of big data and interdisciplinary techniques, such as information retrieval and natural language processing. Along with the prosperity of Web 2.0, the massive data available greatly facilitate data-driven methods such as deep learning for human-computer conversation systems. Owing to the diversity of Web resources, a retrieval-based conversation system will come up with at least some results from the immense repository for any user inputs. Given a human issued message, i.e., query, a traditional conversation system would provide a response after adequate training and learning of how to respond. In this paper, we propose a new task for conversation systems: joint learning of response ranking featured with next utterance suggestion. We assume that the new conversation mode is more proactive and keeps user engaging. We examine the assumption in experiments. Besides, to address the joint learning task, we propose a novel Dual-LSTM Chain Model to couple response ranking and next utterance suggestion simultaneously. From the experimental results, we demonstrate the usefulness of the proposed task and the effectiveness of the proposed model.",
"title": ""
},
{
"docid": "neg:1840358_18",
"text": "This paper presents a project that allows the Baxter humanoid robot to play chess against human players autonomously. The complete solution uses three main subsystems: computer vision based on a single camera embedded in Baxter's arm to perceive the game state, an open-source chess engine to compute the next move, and a mechatronics subsystem with a 7-DOF arm to manipulate the pieces. Baxter can play chess successfully in unconstrained environments by dynamically responding to changes in the environment. This implementation demonstrates Baxter's capabilities of vision-based adaptive control and small-scale manipulation, which can be applicable to numerous applications, while also contributing to the computer vision chess analysis literature.",
"title": ""
},
{
"docid": "neg:1840358_19",
"text": "As the world becomes more connected to the cyber world, attackers and hackers are becoming increasingly sophisticated to penetrate computer systems and networks. Intrusion Detection System (IDS) plays a vital role in defending a network against intrusion. Many commercial IDSs are available in marketplace but with high cost. At the same time open source IDSs are also available with continuous support and upgradation from large user community. Each of these IDSs adopts a different approaches thus may target different applications. This paper provides a quick review of six Open Source IDS tools so that one can choose the appropriate Open Source IDS tool as per their organization requirements.",
"title": ""
}
] |
1840359 | Bidirectional Single-Stage Grid-Connected Inverter for a Battery Energy Storage System | [
{
"docid": "pos:1840359_0",
"text": "This paper presents a novel zero-voltage switching (ZVS) approach to a grid-connected single-stage flyback inverter. The soft-switching of the primary switch is achieved by allowing negative current from the grid side through bidirectional switches placed on the secondary side of the transformer. Basically, the negative current discharges the metal-oxide-semiconductor field-effect transistor's output capacitor, thereby allowing turn on of the primary switch under zero voltage. To optimize the amount of reactive current required to achieve ZVS, a variable-frequency control scheme is implemented over the line cycle. In addition, the bidirectional switches on the secondary side of the transformer have ZVS during the turn- on times. Therefore, the switching losses of the bidirectional switches are negligible. A 250-W prototype has been implemented to validate the proposed scheme. Experimental results confirm the feasibility and superior performance of the converter compared with the conventional flyback inverter.",
"title": ""
},
{
"docid": "pos:1840359_1",
"text": "This paper presents an energy sharing state-of-charge (SOC) balancing control scheme based on a distributed battery energy storage system architecture where the cell balancing system and the dc bus voltage regulation system are combined into a single system. The battery cells are decoupled from one another by connecting each cell with a small lower power dc-dc power converter. The small power converters are utilized to achieve both SOC balancing between the battery cells and dc bus voltage regulation at the same time. The battery cells' SOC imbalance issue is addressed from the root by using the energy sharing concept to automatically adjust the discharge/charge rate of each cell while maintaining a regulated dc bus voltage. Consequently, there is no need to transfer the excess energy between the cells for SOC balancing. The theoretical basis and experimental prototype results are provided to illustrate and validate the proposed energy sharing controller.",
"title": ""
}
] | [
{
"docid": "neg:1840359_0",
"text": "A rank-r matrix X ∈ Rm×n can be written as a product UV >, where U ∈ Rm×r and V ∈ Rn×r. One could exploit this observation in optimization: e.g., consider the minimization of a convex function f(X) over rank-r matrices, where the scaffold of rank-r matrices is modeled via the factorization in U and V variables. Such heuristic has been widely used before for specific problem instances, where the solution sought is (approximately) low-rank. Though such parameterization reduces the number of variables and is more efficient in computational speed and memory requirement (of particular interest is the case r min{m,n}), it comes at a cost: f(UV >) becomes a non-convex function w.r.t. U and V . In this paper, we study such parameterization in optimization of generic convex f and focus on first-order, gradient descent algorithmic solutions. We propose an algorithm we call the Bi-Factored Gradient Descent (BFGD) algorithm, an efficient first-order method that operates on the U, V factors. We show that when f is smooth, BFGD has local sublinear convergence, and linear convergence when f is both smooth and strongly convex. Moreover, for several key applications, we provide simple and efficient initialization schemes that provide approximate solutions good enough for the above convergence results to hold.",
"title": ""
},
{
"docid": "neg:1840359_1",
"text": "Targeted cyberattacks play an increasingly significant role in disrupting the online social and economic model, not to mention the threat they pose to nation-states. A variety of components and techniques come together to bring about such attacks.",
"title": ""
},
{
"docid": "neg:1840359_2",
"text": "This paper describes a methodology for semi-supervised learning of dialogue acts using the similarity between sentences. We suppose that the dialogue sentences with the same dialogue act are more similar in terms of semantic and syntactic information. However, previous work on sentence similarity mainly modeled a sentence as bag-of-words and then compared different groups of words using corpus-based or knowledge-based measurements of word semantic similarity. Novelly, we present a vector-space sentence representation, composed of word embeddings, that is, the related word distributed representations, and these word embeddings are organised in a sentence syntactic structure. Given the vectors of the dialogue sentences, a distance measurement can be well-defined to compute the similarity between them. Finally, a seeded k-means clustering algorithm is implemented to classify the dialogue sentences into several categories corresponding to particular dialogue acts. This constitutes the semi-supervised nature of the approach, which aims to ameliorate the reliance of the availability of annotated corpora. Experiments with Switchboard Dialog Act corpus show that classification accuracy is improved by 14%, compared to the state-of-art methods based on Support Vector Machine.",
"title": ""
},
{
"docid": "neg:1840359_3",
"text": "We describe an approach for testing a software system for possible securi ty flaws. Traditionally, security testing is done using penetration analysis and formal methods. Based on the observation that most security flaws are triggered due to a flawed interaction with the envi ronment, we view the security testing problem as the problem of testing for the fault-tolerance prop erties of a software system. We consider each environment perturbation as a fault and the resulting security ompromise a failure in the toleration of such faults. Our approach is based on the well known techn ique of fault-injection. Environment faults are injected into the system under test and system beha vior observed. The failure to tolerate faults is an indicator of a potential security flaw in the syst em. An Environment-Application Interaction (EAI) fault model is proposed. EAI allows us to decide what f aults to inject. Based on EAI, we present a security-flaw classification scheme. This scheme was used to classif y 142 security flaws in a vulnerability database. This classification revealed that 91% of the security flaws in the database are covered by the EAI model.",
"title": ""
},
{
"docid": "neg:1840359_4",
"text": "This paper describes progress toward a prototype implementation of a tool which aims to improve literacy in deaf high school and college students who are native (or near native) signers of American Sign Language (ASL). We envision a system that will take a piece of text written by a deaf student, analyze that text for grammatical errors, and engage that student in a tutorial dialogue, enabling the student to generate appropriate corrections to the text. A strong focus of this work is to develop a system which adapts this process to the knowledge level and learning strengths of the user and which has the flexibility to engage in multi-modal, multilingual tutorial instruction utilizing both English and the native language of the user.",
"title": ""
},
{
"docid": "neg:1840359_5",
"text": "designed shapes incorporating typedesign tradition, the rules related to visual appearance, and the design ideas of a skilled character designer. The typographic design process is structured and systematic: letterforms are visually related in weight, contrast, space, alignment, and style. To create a new typeface family, type designers generally start by designing a few key characters—such as o, h, p, and v— incorporating the most important structure elements such as vertical stems, round parts, diagonal bars, arches, and serifs (see Figure 1). They can then use the design features embedded into these structure elements (stem width, behavior of curved parts, contrast between thick and thin shape parts, and so on) to design the font’s remaining characters. Today’s industrial font description standards such as Adobe Type 1 or TrueType represent typographic characters by their shape outlines, because of the simplicity of digitizing the contours of well-designed, large-size master characters. However, outline characters only implicitly incorporate the designer’s intentions. Because their structure elements aren’t explicit, creating aesthetically appealing derived designs requiring coherent changes in character width, weight (boldness), and contrast is difficult. Outline characters aren’t suitable for optical scaling, which requires relatively fatter letter shapes at small sizes. Existing approaches for creating derived designs from outline fonts require either specifying constraints to maintain the coherence of structure elements across different characters or creating multiple master designs for the interpolation of derived designs. We present a new approach for describing and synthesizing typographic character shapes. Instead of describing characters by their outlines, we conceive each character as an assembly of structure elements (stems, bars, serifs, round parts, and arches) implemented by one or several shape components. We define the shape components by typeface-category-dependent global parameters such as the serif and junction types, by global font-dependent metrics such as the location of reference lines and the width of stems and curved parts, and by group and local parameters. (See the sidebar “Previous Work” for background information on the field of parameterizable fonts.)",
"title": ""
},
{
"docid": "neg:1840359_6",
"text": "Securing collaborative filtering systems from malicious attack has become an important issue with increasing popularity of recommender Systems. Since recommender systems are entirely based on the input provided by the users or customers, they tend to become highly vulnerable to outside attacks. Prior research has shown that attacks can significantly affect the robustness of the systems. To prevent such attacks, researchers proposed several unsupervised detection mechanisms. While these approaches produce satisfactory results in detecting some well studied attacks, they are not suitable for all types of attacks studied recently. In this paper, we show that the unsupervised clustering can be used effectively for attack detection by computing detection attributes modeled on basic descriptive statistics. We performed extensive experiments and discussed different approaches regarding their performances. Our experimental results showed that attribute-based unsupervised clustering algorithm can detect spam users with a high degree of accuracy and fewer misclassified genuine users regardless of attack strategies.",
"title": ""
},
{
"docid": "neg:1840359_7",
"text": "Improving science, technology, engineering, and mathematics (STEM) education, especially for traditionally disadvantaged groups, is widely recognized as pivotal to the U.S.'s long-term economic growth and security. In this article, we review and discuss current research on STEM education in the U.S., drawing on recent research in sociology and related fields. The reviewed literature shows that different social factors affect the two major components of STEM education attainment: (1) attainment of education in general, and (2) attainment of STEM education relative to non-STEM education conditional on educational attainment. Cognitive and social psychological characteristics matter for both major components, as do structural influences at the neighborhood, school, and broader cultural levels. However, while commonly used measures of socioeconomic status (SES) predict the attainment of general education, social psychological factors are more important influences on participation and achievement in STEM versus non-STEM education. Domestically, disparities by family SES, race, and gender persist in STEM education. Internationally, American students lag behind those in some countries with less economic resources. Explanations for group disparities within the U.S. and the mediocre international ranking of US student performance require more research, a task that is best accomplished through interdisciplinary approaches.",
"title": ""
},
{
"docid": "neg:1840359_8",
"text": "We consider the isolated spelling error correction problem as a specific subproblem of the more general string-to-string translation problem. In this context, we investigate four general string-to-string transformation models that have been suggested in recent years and apply them within the spelling error correction paradigm. In particular, we investigate how a simple ‘k-best decoding plus dictionary lookup’ strategy performs in this context and find that such an approach can significantly outdo baselines such as edit distance, weighted edit distance, and the noisy channel Brill and Moore model to spelling error correction. We also consider elementary combination techniques for our models such as language model weighted majority voting and center string combination. Finally, we consider real-world OCR post-correction for a dataset sampled from medieval Latin texts.",
"title": ""
},
{
"docid": "neg:1840359_9",
"text": "A novel compact dual-polarized unidirectional wideband antenna based on two crossed magneto-electric dipoles is proposed. The proposed miniaturization method consist in transforming the electrical filled square dipoles into vertical folded square loops. The surface of the radiating element is reduced to 0.23λ0∗0.23λ0, where λ0 is the wavelength at the lowest operation frequency for a standing wave ratio (SWR) <2.5, which corresponds to a reduction factor of 48%. The antenna has been prototyped using 3D printing technology. The measured input impedance bandwidth is 51.2% from 1.7 GHz to 2.9 GHz with a Standing wave ratio (SWR) <2.",
"title": ""
},
{
"docid": "neg:1840359_10",
"text": "There is a large tradition of work in moral psychology that explores the capacity for moral judgment by focusing on the basic capacity to distinguish moral violations (e.g. hitting another person) from conventional violations (e.g. playing with your food). However, only recently have there been attempts to characterize the cognitive mechanisms underlying moral judgment (e.g. Cognition 57 (1995) 1; Ethics 103 (1993) 337). Recent evidence indicates that affect plays a crucial role in mediating the capacity to draw the moral/conventional distinction. However, the prevailing account of the role of affect in moral judgment is problematic. This paper argues that the capacity to draw the moral/conventional distinction depends on both a body of information about which actions are prohibited (a Normative Theory) and an affective mechanism. This account leads to the prediction that other normative prohibitions that are connected to an affective mechanism might be treated as non-conventional. An experiment is presented that indicates that \"disgust\" violations (e.g. spitting at the table), are distinguished from conventional violations along the same dimensions as moral violations.",
"title": ""
},
{
"docid": "neg:1840359_11",
"text": "BACKGROUND\nIntimate partner violence (IPV) is a major public health problem with serious consequences for women's physical, mental, sexual and reproductive health. Reproductive health outcomes such as unwanted and terminated pregnancies, fetal loss or child loss during infancy, non-use of family planning methods, and high fertility are increasingly recognized. However, little is known about the role of community influences on women's experience of IPV and its effect on terminated pregnancy, given the increased awareness of IPV being a product of social context. This study sought to examine the role of community-level norms and characteristics in the association between IPV and terminated pregnancy in Nigeria.\n\n\nMETHODS\nMultilevel logistic regression analyses were performed on nationally-representative cross-sectional data including 19,226 women aged 15-49 years in Nigeria. Data were collected by a stratified two-stage sampling technique, with 888 primary sampling units (PSUs) selected in the first sampling stage, and 7,864 households selected through probability sampling in the second sampling stage.\n\n\nRESULTS\nWomen who had experienced physical IPV, sexual IPV, and any IPV were more likely to have terminated a pregnancy compared to women who had not experienced these IPV types.IPV types were significantly associated with factors reflecting relationship control, relationship inequalities, and socio-demographic characteristics. Characteristics of the women aggregated at the community level (mean education, justifying wife beating, mean age at first marriage, and contraceptive use) were significantly associated with IPV types and terminated pregnancy.\n\n\nCONCLUSION\nFindings indicate the role of community influence in the association between IPV-exposure and terminated pregnancy, and stress the need for screening women seeking abortions for a history of abuse.",
"title": ""
},
{
"docid": "neg:1840359_12",
"text": "Crowding, the inability to recognize objects in clutter, sets a fundamental limit on conscious visual perception and object recognition throughout most of the visual field. Despite how widespread and essential it is to object recognition, reading and visually guided action, a solid operational definition of what crowding is has only recently become clear. The goal of this review is to provide a broad-based synthesis of the most recent findings in this area, to define what crowding is and is not, and to set the stage for future work that will extend our understanding of crowding well beyond low-level vision. Here we define six diagnostic criteria for what counts as crowding, and further describe factors that both escape and break crowding. All of these lead to the conclusion that crowding occurs at multiple stages in the visual hierarchy.",
"title": ""
},
{
"docid": "neg:1840359_13",
"text": "B-trees are used by many file systems to represent files and directories. They provide guaranteed logarithmic time key-search, insert, and remove. File systems like WAFL and ZFS use shadowing, or copy-on-write, to implement snapshots, crash recovery, write-batching, and RAID. Serious difficulties arise when trying to use b-trees and shadowing in a single system.\n This article is about a set of b-tree algorithms that respects shadowing, achieves good concurrency, and implements cloning (writeable snapshots). Our cloning algorithm is efficient and allows the creation of a large number of clones.\n We believe that using our b-trees would allow shadowing file systems to better scale their on-disk data structures.",
"title": ""
},
{
"docid": "neg:1840359_14",
"text": "In many applications of wireless sensor networks (WSNs), node location is required to locate the monitored event once occurs. Mobility-assisted localization has emerged as an efficient technique for node localization. It works on optimizing a path planning of a location-aware mobile node, called mobile anchor (MA). The task of the MA is to traverse the area of interest (network) in a way that minimizes the localization error while maximizing the number of successful localized nodes. For simplicity, many path planning models assume that the MA has a sufficient source of energy and time, and the network area is obstacle-free. However, in many real-life applications such assumptions are rare. When the network area includes many obstacles, which need to be avoided, and the MA itself has a limited movement distance that cannot be exceeded, a dynamic movement approach is needed. In this paper, we propose two novel dynamic movement techniques that offer obstacle-avoidance path planning for mobility-assisted localization in WSNs. The movement planning is designed in a real-time using two swarm intelligence based algorithms, namely grey wolf optimizer and whale optimization algorithm. Both of our proposed models, grey wolf optimizer-based path planning and whale optimization algorithm-based path planning, provide superior outcomes in comparison to other existing works in several metrics including both localization ratio and localization error rate.",
"title": ""
},
{
"docid": "neg:1840359_15",
"text": "Ghrelin is an endogenous ligand for the growth hormone secretagogue receptor and a well-characterized food intake regulatory peptide. Hypothalamic ghrelin-, neuropeptide Y (NPY)-, and orexin-containing neurons form a feeding regulatory circuit. Orexins and NPY are also implicated in sleep-wake regulation. Sleep responses and motor activity after central administration of 0.2, 1, or 5 microg ghrelin in free-feeding rats as well as in feeding-restricted rats (1 microg dose) were determined. Food and water intake and behavioral responses after the light onset injection of saline or 1 microg ghrelin were also recorded. Light onset injection of ghrelin suppressed non-rapid-eye-movement sleep (NREMS) and rapid-eye-movement sleep (REMS) for 2 h. In the first hour, ghrelin induced increases in behavioral activity including feeding, exploring, and grooming and stimulated food and water intake. Ghrelin administration at dark onset also elicited NREMS and REMS suppression in hours 1 and 2, but the effect was not as marked as that, which occurred in the light period. In hours 3-12, a secondary NREMS increase was observed after some doses of ghrelin. In the feeding-restricted rats, ghrelin suppressed NREMS in hours 1 and 2 and REMS in hours 3-12. Data are consistent with the notion that ghrelin has a role in the integration of feeding, metabolism, and sleep regulation.",
"title": ""
},
{
"docid": "neg:1840359_16",
"text": "This paper reports results from a study on the adoption of an information visualization system by administrative data analysts. Despite the fact that the system was neither fully integrated with their current software tools nor with their existing data analysis practices, analysts identified a number of key benefits that visualization systems provide to their work. These benefits for the most part occurred when analysts went beyond their habitual and well-mastered data analysis routines and engaged in creative discovery processes. We analyze the conditions under which these benefits arose, to inform the design of visualization systems that can better assist the work of administrative data analysts.",
"title": ""
},
{
"docid": "neg:1840359_17",
"text": "The purpose of this study was to assess the perceived discomfort of patrol officers related to equipment and vehicle design and whether there were discomfort differences between day and night shifts. A total of 16 participants were recruited (10 males, 6 females) from a local police force to participate for one full day shift and one full night shift. A series of questionnaires were administered to acquire information regarding comfort with specific car features and occupational gear, body part discomfort and health and lifestyle. The discomfort questionnaires were administered three times during each shift to monitor discomfort progression within a shift. Although there were no significant discomfort differences reported between the day and night shifts, perceived discomfort was identified for specific equipment, vehicle design and vehicle configuration, within each 12-h shift.",
"title": ""
},
{
"docid": "neg:1840359_18",
"text": "We present NeuroLinear, a system for extracting oblique decision rules from neural networks that have been trained for classiication of patterns. Each condition of an oblique decision rule corresponds to a partition of the attribute space by a hyperplane that is not necessarily axis-parallel. Allowing a set of such hyperplanes to form the boundaries of the decision regions leads to a signiicant reduction in the number of rules generated while maintaining the accuracy rates of the networks. We describe the components of NeuroLinear in detail by way of two examples using artiicial datasets. Our experimental results on real-world datasets show that the system is eeective in extracting compact and comprehensible rules with high predictive accuracy from neural networks.",
"title": ""
},
{
"docid": "neg:1840359_19",
"text": "Parkinson's disease (PD) is a neurodegenerative disorder with symptoms that progressively worsen with age. Pathologically, PD is characterized by the aggregation of α-synuclein in cells of the substantia nigra in the brain and loss of dopaminergic neurons. This pathology is associated with impaired movement and reduced cognitive function. The etiology of PD can be attributed to a combination of environmental and genetic factors. A popular animal model, the nematode roundworm Caenorhabditis elegans, has been frequently used to study the role of genetic and environmental factors in the molecular pathology and behavioral phenotypes associated with PD. The current review summarizes cellular markers and behavioral phenotypes in transgenic and toxin-induced PD models of C. elegans.",
"title": ""
}
] |
1840360 | What Is the Evidence to Support the Use of Therapeutic Gardens for the Elderly? | [
{
"docid": "pos:1840360_0",
"text": "OBJECTIVE\nThe purpose of this study was to investigate the effect of antidepressant treatment on hippocampal volumes in patients with major depression.\n\n\nMETHOD\nFor 38 female outpatients, the total time each had been in a depressive episode was divided into days during which the patient was receiving antidepressant medication and days during which no antidepressant treatment was received. Hippocampal gray matter volumes were determined by high resolution magnetic resonance imaging and unbiased stereological measurement.\n\n\nRESULTS\nLonger durations during which depressive episodes went untreated with antidepressant medication were associated with reductions in hippocampal volume. There was no significant relationship between hippocampal volume loss and time depressed while taking antidepressant medication or with lifetime exposure to antidepressants.\n\n\nCONCLUSIONS\nAntidepressants may have a neuroprotective effect during depression.",
"title": ""
}
] | [
{
"docid": "neg:1840360_0",
"text": "In a grid connected photovoltaic system, the main aim is to design an efficient solar inverter with higher efficiency and which also controls the power that the inverter injects into the grid. The effectiveness of the general PV system anticipate on the productivity by which the direct current of the solar module is changed over into alternating current. The fundamental requirement to interface the solar module to the grid with increased productivity includes: Low THD of current injected to the grid, maximum power point, and high power factor. In this paper, a two stage topology without galvanic isolation is been carried out for a single phase grid connected photovoltaic inverter. The output from the PV panel is given to the DC/DC boost converter, maximum power point tracking (MPPT) control technique is being used to control the gate pulse of the IGBT of boost converter. The boosted output is fed to the highly efficient and reliable inverter concept (HERIC) inverter in order to convert DC into AC with higher efficiency.",
"title": ""
},
{
"docid": "neg:1840360_1",
"text": "Computing a curve to approximate data points is a problem encountered frequently in many applications in computer graphics, computer vision, CAD/CAM, and image processing. We present a novel and efficient method, called squared distance minimization (SDM), for computing a planar B-spline curve, closed or open, to approximate a target shape defined by a point cloud, that is, a set of unorganized, possibly noisy data points. We show that SDM significantly outperforms other optimization methods used currently in common practice of curve fitting. In SDM, a B-spline curve starts from some properly specified initial shape and converges towards the target shape through iterative quadratic minimization of the fitting error. Our contribution is the introduction of a new fitting error term, called the squared distance (SD) error term, defined by a curvature-based quadratic approximant of squared distances from data points to a fitting curve. The SD error term faithfully measures the geometric distance between a fitting curve and a target shape, thus leading to faster and more stable convergence than the point distance (PD) error term, which is commonly used in computer graphics and CAGD, and the tangent distance (TD) error term, which is often adopted in the computer vision community. To provide a theoretical explanation of the superior performance of SDM, we formulate the B-spline curve fitting problem as a nonlinear least squares problem and conclude that SDM is a quasi-Newton method which employs a curvature-based positive definite approximant to the true Hessian of the objective function. Furthermore, we show that the method based on the TD error term is a Gauss-Newton iteration, which is unstable for target shapes with high curvature variations, whereas optimization based on the PD error term is the alternating method that is known to have linear convergence.",
"title": ""
},
{
"docid": "neg:1840360_2",
"text": "1 Multisensor Data Fusion for Next Generation Distributed Intrusion Detection Systems Tim Bass ERIM International & Silk Road Ann Arbor, MI 48113 Abstract| Next generation cyberspace intrusion detection systems will fuse data from heterogeneous distributed network sensors to create cyberspace situational awareness. This paper provides a few rst steps toward developing the engineering requirements using the art and science of multisensor data fusion as the underlying model. Current generation internet-based intrusion detection systems and basic multisensor data fusion constructs are summarized. The TCP/IP model is used to develop framework sensor and database models. The SNMP ASN.1 MIB construct is recommended for the representation of context-dependent threat & vulnerabilities databases.",
"title": ""
},
{
"docid": "neg:1840360_3",
"text": "Online safety is everyone's responsibility---a concept much easier to preach than to practice.",
"title": ""
},
{
"docid": "neg:1840360_4",
"text": "We present a system to detect passenger cars in aerial images along the road directions where cars appear as small objects. We pose this as a 3D object recognition problem to account for the variation in viewpoint and the shadow. We started from psychological tests to find important features for human detection of cars. Based on these observations, we selected the boundary of the car body, the boundary of the front windshield, and the shadow as the features. Some of these features are affected by the intensity of the car and whether or not there is a shadow along it. This information is represented in the structure of the Bayesian network that we use to integrate all features. Experiments show very promising results even on some very challenging images.",
"title": ""
},
{
"docid": "neg:1840360_5",
"text": "We explore story generation: creative systems that can build coherent and fluent passages of text about a topic. We collect a large dataset of 300K human-written stories paired with writing prompts from an online forum. Our dataset enables hierarchical story generation, where the model first generates a premise, and then transforms it into a passage of text. We gain further improvements with a novel form of model fusion that improves the relevance of the story to the prompt, and adding a new gated multi-scale self-attention mechanism to model long-range context. Experiments show large improvements over strong baselines on both automated and human evaluations. Human judges prefer stories generated by our approach to those from a strong non-hierarchical model by a factor of two to one.",
"title": ""
},
{
"docid": "neg:1840360_6",
"text": "This paper deals with the problem of gender classification using fingerprint images. Our attempt to gender identification follows the use of machine learning to determine the differences between fingerprint images. Each image in the database was represented by a feature vector consisting of ridge thickness to valley thickness ratio (RTVTR) and the ridge density values. By using a support vector machine trained on a set of 150 male and 125 female images, we obtain a robust classifying function for male and female feature vector patterns.",
"title": ""
},
{
"docid": "neg:1840360_7",
"text": "For a dynamic network based large vocabulary continuous speech recognizer, this paper proposes a fast language model (LM) look-ahead method using extended N -gram model. The extended N -gram model unifies the representations and score computations of the LM and the LM look-ahead tree, and thus greatly simplifies the decoder implementation and improves the LM look-ahead speed significantly, which makes higher-order LM look-ahead possible. The extended N -gram model is generated off-line before decoding starts. The generation procedure makes use of sparseness of backing-off N -gram models for efficient look-ahead score computation, and uses word-end node pushing and score quantitation to compact the model′s storage space. Experiments showed that with the same character error rate, the proposed method speeded up the overall recognition speed by a factor of 5∼ 9 than the traditional dynamic programming method which computes LM look-ahead scores on-line during the decoding process, and that using higher-order LM look-ahead algorithm can achieve a faster decoding speed and better accuracy than using the lower-order look-ahead ones.",
"title": ""
},
{
"docid": "neg:1840360_8",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "neg:1840360_9",
"text": "A number of issues can affect sample size in qualitative research; however, the guiding principle should be the concept of saturation. This has been explored in detail by a number of authors but is still hotly debated, and some say little understood. A sample of PhD studies using qualitative approaches, and qualitative interviews as the method of data collection was taken from theses.com and contents analysed for their sample sizes. Five hundred and sixty studies were identified that fitted the inclusion criteria. Results showed that the mean sample size was 31; however, the distribution was non-random, with a statistically significant proportion of studies, presenting sample sizes that were multiples of ten. These results are discussed in relation to saturation. They suggest a pre-meditated approach that is not wholly congruent with the principles of qualitative research.",
"title": ""
},
{
"docid": "neg:1840360_10",
"text": "This article presents a new method to illustrate the feasibility of 3D topology creation. We base the 3D construction process on testing real cases of implementation of 3D parcels construction in a 3D cadastral system. With the utilization and development of dense urban space, true 3D geometric volume primitives are needed to represent 3D parcels with the adjacency and incidence relationship. We present an effective straightforward approach to identifying and constructing the valid volumetric cadastral object from the given faces, and build the topological relationships among 3D cadastral objects on-thefly, based on input consisting of loose boundary 3D faces made by surveyors. This is drastically different from most existing methods, which focus on the validation of single volumetric objects after the assumption of the object’s creation. Existing methods do not support the needed types of geometry/ topology (e.g. non 2-manifold, singularities) and how to create and maintain valid 3D parcels is still a challenge in practice. We will show that the method does not change the faces themselves and faces in a given input are independently specified. Various volumetric objects, including non-manifold 3D cadastral objects (legal spaces), can be constructed correctly by this method, as will be shown from the",
"title": ""
},
{
"docid": "neg:1840360_11",
"text": "This paper presents a new sensing system for home-based rehabilitation based on optical linear encoder (OLE), in which the motion of an optical encoder on a code strip is converted to the limb joints' goniometric data. A body sensing module was designed, integrating the OLE and an accelerometer. A sensor network of three sensing modules was established via controller area network bus to capture human arm motion. Experiments were carried out to compare the performance of the OLE module with that of commercial motion capture systems such as electrogoniometers and fiber-optic sensors. The results show that the inexpensive and simple-design OLE's performance is comparable to that of expensive systems. Moreover, a statistical study was conducted to confirm the repeatability and reliability of the sensing system. The OLE-based system has strong potential as an inexpensive tool for motion capture and arm-function evaluation for short-term as well as long-term home-based monitoring.",
"title": ""
},
{
"docid": "neg:1840360_12",
"text": "Directly adding the knowledge triples obtained from open information extraction systems into a knowledge base is often impractical due to a vocabulary gap between natural language (NL) expressions and knowledge base (KB) representation. This paper aims at learning to map relational phrases in triples from natural-language-like statement to knowledge base predicate format. We train a word representation model on a vector space and link each NL relational pattern to the semantically equivalent KB predicate. Our mapping result shows not only high quality, but also promising coverage on relational phrases compared to previous research.",
"title": ""
},
{
"docid": "neg:1840360_13",
"text": "IT Leader Sample SIC Code Control Sample SIC Code Consol Energy Inc 1220 Walter Energy Inc 1220 Halliburton Co 1389 Schlumberger Ltd 1389 Standard Pacific Corp 1531 M/I Homes Inc 1531 Beazer Homes USA Inc 1531 Hovnanian Entrprs Inc -Cl A 1531 Toll Brothers Inc 1531 MDC Holdings Inc 1531 D R Horton Inc 1531 Ryland Group Inc 1531 Lennar Corp 1531 KB Home 1531 Granite Construction Inc 1600 Empresas Ica Soc Ctl ADR 1600 Fluor Corp 1600 Alstom ADR 1600 Gold Kist Inc 2015 Sadia Sa ADR 2015 Kraft Foods Inc 2000 ConAgra Foods Inc 2000 Smithfield Foods Inc 2011 Hormel Foods Corp 2011 Campbell Soup Co 2030 Heinz (H J) Co 2030 General Mills Inc 2040 Kellogg Co 2040 Imperial Sugar Co 2060 Wrigley (Wm) Jr Co 2060 Hershey Co 2060 Tate & Lyle Plc ADR 2060 Molson Coors Brewing Co 2082 Comp Bebidas Americas ADR 2082 Constellation Brands Cl A 2084 Gruma S.A.B. de C.V. ADR B 2040 Brown-Forman Cl B 2085 Coca Cola Hellenic Bttlg ADR 2086",
"title": ""
},
{
"docid": "neg:1840360_14",
"text": "Bone tissue is continuously remodeled through the concerted actions of bone cells, which include bone resorption by osteoclasts and bone formation by osteoblasts, whereas osteocytes act as mechanosensors and orchestrators of the bone remodeling process. This process is under the control of local (e.g., growth factors and cytokines) and systemic (e.g., calcitonin and estrogens) factors that all together contribute for bone homeostasis. An imbalance between bone resorption and formation can result in bone diseases including osteoporosis. Recently, it has been recognized that, during bone remodeling, there are an intricate communication among bone cells. For instance, the coupling from bone resorption to bone formation is achieved by interaction between osteoclasts and osteoblasts. Moreover, osteocytes produce factors that influence osteoblast and osteoclast activities, whereas osteocyte apoptosis is followed by osteoclastic bone resorption. The increasing knowledge about the structure and functions of bone cells contributed to a better understanding of bone biology. It has been suggested that there is a complex communication between bone cells and other organs, indicating the dynamic nature of bone tissue. In this review, we discuss the current data about the structure and functions of bone cells and the factors that influence bone remodeling.",
"title": ""
},
{
"docid": "neg:1840360_15",
"text": "Existing methods for single document keyphrase extraction usually make use of only the information contained in the specified document. This paper proposes to use a small number of nearest neighbor documents to provide more knowledge to improve single document keyphrase extraction. A specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results demonstrate the good effectiveness and robustness of our proposed approach.",
"title": ""
},
{
"docid": "neg:1840360_16",
"text": "Handling imbalanced datasets is a challenging problem that if not treated correctly results in reduced classification performance. Imbalanced datasets are commonly handled using minority oversampling, whereas the SMOTE algorithm is a successful oversampling algorithm with numerous extensions. SMOTE extensions do not have a theoretical guarantee during training to work better than SMOTE and in many instances their performance is data dependent. In this paper we propose a novel extension to the SMOTE algorithm with a theoretical guarantee for improved classification performance. The proposed approach considers the classification performance of both the majority and minority classes. In the proposed approach CGMOS (Certainty Guided Minority OverSampling) new data points are added by considering certainty changes in the dataset. The paper provides a proof that the proposed algorithm is guaranteed to work better than SMOTE for training data. Further, experimental results on 30 real-world datasets show that CGMOS works better than existing algorithms when using 6 different classifiers.",
"title": ""
},
{
"docid": "neg:1840360_17",
"text": "Satellite remote sensing is a valuable tool for monitoring flooding. Microwave sensors are especially appropriate instruments, as they allow the differentiation of inundated from non-inundated areas, regardless of levels of solar illumination or frequency of cloud cover in regions experiencing substantial rainy seasons. In the current study we present the longest synthetic aperture radar-based time series of flood and inundation information derived for the Mekong Delta that has been analyzed for this region so far. We employed overall 60 Envisat ASAR Wide Swath Mode data sets at a spatial resolution of 150 meters acquired during the years 2007–2011 to facilitate a thorough understanding of the flood regime in the Mekong Delta. The Mekong Delta in southern Vietnam comprises 13 provinces and is home to 18 million inhabitants. Extreme dry seasons from late December to May and wet seasons from June to December characterize people’s rural life. In this study, we show which areas of the delta are frequently affected by floods and which regions remain dry all year round. Furthermore, we present which areas are flooded at which frequency and elucidate the patterns of flood progression over the course of the rainy season. In this context, we also examine the impact of dykes on floodwater emergence and assess the relationship between retrieved flood occurrence patterns and land use. In addition, the advantages and shortcomings of ENVISAT ASAR-WSM based flood mapping are discussed. The results contribute to a comprehensive understanding of Mekong Delta flood OPEN ACCESS Remote Sens. 2013, 5 688 dynamics in an environment where the flow regime is influenced by the Mekong River, overland water-flow, anthropogenic floodwater control, as well as the tides.",
"title": ""
},
{
"docid": "neg:1840360_18",
"text": "Most of neural language models use different kinds of embeddings for word prediction. While word embeddings can be associated to each word in the vocabulary or derived from characters as well as factored morphological decomposition, these word representations are mainly used to parametrize the input, i.e. the context of prediction. This work investigates the effect of using subword units (character and factored morphological decomposition) to build output representations for neural language modeling. We present a case study on Czech, a morphologically-rich language, experimenting with different input and output representations. When working with the full training vocabulary, despite unstable training, our experiments show that augmenting the output word representations with character-based embeddings can significantly improve the performance of the model. Moreover, reducing the size of the output look-up table, to let the character-based embeddings represent rare words, brings further improvement.",
"title": ""
},
{
"docid": "neg:1840360_19",
"text": "Proper formulation of features plays an important role in shorttext classification tasks as the amount of text available is very little. In literature, Term Frequency Inverse Document Frequency (TF-IDF) is commonly used to create feature vectors for such tasks. However, TF-IDF formulation does not utilize the class information available in supervised learning. For classification problems, if it is possible to identify terms that can strongly distinguish among classes, then more weight can be given to those terms during feature construction phase. This may result in improved classifier performance with the incorporation of extra class label related information. We propose a supervised feature construction method to classify tweets, based on the actionable information that might be present, posted during different disaster scenarios. Improved classifier performance for such classification tasks can be helpful in the rescue and relief operations. We used three benchmark datasets containing tweets posted during Nepal and Italy earthquakes in 2015 and 2016 respectively. Experimental results show that the proposed method obtains better classification performance on these benchmark datasets.",
"title": ""
}
] |
1840361 | Novel Cellular Active Array Antenna System at Base Station for Beyond 4G | [
{
"docid": "pos:1840361_0",
"text": "In this work, a new base station antenna is proposed. Two separate frequency bands with separate radiating elements are used in each band. The frequency band separation ratio is about 1.3:1. These elements are arranged with different spacing (wider spacing for the lower frequency band, and narrower spacing for the higher frequency band). Isolation between bands inherently exists in this approach. This avoids the grating lobe effect, and mitigates the beam narrowing (dispersion) seen with fixed element spacing covering the whole wide bandwidth. A new low-profile cross dipole is designed, which is integrated in the array with an EBG/AMC structure for reducing the size of low band elements and decreasing coupling at high band.",
"title": ""
},
{
"docid": "pos:1840361_1",
"text": "Currently, many operators worldwide are deploying Long Term Evolution (LTE) to provide much faster access with lower latency and higher efficiency than its predecessors 3G and 3.5G. Meanwhile, the service rollout of LTE-Advanced, which is an evolution of LTE and a “true 4G” mobile broadband, is being underway to further enhance LTE performance. However, the anticipated challenges of the next decade (2020s) are so tremendous and diverse that there is a vastly increased need for a new generation mobile communications system with even further enhanced capabilities and new functionalities, namely a fifth generation (5G) system. Envisioning the development of a 5G system by 2020, at DOCOMO we started studies on future radio access as early as 2010, just after the launch of LTE service. The aim at that time was to anticipate the future user needs and the requirements of 10 years later (2020s) in order to identify the right concept and radio access technologies for the next generation system. The identified 5G concept consists of an efficient integration of existing spectrum bands for current cellular mobile and future new spectrum bands including higher frequency bands, e.g., millimeter wave, with a set of spectrum specific and spectrum agnostic technologies. Since a few years ago, we have been conducting several proof-of-concept activities and investigations on our 5G concept and its key technologies, including the development of a 5G real-time simulator, experimental trials of a wide range of frequency bands and technologies and channel measurements for higher frequency bands. In this paper, we introduce an overview of our views on the requirements, concept and promising technologies for 5G radio access, in addition to our ongoing activities for paving the way toward the realization of 5G by 2020. key words: next generation mobile communications system, 5G, 4G, LTE, LTE-advanced",
"title": ""
}
] | [
{
"docid": "neg:1840361_0",
"text": "When building intelligent spaces, the knowledge representation for encapsulating rooms, users, groups, roles, and other information is a fundamental design question. We present a semantic network as such a representation, and demonstrate its utility as a basis for ongoing work.",
"title": ""
},
{
"docid": "neg:1840361_1",
"text": "We have previously shown that, while the intrinsic quality of the oocyte is the main factor affecting blastocyst yield during bovine embryo development in vitro, the main factor affecting the quality of the blastocyst is the postfertilization culture conditions. Therefore, any improvement in the quality of blastocysts produced in vitro is likely to derive from the modification of the postfertilization culture conditions. The objective of this study was to examine the effect of the presence or absence of serum and the concentration of BSA during the period of embryo culture in vitro on 1) cleavage rate, 2) the kinetics of embryo development, 3) blastocyst yield, and 4) blastocyst quality, as assessed by cryotolerance and gene expression patterns. The quantification of all gene transcripts was carried out by real-time quantitative reverse transcription-polymerase chain reaction. Bovine blastocysts from four sources were used: 1) in vitro culture in synthetic oviduct fluid (SOF) supplemented with 3 mg/ml BSA and 10% fetal calf serum (FCS), 2) in vitro culture in SOF + 3 mg/ml BSA in the absence of serum, 3) in vitro culture in SOF + 16 mg/ml BSA in the absence of serum, and 4) in vivo blastocysts. There was no difference in overall blastocyst yield at Day 9 between the groups. However, significantly more blastocysts were present by Day 6 in the presence of 10% serum (20.0%) compared with 3 mg/ml BSA (4.6%, P < 0.001) or 16 mg/ml BSA (11.6%, P < 0.01). By Day 7, however, this difference had disappeared. Following vitrification, there was no difference in survival between blastocysts produced in the presence of 16 mg/ml BSA or those produced in the presence of 10% FCS; the survival of both groups was significantly lower than the in vivo controls at all time points and in terms of hatching rate. In contrast, survival of blastocysts produced in SOF + 3 mg/ml BSA in the absence of serum was intermediate, with no difference remaining at 72 h when compared with in vivo embryos. Differences in relative mRNA abundance among the two groups of blastocysts analyzed were found for genes related to apoptosis (Bax), oxidative stress (MnSOD, CuZnSOD, and SOX), communication through gap junctions (Cx31 and Cx43), maternal recognition of pregnancy (IFN-tau), and differentiation and implantation (LIF and LR-beta). The presence of serum during the culture period resulted in a significant increase in the level of expression of MnSOD, SOX, Bax, LIF, and LR-beta. The level of expression of Cx31 and Cu/ZnSOD also tended to be increased, although the difference was not significant. In contrast, the level of expression of Cx43 and IFN-tau was decreased in the presence of serum. In conclusion, using a combination of measures of developmental competence (cleavage and blastocyst rates) and qualitative measures such as cryotolerance and relative mRNA abundance to give a more complete picture of the consequences of modifying medium composition on the embryo, we have shown that conditions of postfertilization culture, in particular, the presence of serum in the medium, can affect the speed of embryo development and the quality of the resulting blastocysts. The reduced cryotolerance of blastocysts generated in the presence of serum is accompanied by deviations in the relative abundance of developmentally important gene transcripts. 
Omission of serum during the postfertilization culture period can significantly improve the cryotolerance of the blastocysts to a level intermediate between serum-generated blastocysts and those derived in vivo. The challenge now is to try and bridge this gap.",
"title": ""
},
{
"docid": "neg:1840361_2",
"text": "In this paper, an improved multimodel optimal quadratic control structure for variable speed, pitch regulated wind turbines (operating at high wind speeds) is proposed in order to integrate high levels of wind power to actively provide a primary reserve for frequency control. On the basis of the nonlinear model of the studied plant, and taking into account the wind speed fluctuations, and the electrical power variation, a multimodel linear description is derived for the wind turbine, and is used for the synthesis of an optimal control law involving a state feedback, an integral action and an output reference model. This new control structure allows a rapid transition of the wind turbine generated power between different desired set values. This electrical power tracking is ensured with a high-performance behavior for all other state variables: turbine and generator rotational speeds and mechanical shaft torque; and smooth and adequate evolution of the control variables.",
"title": ""
},
{
"docid": "neg:1840361_3",
"text": "Hydrokinetic turbines can provide a source of electricity for remote areas located near a river or stream. The objective of this paper is to describe the design, simulation, build, and testing of a novel hydrokinetic turbine. The main components of the system are a permanent magnet synchronous generator (PMSG), a machined H-Darrieus rotor, an embedded controls system, and a cataraft. The design and construction of this device was conducted at the Oregon Institute of Technology in Wilsonville, Oregon.",
"title": ""
},
{
"docid": "neg:1840361_4",
"text": "A novel coupling technique for circularly polarized annular-ring patch antenna is developed and discussed. The circular polarization (CP) radiation of the annular-ring patch antenna is achieved by a simple microstrip feed line through the coupling of a fan-shaped patch on the same plane of the antenna. Proper positioning of the coupling fan-shaped patch excites two orthogonal resonant modes with 90 phase difference, and a pure circular polarization is obtained. The dielectric material is a cylindrical block of ceramic with a permittivity of 25 and that reduces the size of the antenna. The prototype has been designed and fabricated and found to have an impedance bandwidth of 2.3% and a 3 dB axial-ratio bandwidth of about 0.6% at the center frequency of 2700 MHz. The characteristics of the proposed antenna have been by simulation software HFSS and experiment. The measured and simulated results are in good agreement.",
"title": ""
},
{
"docid": "neg:1840361_5",
"text": "BACKGROUND\nMedication and adverse drug event (ADE) information extracted from electronic health record (EHR) notes can be a rich resource for drug safety surveillance. Existing observational studies have mainly relied on structured EHR data to obtain ADE information; however, ADEs are often buried in the EHR narratives and not recorded in structured data.\n\n\nOBJECTIVE\nTo unlock ADE-related information from EHR narratives, there is a need to extract relevant entities and identify relations among them. In this study, we focus on relation identification. This study aimed to evaluate natural language processing and machine learning approaches using the expert-annotated medical entities and relations in the context of drug safety surveillance, and investigate how different learning approaches perform under different configurations.\n\n\nMETHODS\nWe have manually annotated 791 EHR notes with 9 named entities (eg, medication, indication, severity, and ADEs) and 7 different types of relations (eg, medication-dosage, medication-ADE, and severity-ADE). Then, we explored 3 supervised machine learning systems for relation identification: (1) a support vector machines (SVM) system, (2) an end-to-end deep neural network system, and (3) a supervised descriptive rule induction baseline system. For the neural network system, we exploited the state-of-the-art recurrent neural network (RNN) and attention models. We report the performance by macro-averaged precision, recall, and F1-score across the relation types.\n\n\nRESULTS\nOur results show that the SVM model achieved the best average F1-score of 89.1% on test data, outperforming the long short-term memory (LSTM) model with attention (F1-score of 65.72%) as well as the rule induction baseline system (F1-score of 7.47%) by a large margin. The bidirectional LSTM model with attention achieved the best performance among different RNN models. With the inclusion of additional features in the LSTM model, its performance can be boosted to an average F1-score of 77.35%.\n\n\nCONCLUSIONS\nIt shows that classical learning models (SVM) remains advantageous over deep learning models (RNN variants) for clinical relation identification, especially for long-distance intersentential relations. However, RNNs demonstrate a great potential of significant improvement if more training data become available. Our work is an important step toward mining EHRs to improve the efficacy of drug safety surveillance. Most importantly, the annotated data used in this study will be made publicly available, which will further promote drug safety research in the community.",
"title": ""
},
{
"docid": "neg:1840361_6",
"text": "Battery thermal management system (BTMS) is essential for electric-vehicle (EV) and hybrid-vehicle (HV) battery packs to operate effectively in all climates. Lithium-ion (Li-ion) batteries offer many advantages to the EV such as high power and high specific energy. However, temperature affects their performance, safety, and productive life. This paper is about the design and evaluation of a BTMS based on the Peltier effect heat pumps. The discharge efficiency of a 60-Ah prismatic Li-ion pouch cell was measured under different rates and different ambient temperature values. The obtained results were used to design a solid-state BTMS based on Peltier thermoelectric coolers (TECs). The proposed BTMS is then modeled and evaluated at constant current discharge in the laboratory. In addition, The BTMS was installed in an EV that was driven in the US06 cycle. The thermal response and the energy consumption of the proposed BTMS were satisfactory.",
"title": ""
},
{
"docid": "neg:1840361_7",
"text": "Breast cancer is often treated with radiotherapy (RT), with two opposing tangential fields. When indicated, supraclavicular lymph nodes have to be irradiated, and a third anterior field is applied. The junction region has the potential to be over or underdosed. To overcome this problem, many techniques have been proposed. A literature review of 3 Dimensional Conformal RT (3D CRT) and older 3-field techniques was carried out. Intensity Modulated RT (IMRT) techniques are also briefly discussed. Techniques are categorized, few characteristic examples are presented and a comparison is attempted. Three-field techniques can be divided in monoisocentric and two-isocentric. Two-isocentric techniques can be further divided in full field and half field techniques. Monoisocentric techniques show certain great advantages over two-isocentric techniques. However, they are not always applicable and they require extra caution as they are characterized by high dose gradient in the junction region. IMRT has been proved to give better dosimetric results. Three-field matching is a complicated procedure, with potential of over or undredosage in the junction region. Many techniques have been proposed, each with advantages and disadvantages. Among them, monoisocentric techniques, when carefully applied, are the ideal choice, provided IMRT facility is not available. Otherwise, a two-isocentric half beam technique is recommended.",
"title": ""
},
{
"docid": "neg:1840361_8",
"text": "CLCWeb: Comparative Literature and Culture, the peer-reviewed, full-text, and open-access learned journal in the humanities and social sciences, publishes new scholarship following tenets of the discipline of comparative literature and the field of cultural studies designated as \"comparative cultural studies.\" Publications in the journal are indexed in the Annual Bibliography of English Language and Literature (Chadwyck-Healey), the Arts and Humanities Citation Index (Thomson Reuters ISI), the Humanities Index (Wilson), Humanities International Complete (EBSCO), the International Bibliography of the Modern Language Association of America, and Scopus (Elsevier). The journal is affiliated with the Purdue University Press monograph series of Books in Comparative Cultural Studies. Contact: <clcweb@purdue.edu>",
"title": ""
},
{
"docid": "neg:1840361_9",
"text": "Knowledge embedding, which projects triples in a given knowledge base to d-dimensional vectors, has attracted considerable research efforts recently. Most existing approaches treat the given knowledge base as a set of triplets, each of whose representation is then learned separately. However, as a fact, triples are connected and depend on each other. In this paper, we propose a graph aware knowledge embedding method (GAKE), which formulates knowledge base as a directed graph, and learns representations for any vertices or edges by leveraging the graph’s structural information. We introduce three types of graph context for embedding: neighbor context, path context, and edge context, each reflects properties of knowledge from different perspectives. We also design an attention mechanism to learn representative power of different vertices or edges. To validate our method, we conduct several experiments on two tasks. Experimental results suggest that our method outperforms several state-of-art knowledge embedding models.",
"title": ""
},
{
"docid": "neg:1840361_10",
"text": "This paper presents a novel ensemble classifier generation technique RotBoost, which is constructed by combining Rotation Forest and AdaBoost. The experiments conducted with 36 real-world data sets available from the UCI repository, among which a classification tree is adopted as the base learning algorithm, demonstrate that RotBoost can generate ensemble classifiers with significantly lower prediction error than either Rotation Forest or AdaBoost more often than the reverse. Meanwhile, RotBoost is found to perform much better than Bagging and MultiBoost. Through employing the bias and variance decompositions of error to gain more insight of the considered classification methods, RotBoost is seen to simultaneously reduce the bias and variance terms of a single tree and the decrement achieved by it is much greater than that done by the other ensemble methods, which leads RotBoost to perform best among the considered classification procedures. Furthermore, RotBoost has a potential advantage over AdaBoost of suiting parallel execution. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840361_11",
"text": "I n the early 1960s, the average American adult male weighed 168 pounds. Today, he weighs nearly 180 pounds. Over the same time period, the average female adult weight rose from 143 pounds to over 155 pounds (U.S. Department of Health and Human Services, 1977, 1996). In the early 1970s, 14 percent of the population was classified as medically obese. Today, obesity rates are two times higher (Centers for Disease Control, 2003). Weights have been rising in the United States throughout the twentieth century, but the rise in obesity since 1980 is fundamentally different from past changes. For most of the twentieth century, weights were below levels recommended for maximum longevity (Fogel, 1994), and the increase in weight represented an increase in health, not a decrease. Today, Americans are fatter than medical science recommends, and weights are still increasing. While many other countries have experienced significant increases in obesity, no other developed country is quite as heavy as the United States. What explains this growth in obesity? Why is obesity higher in the United States than in any other developed country? The available evidence suggests that calories expended have not changed significantly since 1980, while calories consumed have risen markedly. But these facts just push the puzzle back a step: why has there been an increase in calories consumed? We propose a theory based on the division of labor in food preparation. In the 1960s, the bulk of food preparation was done by families that cooked their own food and ate it at home. Since then, there has been a revolution in the mass preparation of food that is roughly comparable to the mass",
"title": ""
},
{
"docid": "neg:1840361_12",
"text": "The purpose of the present study was to examine the relationship between workplace friendship and social loafing effect among employees in Certified Public Accounting (CPA) firms. Previous studies showed that workplace friendship has both positive and negative effects, meaning that there is an inconsistent relationship between workplace friendship and social loafing. The present study investigated the correlation between workplace friendship and social loafing effect among employees from CPA firms in Taiwan. The study results revealed that there was a negative relationship between workplace friendship and social loafing effect among CPA employees. In other words, the better the workplace friendship, the lower the social loafing effect. An individual would not put less effort in work when there was a low social loafing effect.",
"title": ""
},
{
"docid": "neg:1840361_13",
"text": "Symmetric positive semidefinite (SPSD) matrix approximation is an important problem with applications in kernel methods. However, existing SPSD matrix approximation methods such as the Nyström method only have weak error bounds. In this paper we conduct in-depth studies of an SPSD matrix approximation model and establish strong relative-error bounds. We call it the prototype model for it has more efficient and effective extensions, and some of its extensions have high scalability. Though the prototype model itself is not suitable for large-scale data, it is still useful to study its properties, on which the analysis of its extensions relies. This paper offers novel theoretical analysis, efficient algorithms, and a highly accurate extension. First, we establish a lower error bound for the prototype model, and we improve the error bound of an existing column selection algorithm to match the lower bound. In this way, we obtain the first optimal column selection algorithm for the prototype model. We also prove that the prototype model is exact under certain conditions. Second, we develop a simple column selection algorithm with a provable error bound. Third, we propose a socalled spectral shifting model to make the approximation more accurate when the spectrum of the matrix decays slowly, and the improvement is theoretically quantified. The spectral shifting method can also be applied to improve other SPSD matrix approximation models.",
"title": ""
},
{
"docid": "neg:1840361_14",
"text": "Humans prefer to interact with each other using speech. Since this is the most natural mode of communication, the humans also want to interact with machines using speech only. So, automatic speech recognition has gained a lot of popularity. Different approaches for speech recognition exists like Hidden Markov Model (HMM), Dynamic Time Warping (DTW), Vector Quantization (VQ), etc. This paper uses Neural Network (NN) along with Mel Frequency Cepstrum Coefficients (MFCC) for speech recognition. Mel Frequency Cepstrum Coefiicients (MFCC) has been used for the feature extraction of speech. This gives the feature of the waveform. For pattern matching FeedForward Neural Network with Back propagation algorithm has been applied. The paper analyzes the various training algorithms present for training the Neural Network and uses train scg for the experiment. The work has been done on MATLAB and experimental results show that system is able to recognize words at sufficiently high accuracy.",
"title": ""
},
{
"docid": "neg:1840361_15",
"text": "Using Blockchain seems a promising approach for Business Process Reengineering (BPR) to alleviate trust issues among stakeholders, by providing decentralization, transparency, traceability, and immutability of information along with its business logic. However, little work seems to be available on utilizing Blockchain for supporting BPR in a systematic and rational way, potentially leading to disappointments and even doubts on the utility of Blockchain. In this paper, as ongoing research, we outline Fides - a framework for exploiting Blockchain towards enhancing the trustworthiness for BPR. Fides supports diagnosing trust issues with AS-IS business processes, exploring TO-BE business process alternatives using Blockchain, and selecting among the alternatives. A business process of a retail chain for a food supply chain is used throughout the paper to illustrate Fides concepts.",
"title": ""
},
{
"docid": "neg:1840361_16",
"text": "In this paper, we propose the factorized hidden layer FHL approach to adapt the deep neural network DNN acoustic models for automatic speech recognition ASR. FHL aims at modeling speaker dependent SD hidden layers by representing an SD affine transformation as a linear combination of bases. The combination weights are low-dimensional speaker parameters that can be initialized using speaker representations like i-vectors and then reliably refined in an unsupervised adaptation fashion. Therefore, our method provides an efficient way to perform both adaptive training and test-time adaptation. Experimental results have shown that the FHL adaptation improves the ASR performance significantly, compared to the standard DNN models, as well as other state-of-the-art DNN adaptation approaches, such as training with the speaker-normalized CMLLR features, speaker-aware training using i-vector and learning hidden unit contributions LHUC. For Aurora 4, FHL achieves 3.8% and 2.3% absolute improvements over the standard DNNs trained on the LDA + STC and CMLLR features, respectively. It also achieves 1.7% absolute performance improvement over a system that combines the i-vector adaptive training with LHUC adaptation. For the AMI dataset, FHL achieved 1.4% and 1.9% absolute improvements over the sequence-trained CMLLR baseline systems, for the IHM and SDM tasks, respectively.",
"title": ""
},
{
"docid": "neg:1840361_17",
"text": "A popular approach to solving large probabilistic systems relies on aggregating states based on a measure of similarity. Many approaches in the literature are heuristic. A number of recent methods rely instead on metrics based on the notion of bisimulation, or behavioral equivalence between states (Givan et al., 2003; Ferns et al., 2004). An integral component of such metrics is the Kantorovich metric between probability distributions. However, while this metric enables many satisfying theoretical properties, it is costly to compute in practice. In this paper, we use techniques from network optimization and statistical sampling to overcome this problem. We obtain in this manner a variety of distance functions for MDP state aggregation that differ in the tradeoff between time and space complexity, as well as the quality of the aggregation. We provide an empirical evaluation of these tradeoffs.",
"title": ""
},
{
"docid": "neg:1840361_18",
"text": "Non-lethal dose of 70% ethanol extract of the Nerium oleander dry leaves (1000 mg/kg body weight) was subcutaneously injected into male and female mice once a week for 9 weeks (total 10 doses). One day after the last injection, final body weight gain (relative percentage to the initial body weight) had a tendency, in both males and females, towards depression suggesting a metabolic insult at other sites than those involved in myocardial function. Multiple exposure of the mice to the specified dose failed to express a significant influence on blood parameters (WBC, RBC, Hb, HCT, PLT) as well as myocardium. On the other hand, a lethal dose (4000 mg/kg body weight) was capable of inducing progressive changes in myocardial electrical activity ending up in cardiac arrest. The electrocardiogram abnormalities could be brought about by the expected Na+, K(+)-ATPase inhibition by the cardiac glycosides (cardenolides) content of the lethal dose.",
"title": ""
},
{
"docid": "neg:1840361_19",
"text": "Cloud computing is one of the most useful technology that is been widely used all over the world. It generally provides on demand IT services and products. Virtualization plays a major role in cloud computing as it provides a virtual storage and computing services to the cloud clients which is only possible through virtualization. Cloud computing is a new business computing paradigm that is based on the concepts of virtualization, multi-tenancy, and shared infrastructure. This paper discusses about cloud computing, how virtualization is done in cloud computing, virtualization basic architecture, its advantages and effects [1].",
"title": ""
}
] |
1840362 | Chainsaw: Chained Automated Workflow-based Exploit Generation | [
{
"docid": "pos:1840362_0",
"text": "STRANGER is an automata-based string analysis tool for finding and eliminating string-related security vulnerabilities in P H applications. STRANGER uses symbolic forward and backward reachability analyses t o compute the possible values that the string expressions can take during progr am execution. STRANGER can automatically (1) prove that an application is free from specified attacks or (2) generate vulnerability signatures that c racterize all malicious inputs that can be used to generate attacks.",
"title": ""
}
] | [
{
"docid": "neg:1840362_0",
"text": "It is well accepted that pain is a multidimensional experience, but little is known of how the brain represents these dimensions. We used positron emission tomography (PET) to indirectly measure pain-evoked cerebral activity before and after hypnotic suggestions were given to modulate the perceived intensity of a painful stimulus. These techniques were similar to those of a previous study in which we gave suggestions to modulate the perceived unpleasantness of a noxious stimulus. Ten volunteers were scanned while tonic warm and noxious heat stimuli were presented to the hand during four experimental conditions: alert control, hypnosis control, hypnotic suggestions for increased-pain intensity and hypnotic suggestions for decreased-pain intensity. As shown in previous brain imaging studies, noxious thermal stimuli presented during the alert and hypnosis-control conditions reliably activated contralateral structures, including primary somatosensory cortex (S1), secondary somatosensory cortex (S2), anterior cingulate cortex, and insular cortex. Hypnotic modulation of the intensity of the pain sensation led to significant changes in pain-evoked activity within S1 in contrast to our previous study in which specific modulation of pain unpleasantness (affect), independent of pain intensity, produced specific changes within the ACC. This double dissociation of cortical modulation indicates a relative specialization of the sensory and the classical limbic cortical areas in the processing of the sensory and affective dimensions of pain.",
"title": ""
},
{
"docid": "neg:1840362_1",
"text": "A boomerang-shaped alar base excision is described to narrow the nasal base and correct the excessive alar flare. The boomerang excision combined the external alar wedge resection with an internal vestibular floor excision. The internal excision was inclined 30 to 45 degrees laterally to form the inner limb of the boomerang. The study included 46 patients presenting with wide nasal base and excessive alar flaring. All cases were followed for a mean period of 18 months (range, 8 to 36 months). The laterally oriented vestibular floor excision allowed for maximum preservation of the natural curvature of the alar rim where it meets the nostril floor and upon its closure resulted in a considerable medialization of alar lobule, which significantly reduced the amount of alar flare and the amount of external alar excision needed. This external alar excision measured, on average, 3.8 mm (range, 2 to 8 mm), which is significantly less than that needed when a standard vertical internal excision was used ( P < 0.0001). Such conservative external excisions eliminated the risk of obliterating the natural alar-facial crease, which did not occur in any of our cases. No cases of postoperative bleeding, infection, or vestibular stenosis were encountered. Keloid or hypertrophic scar formation was not encountered; however, dermabrasion of the scars was needed in three (6.5%) cases to eliminate apparent suture track marks. The boomerang alar base excision proved to be a safe and effective technique for narrowing the nasal base and elimination of the excessive flaring and resulted in a natural, well-proportioned nasal base with no obvious scarring.",
"title": ""
},
{
"docid": "neg:1840362_2",
"text": "This work discusses a mix of challenges arising from Watson Discovery Advisor (WDA), an industrial strength descendant of the Watson Jeopardy! Question Answering system currently used in production in industry settings. Typical challenges include generation of appropriate training questions, adaptation to new industry domains, and iterative improvement of the system through manual error analyses.",
"title": ""
},
{
"docid": "neg:1840362_3",
"text": "A novel chaotic time-series prediction method based on support vector machines (SVMs) and echo-state mechanisms is proposed. The basic idea is replacing \"kernel trick\" with \"reservoir trick\" in dealing with nonlinearity, that is, performing linear support vector regression (SVR) in the high-dimension \"reservoir\" state space, and the solution benefits from the advantages from structural risk minimization principle, and we call it support vector echo-state machines (SVESMs). SVESMs belong to a special kind of recurrent neural networks (RNNs) with convex objective function, and their solution is global, optimal, and unique. SVESMs are especially efficient in dealing with real life nonlinear time series, and its generalization ability and robustness are obtained by regularization operator and robust loss function. The method is tested on the benchmark prediction problem of Mackey-Glass time series and applied to some real life time series such as monthly sunspots time series and runoff time series of the Yellow River, and the prediction results are promising",
"title": ""
},
{
"docid": "neg:1840362_4",
"text": "from its introductory beginning and across its 446 pages, centered around the notion that computer simulations and games are not at all disparate but very much aligning concepts. This not only makes for an interesting premise but also an engaging book overall which offers a resource into an educational subject (for it is educational simulations that the authors predominantly address) which is not overly saturated. The aim of the book as a result of this decision, which is explained early on, but also because of its subsequent structure, is to enlighten its intended audience in the way that effective and successful simulations/games operate (on a theoretical/conceptual and technical level, although in the case of the latter the book intentionally never delves into the realms of software programming specifics per se), can be designed, built and, finally, evaluated. The book is structured in three different and distinct parts, with four chapters in the first, six chapters in the second and six chapters in the third and final one. The first chapter is essentially a \" teaser \" , according to the authors. There are a couple of more traditional simulations described, a couple of well-known mainstream games (Mario Kart and Portal 2, interesting choices, especially the first one) and then the authors proceed to present applications which show the simulation and game convergence. These applications have a strong educational outlook (covering on this occasion very diverse topics, from flood prevention to drink driving awareness, amongst others). This chapter works very well in initiating the audience in the subject matter and drawing the necessary parallels. With all of the simula-tions/games/educational applications included BOOK REVIEW",
"title": ""
},
{
"docid": "neg:1840362_5",
"text": "User stories are a widely used notation for formulating requirements in agile development projects. Despite their popularity in industry, little to no academic work is available on assessing their quality. The few existing approaches are too generic or employ highly qualitative metrics. We propose the Quality User Story Framework, consisting of 14 quality criteria that user story writers should strive to conform to. Additionally, we introduce the conceptual model of a user story, which we rely on to design the AQUSA software tool. AQUSA aids requirements engineers in turning raw user stories into higher-quality ones by exposing defects and deviations from good practice in user stories. We evaluate our work by applying the framework and a prototype implementation to three user story sets from industry.",
"title": ""
},
{
"docid": "neg:1840362_6",
"text": "Reinforcement learning has recently gained popularity due to its many successful applications in various fields. In this project reinforcement learning is implemented in a simple warehouse situation where robots have to learn to interact with each other while performing specific tasks. The aim is to study whether reinforcement learning can be used to train multiple agents. Two different methods have been used to achieve this aim, Q-learning and deep Q-learning. Due to practical constraints, this paper cannot provide a comprehensive review of real life robot interactions. Both methods are tested on single-agent and multi-agent models in Python computer simulations. The results show that the deep Q-learning model performed better in the multiagent simulations than the Q-learning model and it was proven that agents can learn to perform their tasks to some degree. Although, the outcome of this project cannot yet be considered sufficient for moving the simulation into reallife, it was concluded that reinforcement learning and deep learning methods can be seen as suitable for modelling warehouse robots and their interactions.",
"title": ""
},
{
"docid": "neg:1840362_7",
"text": "Pregabalin is a substance which modulates monoamine release in \"hyper-excited\" neurons. It binds potently to the α2-δ subunit of calcium channels. Pilotstudies on alcohol- and benzodiazepine dependent patients reported a reduction of withdrawal symptoms through Pregabalin. To our knowledge, no studies have been conducted so far assessing this effect in opiate dependent patients. We report the case of a 43-year-old patient with Pregabalin intake during opiate withdrawal. Multiple inpatient and outpatient detoxifications from maintenance replacement therapy with Buprenorphine in order to reach complete abstinence did not show success because of extended withdrawal symptoms and repeated drug intake. Finally he disrupted his heroine intake with a simultaneously self administration of 300 mg Pregabaline per day and was able to control the withdrawal symptoms. In this time we did control the Pregabalin level in serum and urine in our outpatient clinic. In the course the patient reported that he could treat further relapse with opiate or opioids with Pregabalin successful. This case shows first details for Pregabalin to relief withdrawal symptoms in opiate withdrawal.",
"title": ""
},
{
"docid": "neg:1840362_8",
"text": "Deep neural networks have proven to be particularly eective in visual and audio recognition tasks. Existing models tend to be computationally expensive and memory intensive, however, and so methods for hardwareoriented approximation have become a hot topic. Research has shown that custom hardware-based neural network accelerators can surpass their general-purpose processor equivalents in terms of both throughput and energy eciency. Application-tailored accelerators, when co-designed with approximation-based network training methods, transform large, dense and computationally expensive networks into small, sparse and hardware-ecient alternatives, increasing the feasibility of network deployment. In this article, we provide a comprehensive evaluation of approximation methods for high-performance network inference along with in-depth discussion of their eectiveness for custom hardware implementation. We also include proposals for future research based on a thorough analysis of current trends. is article represents the rst survey providing detailed comparisons of custom hardware accelerators featuring approximation for both convolutional and recurrent neural networks, through which we hope to inspire exciting new developments in the eld.",
"title": ""
},
{
"docid": "neg:1840362_9",
"text": "The sharp increase of plastic wastes results in great social and environmental pressures, and recycling, as an effective way currently available to reduce the negative impacts of plastic wastes, represents one of the most dynamic areas in the plastics industry today. Froth flotation is a promising method to solve the key problem of recycling process, namely separation of plastic mixtures. This review surveys recent literature on plastics flotation, focusing on specific features compared to ores flotation, strategies, methods and principles, flotation equipments, and current challenges. In terms of separation methods, plastics flotation is divided into gamma flotation, adsorption of reagents, surface modification and physical regulation.",
"title": ""
},
{
"docid": "neg:1840362_10",
"text": "This paper presents the DeepCD framework which learns a pair of complementary descriptors jointly for image patch representation by employing deep learning techniques. It can be achieved by taking any descriptor learning architecture for learning a leading descriptor and augmenting the architecture with an additional network stream for learning a complementary descriptor. To enforce the complementary property, a new network layer, called data-dependent modulation (DDM) layer, is introduced for adaptively learning the augmented network stream with the emphasis on the training data that are not well handled by the leading stream. By optimizing the proposed joint loss function with late fusion, the obtained descriptors are complementary to each other and their fusion improves performance. Experiments on several problems and datasets show that the proposed method1 is simple yet effective, outperforming state-of-the-art methods.",
"title": ""
},
{
"docid": "neg:1840362_11",
"text": "Real-world tasks are often highly structured. Hierarchical reinforcement learning (HRL) has attracted research interest as an approach for leveraging the hierarchical structure of a given task in reinforcement learning (RL). However, identifying the hierarchical policy structure that enhances the performance of RL is not a trivial task. In this paper, we propose an HRL method that learns a latent variable of a hierarchical policy using mutual information maximization. Our approach can be interpreted as a way to learn a discrete and latent representation of the state-action space. To learn option policies that correspond to modes of the advantage function, we introduce advantage-weighted importance sampling. In our HRL method, the gating policy learns to select option policies based on an option-value function, and these option policies are optimized based on the deterministic policy gradient method. This framework is derived by leveraging the analogy between a monolithic policy in standard RL and a hierarchical policy in HRL by using a deterministic option policy. Experimental results indicate that our HRL approach can learn a diversity of options and that it can enhance the performance of RL in continuous control tasks.",
"title": ""
},
{
"docid": "neg:1840362_12",
"text": "Data warehouses are users driven; that is, they allow end-users to be in control of the data. As user satisfaction is commonly acknowledged as the most useful measurement of system success, we identify the underlying factors of end-user satisfaction with data warehouses and develop an instrument to measure these factors. The study demonstrates that most of the items in classic end-user satisfaction measure are still valid in the data warehouse environment, and that end-user satisfaction with data warehouses depends heavily on the roles and performance of organizational information centers. # 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840362_13",
"text": "Relational databases are queried using database query languages such as SQL. Natural language interfaces to databases (NLIDB) are systems that translate a natural language sentence into a database query. In this modern techno-crazy world, as more and more laymen access various systems and applications through their smart phones and tablets, the need for Natural Language Interfaces (NLIs) has increased manifold. The challenges in Natural language Query processing are interpreting the sentence correctly, removal of various ambiguity and mapping to the appropriate context. Natural language access problem is actually composed of two stages Linguistic processing and Database processing. NLIDB techniques encompass a wide variety of approaches. The approaches include traditional methods such as Pattern Matching, Syntactic Parsing and Semantic Grammar to modern systems such as Intermediate Query Generation, Machine Learning and Ontologies. In this report, various approaches to build NLIDB systems have been analyzed and compared along with their advantages, disadvantages and application areas. Also, a natural language interface to a flight reservation system has been implemented comprising of flight and booking inquiry systems.",
"title": ""
},
{
"docid": "neg:1840362_14",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "neg:1840362_15",
"text": "Despite decades of research attempting to establish conversational interaction between humans and computers, the capabilities of automated conversational systems are still limited. In this paper, we introduce Chorus, a crowd-powered conversational assistant. When using Chorus, end users converse continuously with what appears to be a single conversational partner. Behind the scenes, Chorus leverages multiple crowd workers to propose and vote on responses. A shared memory space helps the dynamic crowd workforce maintain consistency, and a game-theoretic incentive mechanism helps to balance their efforts between proposing and voting. Studies with 12 end users and 100 crowd workers demonstrate that Chorus can provide accurate, topical responses, answering nearly 93% of user queries appropriately, and staying on-topic in over 95% of responses. We also observed that Chorus has advantages over pairing an end user with a single crowd worker and end users completing their own tasks in terms of speed, quality, and breadth of assistance. Chorus demonstrates a new future in which conversational assistants are made usable in the real world by combining human and machine intelligence, and may enable a useful new way of interacting with the crowds powering other systems.",
"title": ""
},
{
"docid": "neg:1840362_16",
"text": "Based on imbalanced data, the predictive models for 5year survivability of breast cancer using decision tree are proposed. After data preprocessing from SEER breast cancer datasets, it is obviously that the category of data distribution is imbalanced. Under-sampling is taken to make up the disadvantage of the performance of models caused by the imbalanced data. The performance of the models is evaluated by AUC under ROC curve, accuracy, specificity and sensitivity with 10-fold stratified cross-validation. The performance of models is best while the distribution of data is approximately equal. Bagging algorithm is used to build an integration decision tree model for predicting breast cancer survivability. Keywords-imbalanced data;decision tree;predictive breast cancer survivability;10-fold stratified cross-validation;bagging algorithm",
"title": ""
},
{
"docid": "neg:1840362_17",
"text": "Matrix rank minimization problem is in general NP-hard. The nuclear norm is used to substitute the rank function in many recent studies. Nevertheless, the nuclear norm approximation adds all singular values together and the approximation error may depend heavily on the magnitudes of singular values. This might restrict its capability in dealing with many practical problems. In this paper, an arctangent function is used as a tighter approximation to the rank function. We use it on the challenging subspace clustering problem. For this nonconvex minimization problem, we develop an effective optimization procedure based on a type of augmented Lagrange multipliers (ALM) method. Extensive experiments on face clustering and motion segmentation show that the proposed method is effective for rank approximation.",
"title": ""
},
{
"docid": "neg:1840362_18",
"text": "Datasets in the LOD cloud are far from being static in their nature and how they are exposed. As resources are added and new links are set, applications consuming the data should be able to deal with these changes. In this paper we investigate how LOD datasets change and what sensible measures there are to accommodate dataset dynamics. We compare our findings with traditional, document-centric studies concerning the “freshness” of the document collections and propose metrics for LOD datasets.",
"title": ""
},
{
"docid": "neg:1840362_19",
"text": "A detailed study on the mechanism of band-to-band tunneling in carbon nanotube field-effect transistors (CNFETs) is presented. Through a dual-gated CNFET structure tunneling currents from the valence into the conduction band and vice versa can be enabled or disabled by changing the gate potential. Different from a conventional device where the Fermi distribution ultimately limits the gate voltage range for switching the device on or off, current flow is controlled here by the valence and conduction band edges in a bandpass-filter-like arrangement. We discuss how the structure of the nanotube is the key enabler of this particular one-dimensional tunneling effect.",
"title": ""
}
] |
1840363 | Linux kernel vulnerabilities: state-of-the-art defenses and open problems | [
{
"docid": "pos:1840363_0",
"text": "This paper presents SUD, a system for running existing Linux device drivers as untrusted user-space processes. Even if the device driver is controlled by a malicious adversary, it cannot compromise the rest of the system. One significant challenge of fully isolating a driver is to confine the actions of its hardware device. SUD relies on IOMMU hardware, PCI express bridges, and messagesignaled interrupts to confine hardware devices. SUD runs unmodified Linux device drivers, by emulating a Linux kernel environment in user-space. A prototype of SUD runs drivers for Gigabit Ethernet, 802.11 wireless, sound cards, USB host controllers, and USB devices, and it is easy to add a new device class. SUD achieves the same performance as an in-kernel driver on networking benchmarks, and can saturate a Gigabit Ethernet link. SUD incurs a CPU overhead comparable to existing runtime driver isolation techniques, while providing much stronger isolation guarantees for untrusted drivers. Finally, SUD requires minimal changes to the kernel—just two kernel modules comprising 4,000 lines of code—which may at last allow the adoption of these ideas in practice.",
"title": ""
},
{
"docid": "pos:1840363_1",
"text": "This article presents a new mechanism that enables applications to run correctly when device drivers fail. Because device drivers are the principal failing component in most systems, reducing driver-induced failures greatly improves overall reliability. Earlier work has shown that an operating system can survive driver failures [Swift et al. 2005], but the applications that depend on them cannot. Thus, while operating system reliability was greatly improved, application reliability generally was not.To remedy this situation, we introduce a new operating system mechanism called a shadow driver. A shadow driver monitors device drivers and transparently recovers from driver failures. Moreover, it assumes the role of the failed driver during recovery. In this way, applications using the failed driver, as well as the kernel itself, continue to function as expected.We implemented shadow drivers for the Linux operating system and tested them on over a dozen device drivers. Our results show that applications and the OS can indeed survive the failure of a variety of device drivers. Moreover, shadow drivers impose minimal performance overhead. Lastly, they can be introduced with only modest changes to the OS kernel and with no changes at all to existing device drivers.",
"title": ""
}
] | [
{
"docid": "neg:1840363_0",
"text": "Most languages have no formal writing system and at best a limited written record. However, textual data is critical to natural language processing and particularly important for the training of language models that would facilitate speech recognition of such languages. Bilingual phonetic dictionaries are often available in some form, since lexicon creation is a fundamental task of documentary linguistics. We investigate the use of such dictionaries to improve language models when textual training data is limited to as few as 1k sentences. The method involves learning cross-lingual word embeddings as a pretraining step in the training of monolingual language models. Results across a number of languages show that language models are improved by such pre-training.",
"title": ""
},
{
"docid": "neg:1840363_1",
"text": "Previous research on online media popularity prediction concluded that the rise in popularity of online videos maintains a conventional logarithmic distribution. However, recent studies have shown that a significant portion of online videos exhibit bursty/sudden rise in popularity, which cannot be accounted for by video domain features alone. In this paper, we propose a novel transfer learning framework that utilizes knowledge from social streams (e.g., Twitter) to grasp sudden popularity bursts in online content. We develop a transfer learning algorithm that can learn topics from social streams allowing us to model the social prominence of video content and improve popularity predictions in the video domain. Our transfer learning framework has the ability to scale with incoming stream of tweets, harnessing physical world event information in real-time. Using data comprising of 10.2 million tweets and 3.5 million YouTube videos, we show that social prominence of the video topic (context) is responsible for the sudden rise in its popularity where social trends have a ripple effect as they spread from the Twitter domain to the video domain. We envision that our cross-domain popularity prediction model will be substantially useful for various media applications that could not be previously solved by traditional multimedia techniques alone.",
"title": ""
},
{
"docid": "neg:1840363_2",
"text": "The recently introduced Galois/Counter Mode (GCM) of operation for block ciphers provides both encryption and message authentication, using universal hashing based on multiplication in a binary finite field. We analyze its security and performance, and show that it is the most efficient mode of operation for high speed packet networks, by using a realistic model of a network crypto module and empirical data from studies of Internet traffic in conjunction with software experiments and hardware designs. GCM has several useful features: it can accept IVs of arbitrary length, can act as a stand-alone message authentication code (MAC), and can be used as an incremental MAC. We show that GCM is secure in the standard model of concrete security, even when these features are used. We also consider several of its important system-security aspects.",
"title": ""
},
{
"docid": "neg:1840363_3",
"text": "We report a male infant with iris coloboma, choanal atresia, postnatal retardation of growth and psychomotor development, genital anomaly, ear anomaly, and anal atresia. In addition, there was cutaneous syndactyly and nail hypoplasia of the second and third fingers on the right and hypoplasia of the left second finger nail. Comparable observations have rarely been reported and possibly represent genetic heterogeneity.",
"title": ""
},
{
"docid": "neg:1840363_4",
"text": "A commonly observed neural correlate of working memory is firing that persists after the triggering stimulus disappears. Substantial effort has been devoted to understanding the many potential mechanisms that may underlie memory-associated persistent activity. These rely either on the intrinsic properties of individual neurons or on the connectivity within neural circuits to maintain the persistent activity. Nevertheless, it remains unclear which mechanisms are at play in the many brain areas involved in working memory. Herein, we first summarize the palette of different mechanisms that can generate persistent activity. We then discuss recent work that asks which mechanisms underlie persistent activity in different brain areas. Finally, we discuss future studies that might tackle this question further. Our goal is to bridge between the communities of researchers who study either single-neuron biophysical, or neural circuit, mechanisms that can generate the persistent activity that underlies working memory.",
"title": ""
},
{
"docid": "neg:1840363_5",
"text": "Fundamental observations and principles derived from traditional physiological studies of multisensory integration have been difficult to reconcile with computational and psychophysical studies that share the foundation of probabilistic (Bayesian) inference. We review recent work on multisensory integration, focusing on experiments that bridge single-cell electrophysiology, psychophysics, and computational principles. These studies show that multisensory (visual-vestibular) neurons can account for near-optimal cue integration during the perception of self-motion. Unlike the nonlinear (superadditive) interactions emphasized in some previous studies, visual-vestibular neurons accomplish near-optimal cue integration through subadditive linear summation of their inputs, consistent with recent computational theories. Important issues remain to be resolved, including the observation that variations in cue reliability appear to change the weights that neurons apply to their different sensory inputs.",
"title": ""
},
{
"docid": "neg:1840363_6",
"text": "Virtual learning environments facilitate online learning, generating and storing large amounts of data during the learning/teaching process. This stored data enables extraction of valuable information using data mining. In this article, we present a systematic mapping, containing 42 papers, where data mining techniques are applied to predict students performance using Moodle data. Results show that decision trees are the most used classification approach. Furthermore, students interactions in forums are the main Moodle attribute analyzed by researchers.",
"title": ""
},
{
"docid": "neg:1840363_7",
"text": "Neural network techniques are widely used in network embedding, boosting the result of node classification, link prediction, visualization and other tasks in both aspects of efficiency and quality. All the state of art algorithms put effort on the neighborhood information and try to make full use of it. However, it is hard to recognize core periphery structures simply based on neighborhood. In this paper, we first discuss the influence brought by random-walk based sampling strategies to the embedding results. Theoretical and experimental evidences show that random-walk based sampling strategies fail to fully capture structural equivalence. We present a new method, SNS, that performs network embeddings using structural information (namely graphlets) to enhance its quality. SNS effectively utilizes both neighbor information and local-subgraphs similarity to learn node embeddings. This is the first framework that combines these two aspects as far as we know, positively merging two important areas in graph mining and machine learning. Moreover, we investigate what kinds of local-subgraph features matter the most on the node classification task, which enables us to further improve the embedding quality. Experiments show that our algorithm outperforms other unsupervised and semi-supervised neural network embedding algorithms on several real-world datasets.",
"title": ""
},
{
"docid": "neg:1840363_8",
"text": "Categorical models of emotions posit neurally and physiologically distinct human basic emotions. We tested this assumption by using multivariate pattern analysis (MVPA) to classify brain activity patterns of 6 basic emotions (disgust, fear, happiness, sadness, anger, and surprise) in 3 experiments. Emotions were induced with short movies or mental imagery during functional magnetic resonance imaging. MVPA accurately classified emotions induced by both methods, and the classification generalized from one induction condition to another and across individuals. Brain regions contributing most to the classification accuracy included medial and inferior lateral prefrontal cortices, frontal pole, precentral and postcentral gyri, precuneus, and posterior cingulate cortex. Thus, specific neural signatures across these regions hold representations of different emotional states in multimodal fashion, independently of how the emotions are induced. Similarity of subjective experiences between emotions was associated with similarity of neural patterns for the same emotions, suggesting a direct link between activity in these brain regions and the subjective emotional experience.",
"title": ""
},
{
"docid": "neg:1840363_9",
"text": "OBJECTIVE\nTo report an ataxic variant of Alzheimer disease expressing a novel molecular phenotype.\n\n\nDESIGN\nDescription of a novel phenotype associated with a presenilin 1 mutation.\n\n\nSETTING\nThe subject was an outpatient who was diagnosed at the local referral center.\n\n\nPATIENT\nA 28-year-old man presented with psychiatric symptoms and cerebellar signs, followed by cognitive dysfunction. Severe beta-amyloid (Abeta) deposition was accompanied by neurofibrillary tangles and cell loss in the cerebral cortex and by Purkinje cell dendrite loss in the cerebellum. A presenilin 1 gene (PSEN1) S170F mutation was detected.\n\n\nMAIN OUTCOME MEASURES\nWe analyzed the processing of Abeta precursor protein in vitro as well as the Abeta species in brain tissue.\n\n\nRESULTS\nThe PSEN1 S170F mutation induced a 3-fold increase of both secreted Abeta(42) and Abeta(40) species and a 60% increase of secreted Abeta precursor protein in transfected cells. Soluble and insoluble fractions isolated from brain tissue showed a prevalence of N-terminally truncated Abeta species ending at both residues 40 and 42.\n\n\nCONCLUSION\nThese findings define a new Alzheimer disease molecular phenotype and support the concept that the phenotypic variability associated with PSEN1 mutations may be dictated by the Abeta aggregates' composition.",
"title": ""
},
{
"docid": "neg:1840363_10",
"text": "We discuss the use of a double exponentially tapered slot antenna (DETSA) fabricated on flexible liquid crystal polymer (LCP) as a candidate for ultrawideband (UWB) communications systems. The features of the antenna and the effect of the antenna on a transmitted pulse are investigated. Return loss and E and H plane radiation pattern measurements are presented in several frequencies covering the whole ultra wide band. The return loss remains below -10 dB and the shape of the radiation pattern remains fairly constant in the whole UWB range (3.1 to 10.6 GHz). The main lobe characteristic of the radiation pattern remains stable even when the antenna is significantly conformed. The major effect of the conformation is an increase in the cross polarization component amplitude. The system: transmitter DETSA-channel receiver DETSA is measured in frequency domain and shows that the antenna adds very little distortion on a transmitted pulse. The distortion remains small even when both transmitter and receiver antennas are folded, although it increases slightly.",
"title": ""
},
{
"docid": "neg:1840363_11",
"text": "Face and eye detection algorithms are deployed in a wide variety of applications. Unfortunately, there has been no quantitative comparison of how these detectors perform under difficult circumstances. We created a dataset of low light and long distance images which possess some of the problems encountered by face and eye detectors solving real world problems. The dataset we created is composed of reimaged images (photohead) and semi-synthetic heads imaged under varying conditions of low light, atmospheric blur, and distances of 3m, 50m, 80m, and 200m. This paper analyzes the detection and localization performance of the participating face and eye algorithms compared with the Viola Jones detector and four leading commercial face detectors. Performance is characterized under the different conditions and parameterized by per-image brightness and contrast. In localization accuracy for eyes, the groups/companies focusing on long-range face detection outperform leading commercial applications.",
"title": ""
},
{
"docid": "neg:1840363_12",
"text": "We introduce TensorFlow Agents, an efficient infrastructure paradigm for building parallel reinforcement learning algorithms in TensorFlow. We simulate multiple environments in parallel, and group them to perform the neural network computation on a batch rather than individual observations. This allows the TensorFlow execution engine to parallelize computation, without the need for manual synchronization. Environments are stepped in separate Python processes to progress them in parallel without interference of the global interpreter lock. As part of this project, we introduce BatchPPO, an efficient implementation of the proximal policy optimization algorithm. By open sourcing TensorFlow Agents, we hope to provide a flexible starting point for future projects that accelerates future research in the field.",
"title": ""
},
{
"docid": "neg:1840363_13",
"text": "We investigate the use of Deep Q-Learning to control a simulated car via reinforcement learning. We start by implementing the approach of [5] ourselves, and then experimenting with various possible alterations to improve performance on our selected task. In particular, we experiment with various reward functions to induce specific driving behavior, double Q-learning, gradient update rules, and other hyperparameters. We find we are successfully able to train an agent to control the simulated car in JavaScript Racer [3] in some respects. Our agent successfully learned the turning operation, progressively gaining the ability to navigate larger sections of the simulated raceway without crashing. In obstacle avoidance, however, our agent faced challenges which we suspect are due to insufficient training time.",
"title": ""
},
{
"docid": "neg:1840363_14",
"text": "We present, GEM, the first heterogeneous graph neural network approach for detecting malicious accounts at Alipay, one of the world's leading mobile cashless payment platform. Our approach, inspired from a connected subgraph approach, adaptively learns discriminative embeddings from heterogeneous account-device graphs based on two fundamental weaknesses of attackers, i.e. device aggregation and activity aggregation. For the heterogeneous graph consists of various types of nodes, we propose an attention mechanism to learn the importance of different types of nodes, while using the sum operator for modeling the aggregation patterns of nodes in each type. Experiments show that our approaches consistently perform promising results compared with competitive methods over time.",
"title": ""
},
{
"docid": "neg:1840363_15",
"text": "Understanding foreign speech is difficult, in part because of unusual mappings between sounds and words. It is known that listeners in their native language can use lexical knowledge (about how words ought to sound) to learn how to interpret unusual speech-sounds. We therefore investigated whether subtitles, which provide lexical information, support perceptual learning about foreign speech. Dutch participants, unfamiliar with Scottish and Australian regional accents of English, watched Scottish or Australian English videos with Dutch, English or no subtitles, and then repeated audio fragments of both accents. Repetition of novel fragments was worse after Dutch-subtitle exposure but better after English-subtitle exposure. Native-language subtitles appear to create lexical interference, but foreign-language subtitles assist speech learning by indicating which words (and hence sounds) are being spoken.",
"title": ""
},
{
"docid": "neg:1840363_16",
"text": "The content-based image retrieval methods are developed to help people find what they desire based on preferred images instead of linguistic information. This paper focuses on capturing the image features representing details of the collar designs, which is important for people to choose clothing. The quality of the feature extraction methods is important for the queries. This paper presents several new methods for the collar-design feature extraction. A prototype of clothing image retrieval system based on relevance feedback approach and optimum-path forest algorithm is also developed to improve the query results and allows users to find clothing image of more preferred design. A series of experiments are conducted to test the qualities of the feature extraction methods and validate the effectiveness and efficiency of the RF-OPF prototype from multiple aspects. The evaluation scores of initial query results are used to test the qualities of the feature extraction methods. The average scores of all RF steps, the average numbers of RF iterations taken before achieving desired results and the score transition of RF iterations are used to validate the effectiveness and efficiency of the proposed RF-OPF prototype.",
"title": ""
},
{
"docid": "neg:1840363_17",
"text": "This research analyzed the perception of Makassar’s teenagers toward Korean drama and music and their influences to them. Interviews and digital recorder were provided as instruments of the research to ten respondents who are members of Makassar Korean Lover Community. Then, in analyzing data the researchers used descriptive qualitative method that aimed to get deep information about Korean wave in Makassar. The Results of the study found that Makassar’s teenagers put enormous interest in Korean culture especially Korean drama and music. However, most respondents also realize that the presence of Korean culture has a great negative impact to them and their environments. Korean culture itself gives effect in several aspects such as the influence on behavior, Influence on the taste and Influence on the environment as well.",
"title": ""
},
{
"docid": "neg:1840363_18",
"text": "Although there have been many decades of research and commercial presence on high performance general purpose processors, there are still many applications that require fully customized hardware architectures for further computational acceleration. Recently, deep learning has been successfully used to learn in a wide variety of applications, but their heavy computation demand has considerably limited their practical applications. This paper proposes a fully pipelined acceleration architecture to alleviate high computational demand of an artificial neural network (ANN) which is restricted Boltzmann machine (RBM) ANNs. The implemented RBM ANN accelerator (integrating $1024\\times 1024$ network size, using 128 input cases per batch, and running at a 303-MHz clock frequency) integrated in a state-of-the art field-programmable gate array (FPGA) (Xilinx Virtex 7 XC7V-2000T) provides a computational performance of 301-billion connection-updates-per-second and about 193 times higher performance than a software solution running on general purpose processors. Most importantly, the architecture enables over 4 times (12 times in batch learning) higher performance compared with a previous work when both are implemented in an FPGA device (XC2VP70).",
"title": ""
},
{
"docid": "neg:1840363_19",
"text": "Wearable orthoses can function both as assistive devices, which allow the user to live independently, and as rehabilitation devices, which allow the user to regain use of an impaired limb. To be fully wearable, such devices must have intuitive controls, and to improve quality of life, the device should enable the user to perform Activities of Daily Living. In this context, we explore the feasibility of using electromyography (EMG) signals to control a wearable exotendon device to enable pick and place tasks. We use an easy to don, commodity forearm EMG band with 8 sensors to create an EMG pattern classification control for an exotendon device. With this control, we are able to detect a user's intent to open, and can thus enable extension and pick and place tasks. In experiments with stroke survivors, we explore the accuracy of this control in both non-functional and functional tasks. Our results support the feasibility of developing wearable devices with intuitive controls which provide a functional context for rehabilitation.",
"title": ""
}
] |
1840364 | Pain catastrophizing and kinesiophobia: predictors of chronic low back pain. | [
{
"docid": "pos:1840364_0",
"text": "Two studies are presented that investigated 'fear of movement/(re)injury' in chronic musculoskeletal pain and its relation to behavioral performance. The 1st study examines the relation among fear of movement/(re)injury (as measured with the Dutch version of the Tampa Scale for Kinesiophobia (TSK-DV)) (Kori et al. 1990), biographical variables (age, pain duration, gender, use of supportive equipment, compensation status), pain-related variables (pain intensity, pain cognitions, pain coping) and affective distress (fear and depression) in a group of 103 chronic low back pain (CLBP) patients. In the 2nd study, motoric, psychophysiologic and self-report measures of fear are taken from 33 CLBP patients who are exposed to a single and relatively simple movement. Generally, findings demonstrated that the fear of movement/(re)injury is related to gender and compensation status, and more closely to measures of catastrophizing and depression, but in a much lesser degree to pain coping and pain intensity. Furthermore, subjects who report a high degree of fear of movement/(re)injury show more fear and escape/avoidance when exposed to a simple movement. The discussion focuses on the clinical relevance of the construct of fear of movement/(re)injury and research questions that remain to be answered.",
"title": ""
}
] | [
{
"docid": "neg:1840364_0",
"text": "Difficulties in the social domain and motor anomalies have been widely investigated in Autism Spectrum Disorder (ASD). However, they have been generally considered as independent, and therefore tackled separately. Recent advances in neuroscience have hypothesized that the cortical motor system can play a role not only as a controller of elementary physical features of movement, but also in a complex domain as social cognition. Here, going beyond previous studies on ASD that described difficulties in the motor and in the social domain separately, we focus on the impact of motor mechanisms anomalies on social functioning. We consider behavioral, electrophysiological and neuroimaging findings supporting the idea that motor cognition is a critical \"intermediate phenotype\" for ASD. Motor cognition anomalies in ASD affect the processes of extraction, codification and subsequent translation of \"external\" social information into the motor system. Intriguingly, this alternative \"motor\" approach to the social domain difficulties in ASD may be promising to bridge the gap between recent experimental findings and clinical practice, potentially leading to refined preventive approaches and successful treatments.",
"title": ""
},
{
"docid": "neg:1840364_1",
"text": "Information communications technology systems are facing an increasing number of cyber security threats, the majority of which are originated by insiders. As insiders reside behind the enterprise-level security defence mechanisms and often have privileged access to the network, detecting and preventing insider threats is a complex and challenging problem. In fact, many schemes and systems have been proposed to address insider threats from different perspectives, such as intent, type of threat, or available audit data source. This survey attempts to line up these works together with only three most common types of insider namely traitor, masquerader, and unintentional perpetrator, while reviewing the countermeasures from a data analytics perspective. Uniquely, this survey takes into account the early stage threats which may lead to a malicious insider rising up. When direct and indirect threats are put on the same page, all the relevant works can be categorised as host, network, or contextual data-based according to audit data source and each work is reviewed for its capability against insider threats, how the information is extracted from the engaged data sources, and what the decision-making algorithm is. The works are also compared and contrasted. Finally, some issues are raised based on the observations from the reviewed works and new research gaps and challenges identified.",
"title": ""
},
{
"docid": "neg:1840364_2",
"text": "68 AI MAGAZINE Adaptive graphical user interfaces (GUIs) automatically tailor the presentation of functionality to better fit an individual user’s tasks, usage patterns, and abilities. A familiar example of an adaptive interface is the Windows XP start menu, where a small set of applications from the “All Programs” submenu is replicated in the top level of the “Start” menu for easier access, saving users from navigating through multiple levels of the menu hierarchy (figure 1). The potential of adaptive interfaces to reduce visual search time, cognitive load, and motor movement is appealing, and when the adaptation is successful an adaptive interface can be faster and preferred in comparison to a nonadaptive counterpart (for example, Gajos et al. [2006], Greenberg and Witten [1985]). In practice, however, many challenges exist, and, thus far, evaluation results of adaptive interfaces have been mixed. For an adaptive interface to be successful, the benefits of correct adaptations must outweigh the costs, or usability side effects, of incorrect adaptations. Often, an adaptive mechanism designed to improve one aspect of the interaction, typically motor movement or visual search, inadvertently increases effort along another dimension, such as cognitive or perceptual load. The result is that many adaptive designs that were expected to confer a benefit along one of these dimensions have failed in practice. For example, a menu that tracks how frequently each item is used and adaptively reorders itself so that items appear in order from most to least frequently accessed should improve motor performance, but in reality this design can slow users down and reduce satisfaction because of the constantly changing layout (Mitchell and Schneiderman [1989]; for example, figure 2b). Commonly cited issues with adaptive interfaces include the lack of control the user has over the adaptive process and the difficulty that users may have in predicting what the system’s response will be to a user action (Höök 2000). User evaluation of adaptive GUIs is more complex than eval-",
"title": ""
},
{
"docid": "neg:1840364_3",
"text": "We construct a family of extremely simple bijections that yield Cayley’s famous formula for counting trees. The weight preserving properties of these bijections furnish a number of multivariate generating functions for weighted Cayley trees. Essentially the same idea is used to derive bijective proofs and q-analogues for the number of spanning trees of other graphs, including the complete bipartite and complete tripartite graphs. These bijections also allow the calculation of explicit formulas for the expected number of various statistics on Cayley trees.",
"title": ""
},
{
"docid": "neg:1840364_4",
"text": "Extracting biomedical entities and their relations from text has important applications on biomedical research. Previous work primarily utilized feature-based pipeline models to process this task. Many efforts need to be made on feature engineering when feature-based models are employed. Moreover, pipeline models may suffer error propagation and are not able to utilize the interactions between subtasks. Therefore, we propose a neural joint model to extract biomedical entities as well as their relations simultaneously, and it can alleviate the problems above. Our model was evaluated on two tasks, i.e., the task of extracting adverse drug events between drug and disease entities, and the task of extracting resident relations between bacteria and location entities. Compared with the state-of-the-art systems in these tasks, our model improved the F1 scores of the first task by 5.1% in entity recognition and 8.0% in relation extraction, and that of the second task by 9.2% in relation extraction. The proposed model achieves competitive performances with less work on feature engineering. We demonstrate that the model based on neural networks is effective for biomedical entity and relation extraction. In addition, parameter sharing is an alternative method for neural models to jointly process this task. Our work can facilitate the research on biomedical text mining.",
"title": ""
},
{
"docid": "neg:1840364_5",
"text": "Class imbalance is a common problem in the case of real-world object detection and classification tasks. Data of some classes are abundant, making them an overrepresented majority, and data of other classes are scarce, making them an underrepresented minority. This imbalance makes it challenging for a classifier to appropriately learn the discriminating boundaries of the majority and minority classes. In this paper, we propose a cost-sensitive (CoSen) deep neural network, which can automatically learn robust feature representations for both the majority and minority classes. During training, our learning procedure jointly optimizes the class-dependent costs and the neural network parameters. The proposed approach is applicable to both binary and multiclass problems without any modification. Moreover, as opposed to data-level approaches, we do not alter the original data distribution, which results in a lower computational cost during the training process. We report the results of our experiments on six major image classification data sets and show that the proposed approach significantly outperforms the baseline algorithms. Comparisons with popular data sampling techniques and CoSen classifiers demonstrate the superior performance of our proposed method.",
"title": ""
},
{
"docid": "neg:1840364_6",
"text": "INTRODUCTION\nTumeric is a spice that comes from the root Curcuma longa, a member of the ginger family, Zingaberaceae. In Ayurveda (Indian traditional medicine), tumeric has been used for its medicinal properties for various indications and through different routes of administration, including topically, orally, and by inhalation. Curcuminoids are components of tumeric, which include mainly curcumin (diferuloyl methane), demethoxycurcumin, and bisdemethoxycurcmin.\n\n\nOBJECTIVES\nThe goal of this systematic review of the literature was to summarize the literature on the safety and anti-inflammatory activity of curcumin.\n\n\nMETHODS\nA search of the computerized database MEDLINE (1966 to January 2002), a manual search of bibliographies of papers identified through MEDLINE, and an Internet search using multiple search engines for references on this topic was conducted. The PDR for Herbal Medicines, and four textbooks on herbal medicine and their bibliographies were also searched.\n\n\nRESULTS\nA large number of studies on curcumin were identified. These included studies on the antioxidant, anti-inflammatory, antiviral, and antifungal properties of curcuminoids. Studies on the toxicity and anti-inflammatory properties of curcumin have included in vitro, animal, and human studies. A phase 1 human trial with 25 subjects using up to 8000 mg of curcumin per day for 3 months found no toxicity from curcumin. Five other human trials using 1125-2500 mg of curcumin per day have also found it to be safe. These human studies have found some evidence of anti-inflammatory activity of curcumin. The laboratory studies have identified a number of different molecules involved in inflammation that are inhibited by curcumin including phospholipase, lipooxygenase, cyclooxygenase 2, leukotrienes, thromboxane, prostaglandins, nitric oxide, collagenase, elastase, hyaluronidase, monocyte chemoattractant protein-1 (MCP-1), interferon-inducible protein, tumor necrosis factor (TNF), and interleukin-12 (IL-12).\n\n\nCONCLUSIONS\nCurcumin has been demonstrated to be safe in six human trials and has demonstrated anti-inflammatory activity. It may exert its anti-inflammatory activity by inhibition of a number of different molecules that play a role in inflammation.",
"title": ""
},
{
"docid": "neg:1840364_7",
"text": "Suggesting that empirical work in the field of reading has advanced sufficiently to allow substantial agreed-upon results and conclusions, this literature review cuts through the detail of partially convergent, sometimes discrepant research findings to provide an integrated picture of how reading develops and how reading instruction should proceed. The focus of the review is prevention. Sketched is a picture of the conditions under which reading is most likely to develop easily--conditions that include stimulating preschool environments, excellent reading instruction, and the absence of any of a wide array of risk factors. It also provides recommendations for practice as well as recommendations for further research. After a preface and executive summary, chapters are (1) Introduction; (2) The Process of Learning to Read; (3) Who Has Reading Difficulties; (4) Predic:ors of Success and Failure in Reading; (5) Preventing Reading Difficulties before Kindergarten; (6) Instructional Strategies for Kindergarten and the Primary Grades; (7) Organizational Strategies for Kindergarten and the Primary Grades; (8) Helping Children with Reading Difficulties in Grades 1 to 3; (9) The Agents of Change; and (10) Recommendations for Practice and Research. Contains biographical sketches of the committee members and an index. Contains approximately 800 references.",
"title": ""
},
{
"docid": "neg:1840364_8",
"text": "Creating short summaries of documents with respect to a query has applications in for example search engines, where it may help inform users of the most relevant results. Constructing such a summary automatically, with the potential expressiveness of a human-written summary, is a difficult problem yet to be fully solved. In this thesis, a neural network model for this task is presented. We adapt an existing dataset of news article summaries for the task and train a pointer-generator model using this dataset to summarize such articles. The generated summaries are then evaluated by measuring similarity to reference summaries. We observe that the generated summaries exhibit abstractive properties, but also that they have issues, such as rarely being truthful. However, we show that a neural network summarization model, similar to existing neural network models for abstractive summarization, can be constructed to make use of queries for more targeted summaries.",
"title": ""
},
{
"docid": "neg:1840364_9",
"text": "It is said that there’s nothing so practical as good theory. It may also be said that there’s nothing so theoretically interesting as good practice1. This is particularly true of efforts to relate constructivism as a theory of learning to the practice of instruction. Our goal in this paper is to provide a clear link between the theoretical principles of constructivism, the practice of instructional design, and the practice of teaching. We will begin with a basic characterization of constructivism identifying what we believe to be the central principles in learning and understanding. We will then identify and elaborate on eight instructional principles for the design of a constructivist learning environment. Finally, we will examine what we consider to be one of the best exemplars of a constructivist learning environment -Problem Based Learning as described by Barrows (1985, 1986, 1992).",
"title": ""
},
{
"docid": "neg:1840364_10",
"text": "This paper discusses the trust related issues and arguments (evidence) Internet stores need to provide in order to increase consumer trust. Based on a model of trust from academic literature, in addition to a model of the customer service life cycle, the paper develops a framework that identifies key trust-related issues and organizes them into four categories: personal information, product quality and price, customer service, and store presence. It is further validated by comparing the issues it raises to issues identified in a review of academic studies, and to issues of concern identified in two consumer surveys. The framework is also applied to ten well-known web sites to demonstrate its applicability. The proposed framework will benefit both practitioners and researchers by identifying important issues regarding trust, which need to be accounted for in Internet stores. For practitioners, it provides a guide to the issues Internet stores need to address in their use of arguments. For researchers, it can be used as a foundation for future empirical studies investigating the effects of trust-related arguments on consumers’ trust in Internet stores.",
"title": ""
},
{
"docid": "neg:1840364_11",
"text": "This work proposes a novel deep network architecture to solve the camera ego-motion estimation problem. A motion estimation network generally learns features similar to optical flow (OF) fields starting from sequences of images. This OF can be described by a lower dimensional latent space. Previous research has shown how to find linear approximations of this space. We propose to use an autoencoder network to find a nonlinear representation of the OF manifold. In addition, we propose to learn the latent space jointly with the estimation task, so that the learned OF features become a more robust description of the OF input. We call this novel architecture latent space visual odometry (LS-VO). The experiments show that LS-VO achieves a considerable increase in performances with respect to baselines, while the number of parameters of the estimation network only slightly increases.",
"title": ""
},
{
"docid": "neg:1840364_12",
"text": "BACKGROUND\nUnilateral spatial neglect causes difficulty attending to one side of space. Various rehabilitation interventions have been used but evidence of their benefit is lacking.\n\n\nOBJECTIVES\nTo assess whether cognitive rehabilitation improves functional independence, neglect (as measured using standardised assessments), destination on discharge, falls, balance, depression/anxiety and quality of life in stroke patients with neglect measured immediately post-intervention and at longer-term follow-up; and to determine which types of interventions are effective and whether cognitive rehabilitation is more effective than standard care or an attention control.\n\n\nSEARCH METHODS\nWe searched the Cochrane Stroke Group Trials Register (last searched June 2012), MEDLINE (1966 to June 2011), EMBASE (1980 to June 2011), CINAHL (1983 to June 2011), PsycINFO (1974 to June 2011), UK National Research Register (June 2011). We handsearched relevant journals (up to 1998), screened reference lists, and tracked citations using SCISEARCH.\n\n\nSELECTION CRITERIA\nWe included randomised controlled trials (RCTs) of cognitive rehabilitation specifically aimed at spatial neglect. We excluded studies of general stroke rehabilitation and studies with mixed participant groups, unless more than 75% of their sample were stroke patients or separate stroke data were available.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently selected studies, extracted data, and assessed study quality. For subgroup analyses, review authors independently categorised the approach underlying the cognitive intervention as either 'top-down' (interventions that encourage awareness of the disability and potential compensatory strategies) or 'bottom-up' (interventions directed at the impairment but not requiring awareness or behavioural change, e.g. wearing prisms or patches).\n\n\nMAIN RESULTS\nWe included 23 RCTs with 628 participants (adding 11 new RCTs involving 322 new participants for this update). Only 11 studies were assessed to have adequate allocation concealment, and only four studies to have a low risk of bias in all categories assessed. Most studies measured outcomes using standardised neglect assessments: 15 studies measured effect on activities of daily living (ADL) immediately after the end of the intervention period, but only six reported persisting effects on ADL. One study (30 participants) reported discharge destination and one study (eight participants) reported the number of falls.Eighteen of the 23 included RCTs compared cognitive rehabilitation with any control intervention (placebo, attention or no treatment). Meta-analyses demonstrated no statistically significant effect of cognitive rehabilitation, compared with control, for persisting effects on either ADL (five studies, 143 participants) or standardised neglect assessments (eight studies, 172 participants), or for immediate effects on ADL (10 studies, 343 participants). In contrast, we found a statistically significant effect in favour of cognitive rehabilitation compared with control, for immediate effects on standardised neglect assessments (16 studies, 437 participants, standardised mean difference (SMD) 0.35, 95% confidence interval (CI) 0.09 to 0.62). However, sensitivity analyses including only studies of high methodological quality removed evidence of a significant effect of cognitive rehabilitation.Additionally, five of the 23 included RCTs compared one cognitive rehabilitation intervention with another. 
These included three studies comparing a visual scanning intervention with another cognitive rehabilitation intervention, and two studies (three comparison groups) comparing a visual scanning intervention plus another cognitive rehabilitation intervention with a visual scanning intervention alone. Only two small studies reported a measure of functional disability and there was considerable heterogeneity within these subgroups (I² > 40%) when we pooled standardised neglect assessment data, limiting the ability to draw generalised conclusions.Subgroup analyses exploring the effect of having an attention control demonstrated some evidence of a statistically significant difference between those comparing rehabilitation with attention control and those with another control or no treatment group, for immediate effects on standardised neglect assessments (test for subgroup differences, P = 0.04).\n\n\nAUTHORS' CONCLUSIONS\nThe effectiveness of cognitive rehabilitation interventions for reducing the disabling effects of neglect and increasing independence remains unproven. As a consequence, no rehabilitation approach can be supported or refuted based on current evidence from RCTs. However, there is some very limited evidence that cognitive rehabilitation may have an immediate beneficial effect on tests of neglect. This emerging evidence justifies further clinical trials of cognitive rehabilitation for neglect. However, future studies need to have appropriate high quality methodological design and reporting, to examine persisting effects of treatment and to include an attention control comparator.",
"title": ""
},
{
"docid": "neg:1840364_13",
"text": "To address concerns of TREC-style relevance judgments, we explore two improvements. The first one seeks to make relevance judgments contextual, collecting in situ feedback of users in an interactive search session and embracing usefulness as the primary judgment criterion. The second one collects multidimensional assessments to complement relevance or usefulness judgments, with four distinct alternative aspects examined in this paper - novelty, understandability, reliability, and effort.\n We evaluate different types of judgments by correlating them with six user experience measures collected from a lab user study. Results show that switching from TREC-style relevance criteria to usefulness is fruitful, but in situ judgments do not exhibit clear benefits over the judgments collected without context. In contrast, combining relevance or usefulness with the four alternative judgments consistently improves the correlation with user experience measures, suggesting future IR systems should adopt multi-aspect search result judgments in development and evaluation.\n We further examine implicit feedback techniques for predicting these judgments. We find that click dwell time, a popular indicator of search result quality, is able to predict some but not all dimensions of the judgments. We enrich the current implicit feedback methods using post-click user interaction in a search session and achieve better prediction for all six dimensions of judgments.",
"title": ""
},
{
"docid": "neg:1840364_14",
"text": "A hierarchical scheme for clustering data is presented which applies to spaces with a high number of dimensions ( 3 D N > ). The data set is first reduced to a smaller set of partitions (multi-dimensional bins). Multiple clustering techniques are used, including spectral clustering; however, new techniques are also introduced based on the path length between partitions that are connected to one another. A Line-of-Sight algorithm is also developed for clustering. A test bank of 12 data sets with varying properties is used to expose the strengths and weaknesses of each technique. Finally, a robust clustering technique is discussed based on reaching a consensus among the multiple approaches, overcoming the weaknesses found individually.",
"title": ""
},
{
"docid": "neg:1840364_15",
"text": "We propose a novel decomposition framework for the distributed optimization of general nonconvex sum-utility functions arising naturally in the system design of wireless multi-user interfering systems. Our main contributions are i) the development of the first class of (inexact) Jacobi best-response algorithms with provable convergence, where all the users simultaneously and iteratively solve a suitably convexified version of the original sum-utility optimization problem; ii) the derivation of a general dynamic pricing mechanism that provides a unified view of existing pricing schemes that are based, instead, on heuristics; and iii) a framework that can be easily particularized to well-known applications, giving rise to very efficient practical (Jacobi or Gauss-Seidel) algorithms that outperform existing ad hoc methods proposed for very specific problems. Interestingly, our framework contains as special cases well-known gradient algorithms for nonconvex sum-utility problems, and many block-coordinate descent schemes for convex functions.",
"title": ""
},
{
"docid": "neg:1840364_16",
"text": "Word embedding models learn a distributed vectorial representation for words, which can be used as the basis for (deep) learning models to solve a variety of natural language processing tasks. One of the main disadvantages of current word embedding models is that they learn a single representation for each word in a metric space, as a result of which they cannot appropriately model polysemous words. In this work, we develop a new word embedding model that can accurately represent such words by automatically learning multiple representations for each word, whilst remaining computationally efficient. Without any supervision, our model learns multiple, complementary embeddings that all capture different semantic structure. We demonstrate the potential merits of our model by training it on large text corpora, and evaluating it on word similarity tasks. Our proposed embedding model is competitive with the state of the art and can easily scale to large corpora due to its computational simplicity.",
"title": ""
},
{
"docid": "neg:1840364_17",
"text": "Recently, Uber has emerged as a leader in the \"sharing economy\". Uber is a \"ride sharing\" service that matches willing drivers with customers looking for rides. However, unlike other open marketplaces (e.g., AirBnB), Uber is a black-box: they do not provide data about supply or demand, and prices are set dynamically by an opaque \"surge pricing\" algorithm. The lack of transparency has led to concerns about whether Uber artificially manipulate prices, and whether dynamic prices are fair to customers and drivers. In order to understand the impact of surge pricing on passengers and drivers, we present the first in-depth investigation of Uber. We gathered four weeks of data from Uber by emulating 43 copies of the Uber smartphone app and distributing them throughout downtown San Francisco (SF) and midtown Manhattan. Using our dataset, we are able to characterize the dynamics of Uber in SF and Manhattan, as well as identify key implementation details of Uber's surge price algorithm. Our observations about Uber's surge price algorithm raise important questions about the fairness and transparency of this system.",
"title": ""
},
{
"docid": "neg:1840364_18",
"text": "Current Web search engines are built to serve all users, independent of the special needs of any individual user. Personalization of Web search is to carry out retrieval for each user incorporating his/her interests. We propose a novel technique to learn user profiles from users' search histories. The user profiles are then used to improve retrieval effectiveness in Web search. A user profile and a general profile are learned from the user's search history and a category hierarchy, respectively. These two profiles are combined to map a user query into a set of categories which represent the user's search intention and serve as a context to disambiguate the words in the user's query. Web search is conducted based on both the user query and the set of categories. Several profile learning and category mapping algorithms and a fusion algorithm are provided and evaluated. Experimental results indicate that our technique to personalize Web search is both effective and efficient.",
"title": ""
},
{
"docid": "neg:1840364_19",
"text": "Various powerful people detection methods exist. Surprisingly, most approaches rely on static image features only despite the obvious potential of motion information for people detection. This paper systematically evaluates different features and classifiers in a sliding-window framework. First, our experiments indicate that incorporating motion information improves detection performance significantly. Second, the combination of multiple and complementary feature types can also help improve performance. And third, the choice of the classifier-feature combination and several implementation details are crucial to reach best performance. In contrast to many recent papers experimental results are reported for four different datasets rather than using a single one. Three of them are taken from the literature allowing for direct comparison. The fourth dataset is newly recorded using an onboard camera driving through urban environment. Consequently this dataset is more realistic and more challenging than any currently available dataset.",
"title": ""
}
] |
1840365 | Oruta: Privacy-Preserving Public Auditing for Shared Data in the Cloud | [
{
"docid": "pos:1840365_0",
"text": "Cloud computing is an emerging computing paradigm in which resources of the computing infrastructure are provided as services over the Internet. As promising as it is, this paradigm also brings forth many new challenges for data security and access control when users outsource sensitive data for sharing on cloud servers, which are not within the same trusted domain as data owners. To keep sensitive user data confidential against untrusted servers, existing solutions usually apply cryptographic methods by disclosing data decryption keys only to authorized users. However, in doing so, these solutions inevitably introduce a heavy computation overhead on the data owner for key distribution and data management when fine-grained data access control is desired, and thus do not scale well. The problem of simultaneously achieving fine-grainedness, scalability, and data confidentiality of access control actually still remains unresolved. This paper addresses this challenging open issue by, on one hand, defining and enforcing access policies based on data attributes, and, on the other hand, allowing the data owner to delegate most of the computation tasks involved in fine-grained data access control to untrusted cloud servers without disclosing the underlying data contents. We achieve this goal by exploiting and uniquely combining techniques of attribute-based encryption (ABE), proxy re-encryption, and lazy re-encryption. Our proposed scheme also has salient properties of user access privilege confidentiality and user secret key accountability. Extensive analysis shows that our proposed scheme is highly efficient and provably secure under existing security models.",
"title": ""
}
] | [
{
"docid": "neg:1840365_0",
"text": "Evolutionary population dynamics (EPD) deal with the removal of poor individuals in nature. It has been proven that this operator is able to improve the median fitness of the whole population, a very effective and cheap method for improving the performance of meta-heuristics. This paper proposes the use of EPD in the grey wolf optimizer (GWO). In fact, EPD removes the poor search agents of GWO and repositions them around alpha, beta, or delta wolves to enhance exploitation. The GWO is also required to randomly reinitialize its worst search agents around the search space by EPD to promote exploration. The proposed GWO–EPD algorithm is benchmarked on six unimodal and seven multi-modal test functions. The results are compared to the original GWO algorithm for verification. It is demonstrated that the proposed operator is able to significantly improve the performance of the GWO algorithm in terms of exploration, local optima avoidance, exploitation, local search, and convergence rate.",
"title": ""
},
{
"docid": "neg:1840365_1",
"text": "Boredom and low levels of task engagement while driving can pose road safety risks, e.g., inattention during low traffic, routine trips, or semi-automated driving. Digital technology interventions that increase task engagement, e.g., through performance feedback, increased challenge, and incentives (often referred to as ‘gamification’), could therefore offer safety benefits. To explore the impact of such interventions, we conducted experiments in a highfidelity driving simulator with thirty-two participants. In two counterbalanced conditions (control and intervention), we compared driving behaviour, physiological arousal, and subjective experience. Results indicate that the gamified boredom intervention reduced unsafe coping mechanisms such as speeding while promoting anticipatory driving. We can further infer that the intervention not only increased one’s attention and arousal during the intermittent gamification challenges, but that these intermittent stimuli may also help sustain one’s attention and arousal in between challenges and throughout a drive. At the same time, the gamified condition led to slower hazard reactions and short off-road glances. Our contributions deepen our understanding of driver boredom and pave the way for engaging interventions for safety critical tasks.",
"title": ""
},
{
"docid": "neg:1840365_2",
"text": "We propose an approach for the static analysis of probabilistic programs that sense, manipulate, and control based on uncertain data. Examples include programs used in risk analysis, medical decision making and cyber-physical systems. Correctness properties of such programs take the form of queries that seek the probabilities of assertions over program variables. We present a static analysis approach that provides guaranteed interval bounds on the values (assertion probabilities) of such queries. First, we observe that for probabilistic programs, it is possible to conclude facts about the behavior of the entire program by choosing a finite, adequate set of its paths. We provide strategies for choosing such a set of paths and verifying its adequacy. The queries are evaluated over each path by a combination of symbolic execution and probabilistic volume-bound computations. Each path yields interval bounds that can be summed up with a \"coverage\" bound to yield an interval that encloses the probability of assertion for the program as a whole. We demonstrate promising results on a suite of benchmarks from many different sources including robotic manipulators and medical decision making programs.",
"title": ""
},
{
"docid": "neg:1840365_3",
"text": "Lithium-based battery technology offers performance advantages over traditional battery technologies at the cost of increased monitoring and controls overhead. Multiple-cell Lead-Acid battery packs can be equalized by a controlled overcharge, eliminating the need to periodically adjust individual cells to match the rest of the pack. Lithium-based based batteries cannot be equalized by an overcharge, so alternative methods are required. This paper discusses several cell-balancing methodologies. Active cell balancing methods remove charge from one or more high cells and deliver the charge to one or more low cells. Dissipative techniques find the high cells in the pack, and remove excess energy through a resistive element until their charges match the low cells. This paper presents the theory of charge balancing techniques and the advantages and disadvantages of the presented methods. INTRODUCTION Lithium Ion and Lithium Polymer battery chemistries cannot be overcharged without damaging active materials [1-5]. The electrolyte breakdown voltage is precariously close to the fully charged terminal voltage, typically in the range of 4.1 to 4.3 volts/cell. Therefore, careful monitoring and controls must be implemented to avoid any single cell from experiencing an overvoltage due to excessive charging. Single lithium-based cells require monitoring so that cell voltage does not exceed predefined limits of the chemistry. Series connected lithium cells pose a more complex problem: each cell in the string must be monitored and controlled. Even though the pack voltage may appear to be within acceptable limits, one cell of the series string may be experiencing damaging voltage due to cell-to-cell imbalances. Traditionally, cell-to-cell imbalances in lead-acid batteries have been solved by controlled overcharging [6,7]. Leadacid batteries can be brought into overcharge conditions without permanent cell damage, as the excess energy is released by gassing. This gassing mechanism is the natural method for balancing a series string of lead acid battery cells. Other chemistries, such as NiMH, exhibit similar natural cell-to-cell balancing mechanisms [8]. Because a Lithium battery cannot be overcharged, there is no natural mechanism for cell equalization. Therefore, an alternative method must be employed. This paper discusses three categories of cell balancing methodologies: charging methods, active methods, and passive methods. Cell balancing is necessary for highly transient lithium battery applications, especially those applications where charging occurs frequently, such as regenerative braking in electric vehicle (EV) or hybrid electric vehicle (HEV) applications. Regenerative braking can cause problems for Lithium Ion batteries because the instantaneous regenerative braking current inrush can cause battery voltage to increase suddenly, possibly over the electrolyte breakdown threshold voltage. Deviations in cell behaviors generally occur because of two phenomenon: changes in internal impedance or cell capacity reduction due to aging. In either case, if one cell in a battery pack experiences deviant cell behavior, that cell becomes a likely candidate to overvoltage during high power charging events. Cells with reduced capacity or high internal impedance tend to have large voltage swings when charging and discharging. For HEV applications, it is necessary to cell balance lithium chemistry because of this overvoltage potential. 
For EV applications, cell balancing is desirable to obtain maximum usable capacity from the battery pack. During charging, an out-of-balance cell may prematurely approach the end-of-charge voltage (typically 4.1 to 4.3 volts/cell) and trigger the charger to turn off. Cell balancing is useful to control the higher voltage cells until the rest of the cells can catch up. In this way, the charger is not turned off until the cells simultaneously reach the end-of-charge voltage. END-OF-CHARGE CELL BALANCING METHODS Typically, cell-balancing methods employed during and at end-of-charging are useful only for electric vehicle purposes. This is because electric vehicle batteries are generally fully charged between each use cycle. Hybrid electric vehicle batteries may or may not be maintained fully charged, resulting in unpredictable end-of-charge conditions to enact the balancing mechanism. Hybrid vehicle batteries also require both high power charge (regenerative braking) and discharge (launch assist or boost) capabilities. For this reason, their batteries are usually maintained at a SOC that can discharge the required power but still have enough headroom to accept the necessary regenerative power. To fully charge the HEV battery for cell balancing would diminish charge acceptance capability (regenerative braking). CHARGE SHUNTING The charge-shunting cell balancing method selectively shunts the charging current around each cell as they become fully charged (Figure 1). This method is most efficiently employed on systems with known charge rates. The shunt resistor R is sized to shunt exactly the charging current I when the fully charged cell voltage V is reached. If the charging current decreases, resistor R will discharge the shunted cell. To avoid extremely large power dissipations due to R, this method is best used with stepped-current chargers with a small end-of-charge current.",
"title": ""
},
{
"docid": "neg:1840365_4",
"text": "Greater trochanter pain syndrome due to tendinopathy or bursitis is a common cause of hip pain. The previously reported magnetic resonance (MR) findings of trochanteric tendinopathy and bursitis are peritrochanteric fluid and abductor tendon abnormality. We have often noted peritrochanteric high T2 signal in patients without trochanteric symptoms. The purpose of this study was to determine whether the MR findings of peritrochanteric fluid or hip abductor tendon pathology correlate with trochanteric pain. We retrospectively reviewed 131 consecutive MR examinations of the pelvis (256 hips) for T2 peritrochanteric signal and abductor tendon abnormalities without knowledge of the clinical symptoms. Any T2 peritrochanteric abnormality was characterized by size as tiny, small, medium, or large; by morphology as feathery, crescentic, or round; and by location as bursal or intratendinous. The clinical symptoms of hip pain and trochanteric pain were compared to the MR findings on coronal, sagittal, and axial T2 sequences using chi-square or Fisher’s exact test with significance assigned as p < 0.05. Clinical symptoms of trochanteric pain syndrome were present in only 16 of the 256 hips. All 16 hips with trochanteric pain and 212 (88%) of 240 without trochanteric pain had peritrochanteric abnormalities (p = 0.15). Eighty-eight percent of hips with trochanteric symptoms had gluteus tendinopathy while 50% of those without symptoms had such findings (p = 0.004). Other than tendinopathy, there was no statistically significant difference between hips with or without trochanteric symptoms and the presence of peritrochanteric T2 abnormality, its size or shape, and the presence of gluteus medius or minimus partial thickness tears. Patients with trochanteric pain syndrome always have peritrochanteric T2 abnormalities and are significantly more likely to have abductor tendinopathy on magnetic resonance imaging (MRI). However, although the absence of peritrochanteric T2 MR abnormalities makes trochanteric pain syndrome unlikely, detection of these abnormalities on MRI is a poor predictor of trochanteric pain syndrome as these findings are present in a high percentage of patients without trochanteric pain.",
"title": ""
},
{
"docid": "neg:1840365_5",
"text": "The circadian timing system drives daily rhythmic changes in drug metabolism and controls rhythmic events in cell cycle, DNA repair, apoptosis, and angiogenesis in both normal tissue and cancer. Rodent and human studies have shown that the toxicity and anticancer activity of common cancer drugs can be significantly modified by the time of administration. Altered sleep/activity rhythms are common in cancer patients and can be disrupted even more when anticancer drugs are administered at their most toxic time. Disruption of the sleep/activity rhythm accelerates cancer growth. The complex circadian time-dependent connection between host, cancer and therapy is further impacted by other factors including gender, inter-individual differences and clock gene polymorphism and/or down regulation. It is important to take circadian timing into account at all stages of new drug development in an effort to optimize the therapeutic index for new cancer drugs. Better measures of the individual differences in circadian biology of host and cancer are required to further optimize the potential benefit of chronotherapy for each individual patient.",
"title": ""
},
{
"docid": "neg:1840365_6",
"text": "We suggest an approach to exploratory analysis of diverse types of spatiotemporal data with the use of clustering and interactive visual displays. We can apply the same generic clustering algorithm to different types of data owing to the separation of the process of grouping objects from the process of computing distances between the objects. In particular, we apply the densitybased clustering algorithm OPTICS to events (i.e. objects having spatial and temporal positions), trajectories of moving entities, and spatial distributions of events or moving entities in different time intervals. Distances are computed in a specific way for each type of objects; moreover, it may be useful to have several different distance functions for the same type of objects. Thus, multiple distance functions available for trajectories support different analysis tasks. We demonstrate the use of our approach by example of two datasets from the VAST Challenge 2008: evacuation traces (trajectories of moving entities) and landings and interdictions of migrant boats (events).",
"title": ""
},
{
"docid": "neg:1840365_7",
"text": "A class-E synchronous rectifier has been designed and implemented using 0.13-μm CMOS technology. A design methodology based on the theory of time-reversal duality has been used where a class-E amplifier circuit is transformed into a class-E rectifier circuit. The methodology is distinctly different from other CMOS RF rectifier designs which use voltage multiplier techniques. Power losses in the rectifier are analyzed including saturation resistance in the switch, inductor losses, and current/voltage overlap losses. The rectifier circuit includes a 50-Ω single-ended RF input port with on-chip matching. The circuit is self-biased and completely powered from the RF input signal. Experimental results for the rectifier show a peak RF-to-dc conversion efficiency of 30% measured at a frequency of 2.4 GHz.",
"title": ""
},
{
"docid": "neg:1840365_8",
"text": "The development of powerful imaging tools, editing images for changing their data content is becoming a mark to undertake. Tempering image contents by adding, removing, or copying/moving without leaving a trace or unable to be discovered by the investigation is an issue in the computer forensic world. The protection of information shared on the Internet like images and any other con?dential information is very signi?cant. Nowadays, forensic image investigation tools and techniques objective is to reveal the tempering strategies and restore the firm belief in the reliability of digital media. This paper investigates the challenges of detecting steganography in computer forensics. Open source tools were used to analyze these challenges. The experimental investigation focuses on using steganography applications that use same algorithms to hide information exclusively within an image. The research finding denotes that, if a certain steganography tool A is used to hide some information within a picture, and then tool B which uses the same procedure would not be able to recover the embedded image.",
"title": ""
},
{
"docid": "neg:1840365_9",
"text": "Sentence simplification aims to make sentences easier to read and understand. Most recent approaches draw on insights from machine translation to learn simplification rewrites from monolingual corpora of complex and simple sentences. We address the simplification problem with an encoder-decoder model coupled with a deep reinforcement learning framework. Our model, which we call DRESS (as shorthand for Deep REinforcement Sentence Simplification), explores the space of possible simplifications while learning to optimize a reward function that encourages outputs which are simple, fluent, and preserve the meaning of the input. Experiments on three datasets demonstrate that our model outperforms competitive simplification systems.1",
"title": ""
},
{
"docid": "neg:1840365_10",
"text": "Extracting facial feature is a key step in facial expression recognition (FER). Inaccurate feature extraction very often results in erroneous categorizing of facial expressions. Especially in robotic application, environmental factors such as illumination variation may cause FER system to extract feature inaccurately. In this paper, we propose a robust facial feature point extraction method to recognize facial expression in various lighting conditions. Before extracting facial features, a face is localized and segmented from a digitized image frame. Face preprocessing stage consists of face normalization and feature region localization steps to extract facial features efficiently. As regions of interest corresponding to relevant features are determined, Gabor jets are applied based on Gabor wavelet transformation to extract the facial points. Gabor jets are more invariable and reliable than gray-level values, which suffer from ambiguity as well as illumination variation while representing local features. Each feature point can be matched by a phase-sensitivity similarity function in the relevant regions of interest. Finally, the feature values are evaluated from the geometric displacement of facial points. After tested using the AR face database and the database built in our lab, average facial expression recognition rates of 84.1% and 81.3% are obtained respectively.",
"title": ""
},
{
"docid": "neg:1840365_11",
"text": "Two-dimensional (2-D) analytical permanent-magnet (PM) eddy-current loss calculations are presented for slotless PM synchronous machines (PMSMs) with surface-inset PMs considering the current penetration effect. In this paper, the term slotless implies that either the stator is originally slotted but the slotting effects are neglected or the stator is originally slotless. The analytical magnetic field distribution is computed in polar coordinates from the 2-D subdomain method (i.e., based on formal resolution of Maxwell's equation applied in subdomain). Based on the predicted magnetic field distribution, the eddy-currents induced in the PMs are analytically obtained and the PM eddy-current losses considering eddy-current reaction field are calculated. The analytical expressions can be used for slotless PMSMs with any number of phases and any form of current and overlapping winding distribution. The effects of stator slotting are neglected and the current density distribution is modeled by equivalent current sheets located on the slot opening. To evaluate the efficacy of the proposed technique, the 2-D PM eddy-current losses for two slotless PMSMs are analytically calculated and compared with those obtained by 2-D finite-element analysis (FEA). The effects of the rotor rotational speed and the initial rotor mechanical angular position are investigated. The analytical results are in good agreement with those obtained by the 2-D FEA.",
"title": ""
},
{
"docid": "neg:1840365_12",
"text": "This paper addresses the property requirements of repair materials for high durability performance for concrete structure repair. It is proposed that the high tensile strain capacity of High Performance Fiber Reinforced Cementitious Composites (HPFRCC) makes such materials particularly suitable for repair applications, provided that the fresh properties are also adaptable to those required in placement techniques in typical repair applications. A specific version of HPFRCC, known as Engineered Cementitious Composites (ECC), is described. It is demonstrated that the fresh and hardened properties of ECC meet many of the requirements for durable repair performance. Recent experience in the use of this material in a bridge deck patch repair is highlighted. The origin of this article is a summary of a keynote lecture with the same title given at the Conference on Fiber Composites, High-Performance Concretes and Smart Materials, Chennai, India, Jan., 2004. It is only slightly updated here.",
"title": ""
},
{
"docid": "neg:1840365_13",
"text": "Pretraining with language modeling and related unsupervised tasks has recently been shown to be a very effective enabling technology for the development of neural network models for language understanding tasks. In this work, we show that although language model-style pretraining is extremely effective at teaching models about language, it does not yield an ideal starting point for efficient transfer learning. By supplementing language model-style pretraining with further training on data-rich supervised tasks, we are able to achieve substantial additional performance improvements across the nine target tasks in the GLUE benchmark. We obtain an overall score of 76.9 on GLUE—a 2.3 point improvement over our baseline system adapted from Radford et al. (2018) and a 4.1 point improvement over Radford et al.’s reported score. We further use training data downsampling to show that the benefits of this supplementary training are even more pronounced in data-constrained regimes.",
"title": ""
},
{
"docid": "neg:1840365_14",
"text": "Within the past few years, organizations in diverse industries have adopted MapReduce-based systems for large-scale data processing. Along with these new users, important new workloads have emerged which feature many small, short, and increasingly interactive jobs in addition to the large, long-running batch jobs for which MapReduce was originally designed. As interactive, large-scale query processing is a strength of the RDBMS community, it is important that lessons from that field be carried over and applied where possible in this new domain. However, these new workloads have not yet been described in the literature. We fill this gap with an empirical analysis of MapReduce traces from six separate business-critical deployments inside Facebook and at Cloudera customers in e-commerce, telecommunications, media, and retail. Our key contribution is a characterization of new MapReduce workloads which are driven in part by interactive analysis, and which make heavy use of querylike programming frameworks on top of MapReduce. These workloads display diverse behaviors which invalidate prior assumptions about MapReduce such as uniform data access, regular diurnal patterns, and prevalence of large jobs. A secondary contribution is a first step towards creating a TPC-like data processing benchmark for MapReduce.",
"title": ""
},
{
"docid": "neg:1840365_15",
"text": "Estimating action quality, the process of assigning a \"score\" to the execution of an action, is crucial in areas such as sports and health care. Unlike action recognition, which has millions of examples to learn from, the action quality datasets that are currently available are small-typically comprised of only a few hundred samples. This work presents three frameworks for evaluating Olympic sports which utilize spatiotemporal features learned using 3D convolutional neural networks (C3D) and perform score regression with i) SVR ii) LSTM and iii) LSTM followed by SVR. An efficient training mechanism for the limited data scenarios is presented for clip-based training with LSTM. The proposed systems show significant improvement over existing quality assessment approaches on the task of predicting scores of diving, vault, figure skating. SVR-based frameworks yield better results, LSTM-based frameworks are more natural for describing an action and can be used for improvement feedback.",
"title": ""
},
{
"docid": "neg:1840365_16",
"text": "Our hypothesis is that the video game industry, in the attempt to simulate a realistic experience, has inadvertently collected very accurate data which can be used to solve problems in the real world. In this paper we describe a novel approach to soccer match prediction that makes use of only virtual data collected from a video game(FIFA 2015). Our results were comparable and in some places better than results achieved by predictors that used real data. We also use the data provided for each player and the players present in the squad, to analyze the team strategy. Based on our analysis, we were able to suggest better strategies for weak teams",
"title": ""
},
{
"docid": "neg:1840365_17",
"text": "The ability to recognize facial expressions automatically enables novel applications in human-computer interaction and other areas. Consequently, there has been active research in this field, with several recent works utilizing Convolutional Neural Networks (CNNs) for feature extraction and inference. These works differ significantly in terms of CNN architectures and other factors. Based on the reported results alone, the performance impact of these factors is unclear. In this paper, we review the state of the art in image-based facial expression recognition using CNNs and highlight algorithmic differences and their performance impact. On this basis, we identify existing bottlenecks and consequently directions for advancing this research field. Furthermore, we demonstrate that overcoming one of these bottlenecks – the comparatively basic architectures of the CNNs utilized in this field – leads to a substantial performance increase. By forming an ensemble of modern deep CNNs, we obtain a FER2013 test accuracy of 75.2%, outperforming previous works without requiring auxiliary training data or face registration.",
"title": ""
},
{
"docid": "neg:1840365_18",
"text": "Multivariate data sets including hundreds of variables are increasingly common in many application areas. Most multivariate visualization techniques are unable to display such data effectively, and a common approach is to employ dimensionality reduction prior to visualization. Most existing dimensionality reduction systems focus on preserving one or a few significant structures in data. For many analysis tasks, however, several types of structures can be of high significance and the importance of a certain structure compared to the importance of another is often task-dependent. This paper introduces a system for dimensionality reduction by combining user-defined quality metrics using weight functions to preserve as many important structures as possible. The system aims at effective visualization and exploration of structures within large multivariate data sets and provides enhancement of diverse structures by supplying a range of automatic variable orderings. Furthermore it enables a quality-guided reduction of variables through an interactive display facilitating investigation of trade-offs between loss of structure and the number of variables to keep. The generality and interactivity of the system is demonstrated through a case scenario.",
"title": ""
}
] |
1840366 | Measuring the Effect of Conversational Aspects on Machine Translation Quality | [
{
"docid": "pos:1840366_0",
"text": "Conversational participants tend to immediately and unconsciously adapt to each other’s language styles: a speaker will even adjust the number of articles and other function words in their next utterance in response to the number in their partner’s immediately preceding utterance. This striking level of coordination is thought to have arisen as a way to achieve social goals, such as gaining approval or emphasizing difference in status. But has the adaptation mechanism become so deeply embedded in the language-generation process as to become a reflex? We argue that fictional dialogs offer a way to study this question, since authors create the conversations but don’t receive the social benefits (rather, the imagined characters do). Indeed, we find significant coordination across many families of function words in our large movie-script corpus. We also report suggestive preliminary findings on the effects of gender and other features; e.g., surprisingly, for articles, on average, characters adapt more to females than to males.",
"title": ""
},
{
"docid": "pos:1840366_1",
"text": "This paper describes Champollion, a lexicon-based sentence aligner designed for robust alignment of potential noisy parallel text. Champollion increases the robustness of the alignment by assigning greater weights to less frequent translated words. Experiments on a manually aligned Chinese – English parallel corpus show that Champollion achieves high precision and recall on noisy data. Champollion can be easily ported to new language pairs. It’s freely available to the public.",
"title": ""
}
] | [
{
"docid": "neg:1840366_0",
"text": "High torque density and low torque ripple are crucial for traction applications, which allow electrified powertrains to perform properly during start-up, acceleration, and cruising. High-quality anisotropic magnetic materials such as cold-rolled grain-oriented electrical steels can be used for achieving higher efficiency, torque density, and compactness in synchronous reluctance motors equipped with transverse laminated rotors. However, the rotor cylindrical geometry makes utilization of these materials with pole numbers higher than two more difficult. From a reduced torque ripple viewpoint, particular attention to the rotor slot pitch angle design can lead to improvements. This paper presents an innovative rotor lamination design and assembly using cold-rolled grain-oriented electrical steel to achieve higher torque density along with an algorithm for rotor slot pitch angle design for reduced torque ripple. The design methods and prototyping process are discussed, finite-element analyses and experimental examinations are carried out, and the results are compared to verify and validate the proposed methods.",
"title": ""
},
{
"docid": "neg:1840366_1",
"text": "The BioTac® is a biomimetic tactile sensor for grip control and object characterization. It has three sensing modalities: thermal flux, microvibration and force. In this paper, we discuss feature extraction and interpretation of the force modality data. The data produced by this force sensing modality during sensor-object interaction are monotonic but non-linear. Algorithms and machine learning techniques were developed and validated for extracting the radius of curvature (ROC), point of application of force (PAF) and force vector (FV). These features have varying degrees of usefulness in extracting object properties using only cutaneous information; most robots can also provide the equivalent of proprioceptive sensing. For example, PAF and ROC is useful for extracting contact points for grasp and object shape as the finger depresses and moves along an object; magnitude of FV is useful in evaluating compliance from reaction forces when a finger is pushed into an object at a given velocity while direction is important for maintaining stable grip.",
"title": ""
},
{
"docid": "neg:1840366_2",
"text": "Data Aggregation is an important topic and a suitable technique in reducing the energy consumption of sensors nodes in wireless sensor networks (WSN’s) for affording secure and efficient big data aggregation. The wireless sensor networks have been broadly applied, such as target tracking and environment remote monitoring. However, data can be easily compromised by a vast of attacks, such as data interception and tampering of data. Data integrity protection is proposed, gives an identity-based aggregate signature scheme for wireless sensor networks with a designated verifier. The aggregate signature scheme keeps data integrity, can reduce bandwidth and storage cost. Furthermore, the security of the scheme is effectively presented based on the computation of Diffie-Hellman random oracle model.",
"title": ""
},
{
"docid": "neg:1840366_3",
"text": "Description: The field of data mining lies at the confluence of predictive analytics, statistical analysis, and business intelligence. Due to the ever–increasing complexity and size of data sets and the wide range of applications in computer science, business, and health care, the process of discovering knowledge in data is more relevant than ever before. This book provides the tools needed to thrive in today s big data world. The author demonstrates how to leverage a company s existing databases to increase profits and market share, and carefully explains the most current data science methods and techniques. The reader will learn data mining by doing data mining . By adding chapters on data modelling preparation, imputation of missing data, and multivariate statistical analysis, Discovering Knowledge in Data, Second Edition remains the eminent reference on data mining .",
"title": ""
},
{
"docid": "neg:1840366_4",
"text": "In this paper we argue for a broader view of ontology patterns and therefore present different use-cases where drawbacks of the current declarative pattern languages can be seen. We also discuss usecases where a declarative pattern approach can replace procedural-coded ontology patterns. With previous work on an ontology pattern language in mind we argue for a general pattern language.",
"title": ""
},
{
"docid": "neg:1840366_5",
"text": "This paper presents a learning and scoring framework based on neural networks for speaker verification. The framework employs an autoencoder as its primary structure while three factors are jointly considered in the objective function for speaker discrimination. The first one, relating to the sample reconstruction error, makes the structure essentially a generative model, which benefits to learn most salient and useful properties of the data. Functioning in the middlemost hidden layer, the other two attempt to ensure that utterances spoken by the same speaker are mapped into similar identity codes in the speaker discriminative subspace, where the dispersion of all identity codes are maximized to some extent so as to avoid the effect of over-concentration. Finally, the decision score of each utterance pair is simply computed by cosine similarity of their identity codes. Dealing with utterances represented by i-vectors, the results of experiments conducted on the male portion of the core task in the NIST 2010 Speaker Recognition Evaluation (SRE) significantly demonstrate the merits of our approach over the conventional PLDA method.",
"title": ""
},
{
"docid": "neg:1840366_6",
"text": "The value of depth-first search or \"bacltracking\" as a technique for solving problems is illustrated by two examples. An improved version of an algorithm for finding the strongly connected components of a directed graph and ar algorithm for finding the biconnected components of an undirect graph are presented. The space and time requirements of both algorithms are bounded by k1V + k2E dk for some constants kl, k2, and ka, where Vis the number of vertices and E is the number of edges of the graph being examined.",
"title": ""
},
{
"docid": "neg:1840366_7",
"text": "In this paper we present a novel system for sketching the motion of a character. The process begins by sketching a character to be animated. An animated motion is then created for the character by drawing a continuous sequence of lines, arcs, and loops. These are parsed and mapped to a parameterized set of output motions that further reflect the location and timing of the input sketch. The current system supports a repertoire of 18 different types of motions in 2D and a subset of these in 3D. The system is unique in its use of a cursive motion specification, its ability to allow for fast experimentation, and its ease of use for non-experts.",
"title": ""
},
{
"docid": "neg:1840366_8",
"text": "R. Cropanzano, D. E. Rupp, and Z. S. Byrne (2003) found that emotional exhaustion (i.e., 1 dimension of burnout) negatively affects organizational citizenship behavior (OCB). The authors extended this research by investigating relationships among 3 dimensions of burnout (emotional exhaustion, depersonalization, and diminished personal accomplishment) and OCB. They also affirmed the mediating effect of job involvement on these relationships. Data were collected from 296 paired samples of service employees and their supervisors from 12 hotels and restaurants in Taiwan. Findings demonstrated that emotional exhaustion and diminished personal accomplishment were related negatively to OCB, whereas depersonalization had no independent effect on OCB. Job involvement mediated the relationships among emotional exhaustion, diminished personal accomplishment, and OCB.",
"title": ""
},
{
"docid": "neg:1840366_9",
"text": "This paper describes a small-size buck-type dc–dc converter for cellular phones. Output power MOSFETs and control circuitry are monolithically integrated. The newly developed pulse frequency modulation control integrated circuit, mounted on a planar inductor within the converter package, has a low quiescent current below 10 μA and a small chip size of 1.4 mm × 1.1 mm in a 0.35-μm CMOS process. The converter achieves a maximum efficiency of 90% and a power density above 100 W/cm<formula formulatype=\"inline\"> <tex Notation=\"TeX\">$^3$</tex></formula>.",
"title": ""
},
{
"docid": "neg:1840366_10",
"text": "In recent years, distributed intelligent microelectromechanical systems (DiMEMSs) have appeared as a new form of distributed embedded systems. DiMEMSs contain thousands or millions of removable autonomous devices, which will collaborate with each other to achieve the final target of the whole system. Programming such systems is becoming an extremely difficult problem. The difficulty is due not only to their inherent nature of distributed collaboration, mobility, large scale, and limited resources of their devices (e.g., in terms of energy, memory, communication, and computation) but also to the requirements of real-time control and tolerance for uncertainties such as inaccurate actuation and unreliable communications. As a result, existing programming languages for traditional distributed and embedded systems are not suitable for DiMEMSs. In this article, we first introduce the origin and characteristics of DiMEMSs and then survey typical implementations of DiMEMSs and related research hotspots. Finally, we propose a real-time programming framework that can be used to design new real-time programming languages for DiMEMSs. The framework is composed of three layers: a real-time programming model layer, a compilation layer, and a runtime system layer. The design challenges and requirements of these layers are investigated. The framework is then discussed in further detail and suggestions for future research are given.",
"title": ""
},
{
"docid": "neg:1840366_11",
"text": "Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model. While recent work has proposed a number of attacks and defenses, little is understood about the worst-case loss of a defense in the face of a determined attacker. We address this by constructing approximate upper bounds on the loss across a broad family of attacks, for defenders that first perform outlier removal followed by empirical risk minimization. Our approximation relies on two assumptions: (1) that the dataset is large enough for statistical concentration between train and test error to hold, and (2) that outliers within the clean (nonpoisoned) data do not have a strong effect on the model. Our bound comes paired with a candidate attack that often nearly matches the upper bound, giving us a powerful tool for quickly assessing defenses on a given dataset. Empirically, we find that even under a simple defense, the MNIST-1-7 and Dogfish datasets are resilient to attack, while in contrast the IMDB sentiment dataset can be driven from 12% to 23% test error by adding only 3% poisoned data.",
"title": ""
},
{
"docid": "neg:1840366_12",
"text": "Computational speech reconstruction algorithms have the ultimate aim of returning natural sounding speech to aphonic and dysphonic patients as well as those who can only whisper. In particular, individuals who have lost glottis function due to disease or surgery, retain the power of vocal tract modulation to some degree but they are unable to speak anything more than hoarse whispers without prosthetic aid. While whispering can be seen as a natural and secondary aspect of speech communications for most people, it becomes the primary mechanism of communications for those who have impaired voice production mechanisms, such as laryngectomees. In this paper, by considering the current limitations of speech reconstruction methods, a novel algorithm for converting whispers to normal speech is proposed and the efficiency of the algorithm is explored. The algorithm relies upon cascading mapping models and makes use of artificially generated whispers (called whisperised speech) to regenerate natural phonated speech from whispers. Using a training-based approach, the mapping models exploit whisperised speech to overcome frame to frame time alignment problems that are inherent in the speech reconstruction process. This algorithm effectively regenerates missing information in the conventional frameworks of phonated speech reconstruction, ∗Corresponding author Email address: hsharifzadeh@unitec.ac.nz (Hamid R. Sharifzadeh) Preprint submitted to Journal of Computers & Electrical Engineering February 15, 2016 and is able to outperform the current state-of-the-art regeneration methods using both subjective and objective criteria.",
"title": ""
},
{
"docid": "neg:1840366_13",
"text": "We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only \"virtually\" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.",
"title": ""
},
{
"docid": "neg:1840366_14",
"text": "Data uncertainty is common in real-world applications due to various causes, including imprecise measurement, network latency, outdated sources and sampling errors. These kinds of uncertainty have to be handled cautiously, or else the mining results could be unreliable or even wrong. In this paper, we propose a new rule-based classification and prediction algorithm called uRule for classifying uncertain data. This algorithm introduces new measures for generating, pruning and optimizing rules. These new measures are computed considering uncertain data interval and probability distribution function. Based on the new measures, the optimal splitting attribute and splitting value can be identified and used for classification and prediction. The proposed uRule algorithm can process uncertainty in both numerical and categorical data. Our experimental results show that uRule has excellent performance even when data is highly uncertain.",
"title": ""
},
{
"docid": "neg:1840366_15",
"text": "For worst case parameter mismatch, modest levels of unbalance are predicted through the use of minimum gate decoupling, dynamic load lines with high Q values, common source inductance or high yield screening. Each technique is evaluated in terms of current unbalance, transition energy, peak turn-off voltage and parasitic oscillations, as appropriate, for various pulse duty cycles and frequency ranges.",
"title": ""
},
{
"docid": "neg:1840366_16",
"text": "The paper describes a computerized process of myocardial perfusion diagnosis from cardiac single proton emission computed tomography (SPECT) images using data mining and knowledge discovery approach. We use a six-step knowledge discovery process. A database consisting of 267 cleaned patient SPECT images (about 3000 2D images), accompanied by clinical information and physician interpretation was created first. Then, a new user-friendly algorithm for computerizing the diagnostic process was designed and implemented. SPECT images were processed to extract a set of features, and then explicit rules were generated, using inductive machine learning and heuristic approaches to mimic cardiologist's diagnosis. The system is able to provide a set of computer diagnoses for cardiac SPECT studies, and can be used as a diagnostic tool by a cardiologist. The achieved results are encouraging because of the high correctness of diagnoses.",
"title": ""
},
{
"docid": "neg:1840366_17",
"text": "The underwater images usually suffers from non-uniform lighting, low contrast, blur and diminished colors. In this paper, we proposed an image based preprocessing technique to enhance the quality of the underwater images. The proposed technique comprises a combination of four filters such as homomorphic filtering, wavelet denoising, bilateral filter and contrast equalization. These filters are applied sequentially on degraded underwater images. The literature survey reveals that image based preprocessing algorithms uses standard filter techniques with various combinations. For smoothing the image, the image based preprocessing algorithms uses the anisotropic filter. The main drawback of the anisotropic filter is that iterative in nature and computation time is high compared to bilateral filter. In the proposed technique, in addition to other three filters, we employ a bilateral filter for smoothing the image. The experimentation is carried out in two stages. In the first stage, we have conducted various experiments on captured images and estimated optimal parameters for bilateral filter. Similarly, optimal filter bank and optimal wavelet shrinkage function are estimated for wavelet denoising. In the second stage, we conducted the experiments using estimated optimal parameters, optimal filter bank and optimal wavelet shrinkage function for evaluating the proposed technique. We evaluated the technique using quantitative based criteria such as a gradient magnitude histogram and Peak Signal to Noise Ratio (PSNR). Further, the results are qualitatively evaluated based on edge detection results. The proposed technique enhances the quality of the underwater images and can be employed prior to apply computer vision techniques.",
"title": ""
},
{
"docid": "neg:1840366_18",
"text": "A Reverse Conducting IGBT (RC-IGBT) is a promising device to reduce a size and cost of the power module thanks to the integration of IGBT and FWD into a single chip. However, it is difficult to achieve well-balanced performance between IGBT and FWD. Indeed, the total inverter loss of the conventional RC-IGBT was not so small as the individual IGBT and FWD pair. To minimize the loss, the most important key is the improvement of reverse recovery characteristics of FWD. We carefully extracted five effective parameters to improve the FWD characteristics, and investigated the impact of these parameters by using simulation and experiments. Finally, optimizing these parameters, we succeeded in fabricating the second-generation 600V class RC-IGBT with a smaller FWD loss than the first-generation RC-IGBT.",
"title": ""
},
{
"docid": "neg:1840366_19",
"text": "In connection with a study of various aspects of the modifiability of behavior in the dancing mouse a need for definite knowledge concerning the relation of strength of stimulus to rate of learning arose. It was for the purpose of obtaining this knowledge that we planned and executed the experiments which are now to be described. Our work was greatly facilitated by the advice and assistance of Doctor E. G. MARTIN, Professor G. W. PIERCE, and Professor A. E. KENNELLY, and we desire to express here both our indebtedness and our thanks for their generous services.",
"title": ""
}
] |
1840367 | UTOPIAN: User-Driven Topic Modeling Based on Interactive Nonnegative Matrix Factorization | [
{
"docid": "pos:1840367_0",
"text": "Topic models are a useful and ubiquitous tool for understanding large corpora. However, topic models are not perfect, and for many users in computational social science, digital humanities, and information studies—who are not machine learning experts—existing models and frameworks are often a “take it or leave it” proposition. This paper presents a mechanism for giving users a voice by encoding users’ feedback to topic models as correlations between words into a topic model. This framework, interactive topic modeling (itm), allows untrained users to encode their feedback easily and iteratively into the topic models. Because latency in interactive systems is crucial, we develop more efficient inference algorithms for tree-based topic models. We validate the framework both with simulated and real users.",
"title": ""
},
{
"docid": "pos:1840367_1",
"text": "Significant effort has been devoted to designing clustering algorithms that are responsive to user feedback or that incorporate prior domain knowledge in the form of constraints. However, users desire more expressive forms of interaction to influence clustering outcomes. In our experiences working with diverse application scientists, we have identified an interaction style scatter/gather clustering that helps users iteratively restructure clustering results to meet their expectations. As the names indicate, scatter and gather are dual primitives that describe whether clusters in a current segmentation should be broken up further or, alternatively, brought back together. By combining scatter and gather operations in a single step, we support very expressive dynamic restructurings of data. Scatter/gather clustering is implemented using a nonlinear optimization framework that achieves both locality of clusters and satisfaction of user-supplied constraints. We illustrate the use of our scatter/gather clustering approach in a visual analytic application to study baffle shapes in the bat biosonar (ears and nose) system. We demonstrate how domain experts are adept at supplying scatter/gather constraints, and how our framework incorporates these constraints effectively without requiring numerous instance-level constraints.",
"title": ""
}
] | [
{
"docid": "neg:1840367_0",
"text": "In order to push the performance on realistic computer vision tasks, the number of classes in modern benchmark datasets has significantly increased in recent years. This increase in the number of classes comes along with increased ambiguity between the class labels, raising the question if top-1 error is the right performance measure. In this paper, we provide an extensive comparison and evaluation of established multiclass methods comparing their top-k performance both from a practical as well as from a theoretical perspective. Moreover, we introduce novel top-k loss functions as modifications of the softmax and the multiclass SVM losses and provide efficient optimization schemes for them. In the experiments, we compare on various datasets all of the proposed and established methods for top-k error optimization. An interesting insight of this paper is that the softmax loss yields competitive top-k performance for all k simultaneously. For a specific top-k error, our new top-k losses lead typically to further improvements while being faster to train than the softmax.",
"title": ""
},
{
"docid": "neg:1840367_1",
"text": "Most social media commentary in the Arabic language space is made using unstructured non-grammatical slang Arabic language, presenting complex challenges for sentiment analysis and opinion extraction of online commentary and micro blogging data in this important domain. This paper provides a comprehensive analysis of the important research works in the field of Arabic sentiment analysis. An in-depth qualitative analysis of the various features of the research works is carried out and a summary of objective findings is presented. We used smoothness analysis to evaluate the percentage error in the performance scores reported in the studies from their linearly-projected values (smoothness) which is an estimate of the influence of the different approaches used by the authors on the performance scores obtained. To solve a bounding issue with the data as it was reported, we modified existing logarithmic smoothing technique and applied it to pre-process the performance scores before the analysis. Our results from the analysis have been reported and interpreted for the various performance parameters: accuracy, precision, recall and F-score. Keywords—Arabic Sentiment Analysis; Qualitative Analysis; Quantitative Analysis; Smoothness Analysis",
"title": ""
},
{
"docid": "neg:1840367_2",
"text": "Valgus extension osteotomy (VGEO) is a salvage procedure for 'hinge abduction' in Perthes' disease. The indications for its use are pain and fixed deformity. Our study shows the clinical results at maturity of VGEO carried out in 48 children (51 hips) and the factors which influence subsequent remodelling of the hip. After a mean follow-up of ten years, total hip replacement has been carried out in four patients and arthrodesis in one. The average Iowa Hip Score in the remainder was 86 (54 to 100). Favourable remodelling of the femoral head was seen in 12 hips. This was associated with three factors at surgery; younger age (p = 0.009), the phase of reossification (p = 0.05) and an open triradiate cartilage (p = 0.0007). Our study has shown that, in the short term, VGEO relieves pain and corrects deformity; as growth proceeds it may produce useful remodelling in this worst affected subgroup of children with Perthes' disease.",
"title": ""
},
{
"docid": "neg:1840367_3",
"text": "Painful acute cysts in the natal cleft or lower back, known as pilonidal sinus disease, are a severe burden to many younger patients. Although surgical intervention is the preferred first line treatment, postsurgical wound healing disturbances are frequently reported due to infection or other complications. Different treatment options of pilonidal cysts have been discussed in the literature, however, no standardised guideline for the postsurgical wound treatment is available. After surgery, a common recommended treatment to patients is rinsing the wound with clean water and dressing with a sterile compress. We present a case series of seven patients with wounds healing by secondary intention after surgical intervention of a pilonidal cyst. The average age of the patients was 40 years old. Of the seven patients, three had developed a wound healing disturbance, one wound had started to develop a fibrin coating and three were in a good condition. The applied wound care regimens comprised appropriate mechanical or autolytic debridement, rinsing with an antimicrobial solution, haemoglobin application, and primary and secondary dressings. In all seven cases a complete wound closure was achieved within an average of 76 days with six out of seven wounds achieving wound closure within 23-98 days. Aesthetic appearance was deemed excellent in five out of seven cases excellent and acceptable in one. Treatment of one case with a sustained healing disturbance did result in wound closure but with a poor aesthetic outcome and an extensive cicatrisation of the new tissue. Based on these results we recommend that to avoid healing disturbances of wounds healing by secondary intention after surgical pilonidal cyst intervention, an adequate wound care regime comprising appropriate wound debridement, rinsing, topically applied haemoglobin and adequate wound dressing is recommendable as early as possible after surgery.",
"title": ""
},
{
"docid": "neg:1840367_4",
"text": "This paper presents AOP++, a generic aspect-oriented programming framework in C++. It successfully incorporates AOP with object-oriented programming as well as generic programming naturally in the framework of standard C++. It innovatively makes use of C++ templates to express pointcut expressions and match join points at compile time. It innovatively creates a full-fledged aspect weaver by using template metaprogramming techniques to perform aspect weaving. It is notable that AOP++ itself is written completely in standard C++, and requires no language extensions. With the help of AOP++, C++ programmers can facilitate AOP with only a little effort.",
"title": ""
},
{
"docid": "neg:1840367_5",
"text": "The bank director was pretty upset noticing Joe, the system administrator, spending his spare time playing Mastermind, an old useless game of the 70ies. He had fought the instinct of telling him how to better spend his life, just limiting to look at him in disgust long enough to be certain to be noticed. No wonder when the next day the director fell on his chair astonished while reading, on the newspaper, about a huge digital fraud on the ATMs of his bank, with millions of Euros stolen by a team of hackers all around the world. The article mentioned how the hackers had ‘played with the bank computers just like playing Mastermind’, being able to disclose thousands of user PINs during the one-hour lunch break. That precise moment, a second before falling senseless, he understood the subtle smile on Joe’s face the day before, while training at his preferred game, Mastermind.",
"title": ""
},
{
"docid": "neg:1840367_6",
"text": "OBJECTIVES\nTo present a combination of clinical and histopathological criteria for diagnosing cheilitis glandularis (CG), and to evaluate the association between CG and squamous cell carcinoma (SCC).\n\n\nMATERIALS AND METHODS\nThe medical literature in English was searched from 1950 to 2010 and selected demographic data, and clinical and histopathological features of CG were retrieved and analysed.\n\n\nRESULTS\nA total of 77 cases have been published and four new cases were added to the collective data. The clinical criteria applied included the coexistence of multiple lesions and mucoid/purulent discharge, while the histopathological criteria included two or more of the following findings: sialectasia, chronic inflammation, mucous/oncocytic metaplasia and mucin in ducts. Only 47 (58.0%) cases involving patients with a mean age of 48.5 ± 20.3 years and a male-to-female ratio of 2.9:1 fulfilled the criteria. The lower lip alone was most commonly affected (70.2%). CG was associated with SCC in only three cases (3.5%) for which there was a clear aetiological factor for the malignancy.\n\n\nCONCLUSIONS\nThe proposed diagnostic criteria can assist in delineating true CG from a variety of lesions with a comparable clinical/histopathological presentation. CG in association with premalignant/malignant epithelial changes of the lower lip may represent secondary, reactive changes of the salivary glands.",
"title": ""
},
{
"docid": "neg:1840367_7",
"text": "This paper presents a nonholonomic path planning method, aiming at taking into considerations of curvature constraint, length minimization, and computational demand, for car-like mobile robot based on cubic spirals. The generated path is made up of at most five segments: at most two maximal-curvature cubic spiral segments with zero curvature at both ends in connection with up to three straight line segments. A numerically efficient process is presented to generate a Cartesian shortest path among the family of paths considered for a given pair of start and destination configurations. Our approach is resorted to minimization via linear programming over the sum of length of each path segment of paths synthesized based on minimal locomotion cubic spirals linking start and destination orientations through a selected intermediate orientation. The potential intermediate configurations are not necessarily selected from the symmetric mean circle for non-parallel start and destination orientations. The novelty of the presented path generation method based on cubic spirals is: (i) Practical: the implementation is straightforward so that the generation of feasible paths in an environment free of obstacles is efficient in a few milliseconds; (ii) Flexible: it lends itself to various generalizations: readily applicable to mobile robots capable of forward and backward motion and Dubins’ car (i.e. car with only forward driving capability); well adapted to the incorporation of other constraints like wall-collision avoidance encountered in robot soccer games; straightforward extension to planning a path connecting an ordered sequence of target configurations in simple obstructed environment. © 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840367_8",
"text": "Loneliness and depression are associated, in particular in older adults. Less is known about the role of social networks in this relationship. The present study analyzes the influence of social networks in the relationship between loneliness and depression in the older adult population in Spain. A population-representative sample of 3535 adults aged 50 years and over from Spain was analyzed. Loneliness was assessed by means of the three-item UCLA Loneliness Scale. Social network characteristics were measured using the Berkman–Syme Social Network Index. Major depression in the previous 12 months was assessed with the Composite International Diagnostic Interview (CIDI). Logistic regression models were used to analyze the survey data. Feelings of loneliness were more prevalent in women, those who were younger (50–65), single, separated, divorced or widowed, living in a rural setting, with a lower frequency of social interactions and smaller social network, and with major depression. Among people feeling lonely, those with depression were more frequently married and had a small social network. Among those not feeling lonely, depression was associated with being previously married. In depressed people, feelings of loneliness were associated with having a small social network; while among those without depression, feelings of loneliness were associated with being married. The type and size of social networks have a role in the relationship between loneliness and depression. Increasing social interaction may be more beneficial than strategies based on improving maladaptive social cognition in loneliness to reduce the prevalence of depression among Spanish older adults.",
"title": ""
},
{
"docid": "neg:1840367_9",
"text": "Mori (1970) proposed a hypothetical graph describing a nonlinear relation between a character’s degree of human likeness and the emotional response of the human perceiver. However, the index construction of these variables could result in their strong correlation, thus preventing rated characters from being plotted accurately. Phase 1 of this study tested the indices of the Godspeed questionnaire as measures of humanlike characters. The results indicate significant and strong correlations among the relevant indices (Bartneck, Kulić, Croft, & Zoghbi, 2009). Phase 2 of this study developed alternative indices with nonsignificant correlations (p > .05) between the proposed y-axis eeriness and x-axis perceived humanness (r = .02). The new humanness and eeriness indices facilitate plotting relations among rated characters of varying human likeness. 2010 Elsevier Ltd. All rights reserved. 1. Plotting emotional responses to humanlike characters Mori (1970) proposed a hypothetical graph describing a nonlinear relation between a character’s degree of human likeness and the emotional response of the human perceiver (Fig. 1). The graph predicts that more human-looking characters will be perceived as more agreeable up to a point at which they become so human people find their nonhuman imperfections unsettling (MacDorman, Green, Ho, & Koch, 2009; MacDorman & Ishiguro, 2006; Mori, 1970). This dip in appraisal marks the start of the uncanny valley (bukimi no tani in Japanese). As characters near complete human likeness, they rise out of the valley, and people once again feel at ease with them. In essence, a character’s imperfections expose a mismatch between the human qualities that are expected and the nonhuman qualities that instead follow, or vice versa. As an example of things that lie in the uncanny valley, Mori (1970) cites corpses, zombies, mannequins coming to life, and lifelike prosthetic hands. Assuming the uncanny valley exists, what dependent variable is appropriate to represent Mori’s graph? Mori referred to the y-axis as shinwakan, a neologism even in Japanese, which has been variously translated as familiarity, rapport, and comfort level. Bartneck, Kanda, Ishiguro, and Hagita (2009) have proposed using likeability to represent shinwakan, and they applied a likeability index to the evaluation of interactions with Ishiguro’s android double, the Geminoid HI-1. Likeability is virtually synonymous with interpersonal warmth (Asch, 1946; Fiske, Cuddy, & Glick, 2007; Rosenberg, Nelson, & Vivekananthan, 1968), which is also strongly correlated with other important measures, such as comfortability, communality, sociability, and positive (vs. negative) affect (Abele & Wojciszke, 2007; MacDorman, Ough, & Ho, 2007; Mehrabian & Russell, 1974; Sproull, Subramani, Kiesler, Walker, & Waters, 1996; Wojciszke, Abele, & Baryla, 2009). Warmth is the primary dimension of human social perception, accounting for 53% of the variance in perceptions of everyday social behaviors (Fiske, Cuddy, Glick, & Xu, 2002; Fiske et al., 2007; Wojciszke, Bazinska, & Jaworski, 1998). Despite the importance of warmth, this concept misses the essence of the uncanny valley. Mori (1970) refers to negative shinwakan as bukimi, which translates as eeriness. However, eeriness is not the negative anchor of warmth. A person can be cold and disagreeable without being eerie—at least not eerie in the way that an artificial human being is eerie. 
In addition, the set of negative emotions that predict eeriness (e.g., fear, anxiety, and disgust) are more specific than coldness (Ho, MacDorman, & Pramono, 2008). Thus, shinwakan and bukimi appear to constitute distinct dimensions. Although much has been written on potential benchmarks for anthropomorphic robots (for reviews see Kahn et al., 2007; MacDorman & Cowley, 2006; MacDorman & Kahn, 2007), no indices have been developed and empirically validated for measuring shinwakan or related concepts across a range of humanlike stimuli, such as computer-animated human characters and humanoid robots. The Godspeed questionnaire, compiled by Bartneck, Kulić, Croft, and Zoghbi (2009), includes at least two concepts, anthropomorphism and likeability, that could potentially serve as the x- and y-axes of Mori's graph (Bartneck, Kanda, et al., 2009).",
"title": ""
},
{
"docid": "neg:1840367_10",
"text": "CD44 is a cell surface adhesion receptor that is highly expressed in many cancers and regulates metastasis via recruitment of CD44 to the cell surface. Its interaction with appropriate extracellular matrix ligands promotes the migration and invasion processes involved in metastases. It was originally identified as a receptor for hyaluronan or hyaluronic acid and later to several other ligands including, osteopontin (OPN), collagens, and matrix metalloproteinases. CD44 has also been identified as a marker for stem cells of several types. Beside standard CD44 (sCD44), variant (vCD44) isoforms of CD44 have been shown to be created by alternate splicing of the mRNA in several cancer. Addition of new exons into the extracellular domain near the transmembrane of sCD44 increases the tendency for expressing larger size vCD44 isoforms. Expression of certain vCD44 isoforms was linked with progression and metastasis of cancer cells as well as patient prognosis. The expression of CD44 isoforms can be correlated with tumor subtypes and be a marker of cancer stem cells. CD44 cleavage, shedding, and elevated levels of soluble CD44 in the serum of patients is a marker of tumor burden and metastasis in several cancers including colon and gastric cancer. Recent observations have shown that CD44 intracellular domain (CD44-ICD) is related to the metastatic potential of breast cancer cells. However, the underlying mechanisms need further elucidation.",
"title": ""
},
{
"docid": "neg:1840367_11",
"text": "This study explores the stability of attachment security and representations from infancy to early adulthood in a sample chosen originally for poverty and high risk for poor developmental outcomes. Participants for this study were 57 young adults who are part of an ongoing prospective study of development and adaptation in a high-risk sample. Attachment was assessed during infancy by using the Ainsworth Strange Situation (Ainsworth & Wittig) and at age 19 by using the Berkeley Adult Attachment Interview (George, Kaplan, & Main). Possible correlates of continuity and discontinuity in attachment were drawn from assessments of the participants and their mothers over the course of the study. Results provided no evidence for significant continuity between infant and adult attachment in this sample, with many participants transitioning to insecurity. The evidence, however, indicated that there might be lawful discontinuity. Analyses of correlates of continuity and discontinuity in attachment classification from infancy to adulthood indicated that the continuous and discontinuous groups were differentiated on the basis of child maltreatment, maternal depression, and family functioning in early adolescence. These results provide evidence that although attachment has been found to be stable over time in other samples, attachment representations are vulnerable to difficult and chaotic life experiences.",
"title": ""
},
{
"docid": "neg:1840367_12",
"text": "This article is focused on examining the factors and relationships that influence the browsing and buying behavior of individuals when they shop online. Specifically, we are interested in individual buyers using business-to-consumer sites. We are also interested in examining shopping preferences based on various demographic categories that might exhibit distinct purchasing attitudes and behaviors for certain categories of products and services. We examine these behaviors in the context of both products and services. After a period of decline in recent months, online shopping is on the rise again. By some estimates, total U.S. spending on online sales increased to $5.7 billion in December 2001 from $3.2 billion in June of 2001 [3, 5]. By these same estimates, the number of households shopping online increased to 18.7 million in December 2001 from 13.1 million in June 2001. Consumers spent an average of $304 per person in December 2001, compared with $247 in June 2001. According to an analyst at Forrester: “The fact that online retail remained stable during ... such social and economic instability speaks volumes about how well eCommerce is positioned to stand up to a poor economy” [4]. What do consumers utilize the Internet for? Nie and Erbring suggest that 52% of the consumers use the Internet for product information, 42% for travel information, and 24% for buying [9]. Recent online consumer behavior-related research refers to any Internet-related activity associated with the consumption of goods, services, and information [6]. In the definition of Internet consumption, Goldsmith and Bridges include “gathering information passively via exposure to advertising; shopping, which includes both browsing and deliberate information search, and the selection and buying of specific goods, services, and information” [7]. For the purposes of this study, we focus on all aspects of this consumption. We include all of them because information gathering aspects of e-commerce serve to educate the consumer, which is ulti-",
"title": ""
},
{
"docid": "neg:1840367_13",
"text": "Context: The use of Systematic Literature Review (SLR) requires expertise and poses many challenges for novice researchers. The experiences of those who have used this research methodology can benefit novice researchers in effectively dealing with these challenges. Objective: The aim of this study is to record the reported experiences of conducting Systematic Literature Reviews, for the benefit of new researchers. Such a review will greatly benefit the researchers wanting to conduct SLR for the very first time. Method: We conducted a tertiary study to gather the experiences published by researchers. Studies that have used the SLR research methodology in software engineering and have implicitly or explicitly reported their experiences are included in this review. Results: Our research has revealed 116 studies relevant to the theme. The data has been extracted by two researchers working independently and conflicts resolved after discussion with third researcher. Findings from these studies highlight Search Strategy, Online Databases, Planning and Data Extraction as the most challenging phases of SLR. Lack of standard terminology in software engineering papers, poor quality of abstracts and problems with search engines are some of the most cited challenges. Conclusion: Further research and guidelines is required to facilitate novice researchers in conducting these phases properly.",
"title": ""
},
{
"docid": "neg:1840367_14",
"text": "Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS Forestto systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request.",
"title": ""
},
{
"docid": "neg:1840367_15",
"text": "To operate reliably in real-world traffic, an autonomous car must evaluate the consequences of its potential actions by anticipating the uncertain intentions of other traffic participants. This paper presents an integrated behavioral inference and decision-making approach that models vehicle behavior for both our vehicle and nearby vehicles as a discrete set of closedloop policies that react to the actions of other agents. Each policy captures a distinct high-level behavior and intention, such as driving along a lane or turning at an intersection. We first employ Bayesian changepoint detection on the observed history of states of nearby cars to estimate the distribution over potential policies that each nearby car might be executing. We then sample policies from these distributions to obtain high-likelihood actions for each participating vehicle. Through closed-loop forward simulation of these samples, we can evaluate the outcomes of the interaction of our vehicle with other participants (e.g., a merging vehicle accelerates and we slow down to make room for it, or the vehicle in front of ours suddenly slows down and we decide to pass it). Based on those samples, our vehicle then executes the policy with the maximum expected reward value. Thus, our system is able to make decisions based on coupled interactions between cars in a tractable manner. This work extends our previous multipolicy system [11] by incorporating behavioral anticipation into decision-making to evaluate sampled potential vehicle interactions. We evaluate our approach using real-world traffic-tracking data from our autonomous vehicle platform, and present decision-making results in simulation involving highway traffic scenarios.",
"title": ""
},
{
"docid": "neg:1840367_16",
"text": "We present the MSP-IMPROV corpus, a multimodal emotional database, where the goal is to have control over lexical content and emotion while also promoting naturalness in the recordings. Studies on emotion perception often require stimuli with fixed lexical content, but that convey different emotions. These stimuli can also serve as an instrument to understand how emotion modulates speech at the phoneme level, in a manner that controls for coarticulation. Such audiovisual data are not easily available from natural recordings. A common solution is to record actors reading sentences that portray different emotions, which may not produce natural behaviors. We propose an alternative approach in which we define hypothetical scenarios for each sentence that are carefully designed to elicit a particular emotion. Two actors improvise these emotion-specific situations, leading them to utter contextualized, non-read renditions of sentences that have fixed lexical content and convey different emotions. We describe the context in which this corpus was recorded, the key features of the corpus, the areas in which this corpus can be useful, and the emotional content of the recordings. The paper also provides the performance for speech and facial emotion classifiers. The analysis brings novel classification evaluations where we study the performance in terms of inter-evaluator agreement and naturalness perception, leveraging the large size of the audiovisual database.",
"title": ""
},
{
"docid": "neg:1840367_17",
"text": "Once thought of as a technology restricted primarily to the scientific community, High-performance Computing (HPC) has now been established as an important value creation tool for the enterprises. Predominantly, the enterprise HPC is fueled by the needs for high-performance data analytics (HPDA) and large-scale machine learning – trades instrumental to business growth in today’s competitive markets. Cloud computing, characterized by the paradigm of on-demand network access to computational resources, has great potential of bringing HPC capabilities to a broader audience. Clouds employing traditional lossy network technologies, however, at large, have not proved to be sufficient for HPC applications. Both the traditional HPC workloads and HPDA require high predictability, large bandwidths, and low latencies, features which combined are not readily available using best-effort cloud networks. On the other hand, lossless interconnection networks commonly deployed in HPC systems, lack the flexibility needed for dynamic cloud environments. In this thesis, we identify and address research challenges that hinder the realization of an efficient HPC cloud computing platform, utilizing the InfiniBand interconnect as a demonstration technology. In particular, we address challenges related to efficient routing, load-balancing, low-overhead virtualization, performance isolation, and fast network reconfiguration, all to improve the utilization and flexibility of the underlying interconnect of an HPC cloud. In addition, we provide a framework to realize a self-adaptive network architecture for HPC clouds, offering dynamic and autonomic adaptation of the underlying interconnect according to varying traffic patterns, resource availability, workload distribution, and also in accordance with service provider defined policies. The work presented in this thesis helps bridging the performance gap between the cloud and traditional HPC infrastructures; the thesis provides practical solutions to enable an efficient, flexible, multi-tenant HPC network suitable for high-performance cloud computing.",
"title": ""
},
{
"docid": "neg:1840367_18",
"text": "Images now come in different forms – color, near-infrared, depth, etc. – due to the development of special and powerful cameras in computer vision and computational photography. Their cross-modal correspondence establishment is however left behind. We address this challenging dense matching problem considering structure variation possibly existing in these image sets and introduce new model and solution. Our main contribution includes designing the descriptor named robust selective normalized cross correlation (RSNCC) to establish dense pixel correspondence in input images and proposing its mathematical parameterization to make optimization tractable. A computationally robust framework including global and local matching phases is also established. We build a multi-modal dataset including natural images with labeled sparse correspondence. Our method will benefit image and vision applications that require accurate image alignment.",
"title": ""
},
{
"docid": "neg:1840367_19",
"text": "Recent research in recommender systems has shown that collaborative filtering algorithms are highly susceptible to attacks that insert biased profile data. Theoretical analyses and empirical experiments have shown that certain attacks can have a significant impact on the recommendations a system provides. These analyses have generally not taken into account the cost of mounting an attack or the degree of prerequisite knowledge for doing so. For example, effective attacks often require knowledge about the distribution of user ratings: the more such knowledge is required, the more expensive the attack to be mounted. In our research, we are examining a variety of attack models, aiming to establish the likely practical risks to collaborative systems. In this paper, we examine user-based collaborative filtering and some attack models that are successful against it, including a limited knowledge \"bandwagon\" attack that requires only that the attacker identify a small number of very popular items and a user-focused \"favorite item\" attack that is also effective against item-based algorithms.",
"title": ""
}
] |
1840368 | Fuzzy Filter Design for Nonlinear Systems in Finite-Frequency Domain | [
{
"docid": "pos:1840368_0",
"text": "This paper discusses a design of stable filters withH∞ disturbance attenuation of Takagi–Sugeno fuzzy systemswith immeasurable premise variables. When we consider the filter design of Takagi–Sugeno fuzzy systems, the selection of premise variables plays an important role. If the premise variable is the state of the system, then a fuzzy system describes a wide class of nonlinear systems. In this case, however, a filter design of fuzzy systems based on parallel distributed compensator idea is infeasible. To avoid such a difficulty, we consider the premise variables uncertainties. Then we consider a robust H∞ filtering problem for such an uncertain system. A solution of the problem is given in terms of linear matrix inequalities (LMIs). Some numerical examples are given to illustrate our theory. © 2008 Elsevier B.V. All rights reserved.",
"title": ""
}
] | [
{
"docid": "neg:1840368_0",
"text": "In order to use a synchronous dynamic RAM (SDRAM) as the off-chip memory of an H.264/AVC encoder, this paper proposes an efficient SDRAM memory controller with an asynchronous bridge. With the proposed architecture, the SDRAM bandwidth is increased by making the operation frequency of an external SDRAM higher than that of the hardware accelerators of an H.264/AVC encoder. Experimental results show that the encoding speed is increased by 30.5% when the SDRAM clock frequency is increased from 100 MHz to 200 MHz while the H.264/AVC hardware accelerators operate at 100 MHz.",
"title": ""
},
{
"docid": "neg:1840368_1",
"text": "The study was to compare treatment preference, efficacy, and tolerability of sildenafil citrate (sildenafil) and tadalafil for treating erectile dysfunction (ED) in Chinese men naοve to phosphodiesterase 5 (PDE5) inhibitor therapies. This multicenter, randomized, open-label, crossover study evaluated whether Chinese men with ED preferred 20-mg tadalafil or 100-mg sildenafil. After a 4 weeks baseline assessment, 383 eligible patients were randomized to sequential 20-mg tadalafil per 100-mg sildenafil or vice versa for 8 weeks respectively and then chose which treatment they preferred to take during the 8 weeks extension. Primary efficacy was measured by Question 1 of the PDE5 Inhibitor Treatment Preference Questionnaire (PITPQ). Secondary efficacy was analyzed by PITPQ Question 2, the International Index of Erectile Function (IIEF) erectile function (EF) domain, sexual encounter profile (SEP) Questions 2 and 3, and the Drug Attributes Questionnaire. Three hundred and fifty men (91%) completed the randomized treatment phase. Two hundred and forty-two per 350 (69.1%) patients preferred 20-mg tadalafil, and 108/350 (30.9%) preferred 100-mg sildenafil (P < 0.001) as their treatment in the 8 weeks extension. Ninety-two per 242 (38%) patients strongly preferred tadalafil and 37/108 (34.3%) strongly the preferred sildenafil. The SEP2 (penetration), SEP3 (successful intercourse), and IIEF-EF domain scores were improved in both tadalafil and sildenafil treatment groups. For patients who preferred tadalafil, getting an erection long after taking the medication was the most reported reason for tadalafil preference. The only treatment-emergent adverse event reported by > 2% of men was headache. After tadalafil and sildenafil treatments, more Chinese men with ED naοve to PDE5 inhibitor preferred tadalafil. Both sildenafil and tadalafil treatments were effective and safe.",
"title": ""
},
{
"docid": "neg:1840368_2",
"text": "Chitin and its deacetylated derivative chitosan are natural polymers composed of randomly distributed -(1-4)linked D-glucosamine (deacetylated unit) and N-acetyl-D-glucosamine (acetylated unit). Chitin is insoluble in aqueous media while chitosan is soluble in acidic conditions due to the free protonable amino groups present in the D-glucosamine units. Due to their natural origin, both chitin and chitosan can not be defined as a unique chemical structure but as a family of polymers which present a high variability in their chemical and physical properties. This variability is related not only to the origin of the samples but also to their method of preparation. Chitin and chitosan are used in fields as different as food, biomedicine and agriculture, among others. The success of chitin and chitosan in each of these specific applications is directly related to deep research into their physicochemical properties. In recent years, several reviews covering different aspects of the applications of chitin and chitosan have been published. However, these reviews have not taken into account the key role of the physicochemical properties of chitin and chitosan in their possible applications. The aim of this review is to highlight the relationship between the physicochemical properties of the polymers and their behaviour. A functional characterization of chitin and chitosan regarding some biological properties and some specific applications (drug delivery, tissue engineering, functional food, food preservative, biocatalyst immobilization, wastewater treatment, molecular imprinting and metal nanocomposites) is presented. The molecular mechanism of the biological properties such as biocompatibility, mucoadhesion, permeation enhancing effect, anticholesterolemic, and antimicrobial has been up-",
"title": ""
},
{
"docid": "neg:1840368_3",
"text": "In this paper we report exploratory analyses of high-density oligonucleotide array data from the Affymetrix GeneChip system with the objective of improving upon currently used measures of gene expression. Our analyses make use of three data sets: a small experimental study consisting of five MGU74A mouse GeneChip arrays, part of the data from an extensive spike-in study conducted by Gene Logic and Wyeth's Genetics Institute involving 95 HG-U95A human GeneChip arrays; and part of a dilution study conducted by Gene Logic involving 75 HG-U95A GeneChip arrays. We display some familiar features of the perfect match and mismatch probe (PM and MM) values of these data, and examine the variance-mean relationship with probe-level data from probes believed to be defective, and so delivering noise only. We explain why we need to normalize the arrays to one another using probe level intensities. We then examine the behavior of the PM and MM using spike-in data and assess three commonly used summary measures: Affymetrix's (i) average difference (AvDiff) and (ii) MAS 5.0 signal, and (iii) the Li and Wong multiplicative model-based expression index (MBEI). The exploratory data analyses of the probe level data motivate a new summary measure that is a robust multi-array average (RMA) of background-adjusted, normalized, and log-transformed PM values. We evaluate the four expression summary measures using the dilution study data, assessing their behavior in terms of bias, variance and (for MBEI and RMA) model fit. Finally, we evaluate the algorithms in terms of their ability to detect known levels of differential expression using the spike-in data. We conclude that there is no obvious downside to using RMA and attaching a standard error (SE) to this quantity using a linear model which removes probe-specific affinities.",
"title": ""
},
{
"docid": "neg:1840368_4",
"text": "A new impedance-based stability criterion was proposed for a grid-tied inverter system based on a Norton equivalent circuit of the inverter [18]. As an extension of the work in [18], this paper shows that using a Thévenin representation of the inverter can lead to the same criterion in [18]. Further, this paper shows that the criterion proposed by Middlebrook can still be used for the inverter systems. The link between the criterion in [18] and the original criterion is the inverse Nyquist stability criterion. The criterion in [18] is easier to be used. Because the current feedback controller and the phase-locked loop of the inverter introduce poles at the origin and right-half plane to the output impedance of the inverter. These poles do not appear in the minor loop gain defined in [18] but in the minor loop gain defined by Middlebrook. Experimental systems are used to verify the proposed analysis.",
"title": ""
},
{
"docid": "neg:1840368_5",
"text": "Sensors including RFID tags have been widely deployed for measuring environmental parameters such as temperature, humidity, oxygen concentration, monitoring the location and velocity of moving objects, tracking tagged objects, and many others. To support effective, efficient, and near real-time phenomena probing and objects monitoring, streaming sensor data have to be gracefully managed in an event processing manner. Different from the traditional events, sensor events come with temporal or spatio-temporal constraints and can be non-spontaneous. Meanwhile, like general event streams, sensor event streams can be generated with very high volumes and rates. Primitive sensor events need to be filtered, aggregated and correlated to generate more semantically rich complex events to facilitate the requirements of up-streaming applications. Motivated by such challenges, many new methods have been proposed in the past to support event processing in sensor event streams. In this chapter, we survey state-of-the-art research on event processing in sensor networks, and provide a broad overview of major topics in Springer Science+Business Media New York 2013 © Managing and Mining Sensor Data, DOI 10.1007/978-1-4614-6309-2_4, C.C. Aggarwal (ed.), 77 78 MANAGING AND MINING SENSOR DATA complex RFID event processing, including event specification languages, event detection models, event processing methods and their optimizations. Additionally, we have presented an open discussion on advanced issues such as processing uncertain and out-of-order sensor events.",
"title": ""
},
{
"docid": "neg:1840368_6",
"text": "It has been shown that integration of acoustic and visual information especially in noisy conditions yields improved speech recognition results. This raises the question of how to weight the two modalities in different noise conditions. Throughout this paper we develop a weighting process adaptive to various background noise situations. In the presented recognition system, audio and video data are combined following a Separate Integration (SI) architecture. A hybrid Artificial Neural Network/Hidden Markov Model (ANN/HMM) system is used for the experiments. The neural networks were in all cases trained on clean data. Firstly, we evaluate the performance of different weighting schemes in a manually controlled recognition task with different types of noise. Next, we compare different criteria to estimate the reliability of the audio stream. Based on this, a mapping between the measurements and the free parameter of the fusion process is derived and its applicability is demonstrated. Finally, the possibilities and limitations of adaptive weighting are compared and discussed.",
"title": ""
},
{
"docid": "neg:1840368_7",
"text": "Advances in laser technology have progressed so rapidly during the past decade that successful treatment of many cutaneous concerns and congenital defects, including vascular and pigmented lesions, tattoos, scars and unwanted haircan be achieved. The demand for laser surgery has increased as a result of the relative ease with low incidence of adverse postoperative sequelae. In this review, the currently available laser systems with cutaneous applications are outlined to identify the various types of dermatologic lasers available, to list their clinical indications and to understand the possible side effects.",
"title": ""
},
{
"docid": "neg:1840368_8",
"text": "Cyber threats and the field of computer cyber defense are gaining more and more an increased importance in our lives. Starting from our regular personal computers and ending with thin clients such as netbooks or smartphones we find ourselves bombarded with constant malware attacks. In this paper we will present a new and novel way in which we can detect these kind of attacks by using elements of modern game theory. We will present the effects and benefits of game theory and we will talk about a defense exercise model that can be used to train cyber response specialists.",
"title": ""
},
{
"docid": "neg:1840368_9",
"text": "Convolutional neural networks (CNNs) are able to model local stationary structures in natural images in a multi-scale fashion, when learning all model parameters with supervision. While excellent performance was achieved for image classification when large amounts of labeled visual data are available, their success for unsupervised tasks such as image retrieval has been moderate so far.Our paper focuses on this latter setting and explores several methods for learning patch descriptors without supervision with application to matching and instance-level retrieval. To that effect, we propose a new family of patch representations, based on the recently introduced convolutional kernel networks. We show that our descriptor, named Patch-CKN, performs better than SIFT as well as other convolutional networks learned by artificially introducing supervision and is significantly faster to train. To demonstrate its effectiveness, we perform an extensive evaluation on standard benchmarks for patch and image retrieval where we obtain state-of-the-art results. We also introduce a new dataset called RomePatches, which allows to simultaneously study descriptor performance for patch and image retrieval.",
"title": ""
},
{
"docid": "neg:1840368_10",
"text": "For over a decade, researchers have devoted much effort to construct theoretical models, such as the Technology Acceptance Model (TAM) and the Expectation Confirmation Model (ECM) for explaining and predicting user behavior in IS acceptance and continuance. Another model, the Cognitive Model (COG), was proposed for continuance behavior; it combines some of the variables used in both TAM and ECM. This study applied the technique of structured equation modeling with multiple group analysis to compare the TAM, ECM, and COG models. Results indicate that TAM, ECM, and COG have quite different assumptions about the underlying constructs that dictate user behavior and thus have different explanatory powers. The six constructs in the three models were synthesized to propose a new Technology Continuance Theory (TCT). A major contribution of TCT is that it combines two central constructs: attitude and satisfaction into one continuance model, and has applicability for users at different stages of the adoption life cycle, i.e., initial, short-term and long-term users. The TCT represents a substantial improvement over the TAM, ECM and COG models in terms of both breadth of applicability and explanatory power.",
"title": ""
},
{
"docid": "neg:1840368_11",
"text": "Objective: In this paper, we develop a personalized real-time risk scoring algorithm that provides timely and granular assessments for the clinical acuity of ward patients based on their (temporal) lab tests and vital signs; the proposed risk scoring system ensures timely intensive care unit admissions for clinically deteriorating patients. Methods: The risk scoring system is based on the idea of sequential hypothesis testing under an uncertain time horizon. The system learns a set of latent patient subtypes from the offline electronic health record data, and trains a mixture of Gaussian Process experts, where each expert models the physiological data streams associated with a specific patient subtype. Transfer learning techniques are used to learn the relationship between a patient's latent subtype and her static admission information (e.g., age, gender, transfer status, ICD-9 codes, etc). Results: Experiments conducted on data from a heterogeneous cohort of 6321 patients admitted to Ronald Reagan UCLA medical center show that our score significantly outperforms the currently deployed risk scores, such as the Rothman index, MEWS, APACHE, and SOFA scores, in terms of timeliness, true positive rate, and positive predictive value. Conclusion: Our results reflect the importance of adopting the concepts of personalized medicine in critical care settings; significant accuracy and timeliness gains can be achieved by accounting for the patients’ heterogeneity. Significance: The proposed risk scoring methodology can confer huge clinical and social benefits on a massive number of critically ill inpatients who exhibit adverse outcomes including, but not limited to, cardiac arrests, respiratory arrests, and septic shocks.",
"title": ""
},
{
"docid": "neg:1840368_12",
"text": "We present a method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain critic network. However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes. This can lead to ambiguous features being generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain. Our key idea is to replace the critic with one that detects non-discriminative features, using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvement over the state of the art. We also show that our approach can be used to train Generative Adversarial Networks for semi-supervised learning.",
"title": ""
},
{
"docid": "neg:1840368_13",
"text": "Resource scheduling in cloud is a challenging job and the scheduling of appropriate resources to cloud workloads depends on the QoS requirements of cloud applications. In cloud environment, heterogeneity, uncertainty and dispersion of resources encounters problems of allocation of resources, which cannot be addressed with existing resource allocation policies. Researchers still face troubles to select the efficient and appropriate resource scheduling algorithm for a specific workload from the existing literature of resource scheduling algorithms. This research depicts a broad methodical literature analysis of resource management in the area of cloud in general and cloud resource scheduling in specific. In this survey, standard methodical literature analysis technique is used based on a complete collection of 110 research papers out of large collection of 1206 research papers published in 19 foremost workshops, symposiums and conferences and 11 prominent journals. The current status of resource scheduling in cloud computing is distributed into various categories. Methodical analysis of resource scheduling in cloud computing is presented, resource scheduling algorithms and management, its types and benefits with tools, resource scheduling aspects and resource distribution policies are described. The literature concerning to thirteen types of resource scheduling algorithms has also been stated. Further, eight types of resource distribution policies are described. Methodical analysis of this research work will help researchers to find the important characteristics of resource scheduling algorithms and also will help to select most suitable algorithm for scheduling a specific workload. Future research directions have also been suggested in this research work.",
"title": ""
},
{
"docid": "neg:1840368_14",
"text": "In this article, we have reviewed the state of the art of IPT systems and have explored the suitability of the technology to wirelessly charge battery powered vehicles. the review shows that the IPT technology has merits for stationary charging (when the vehicle is parked), opportunity charging (when the vehicle is stopped for a short period of time, for example, at a bus stop), and dynamic charging (when the vehicle is moving along a dedicated lane equipped with an IPT system). Dynamic wireless charging holds promise to partially or completely eliminate the overnight charging through a compact network of dynamic chargers installed on the roads that would keep the vehicle batteries charged at all times, consequently reducing the range anxiety and increasing the reliability of EVs. Dynamic charging can help lower the price of EVs by reducing the size of the battery pack. Indeed, if the recharging energy is readily available, the batteries do not have to support the whole driving range but only supply power when the IPT system is not available. Depending on the power capability, the use of dynamic charging may increase driving range and reduce the size of the battery pack.",
"title": ""
},
{
"docid": "neg:1840368_15",
"text": "The complex methodology of investigations was applied to study a movement structure on bench press. We have checked the usefulness of multimodular measuring system (SMART-E, BTS company, Italy) and a special device for tracking the position of barbell (pantograph). Software Smart Analyser was used to create a database allowing chosen parameters to be compared. The results from different measuring devices are very similar, therefore the replacement of many devices by one multimodular system is reasonable. In our study, the effect of increased barbell load on the values of muscles activity and bar kinematics during the flat bench press movement was clearly visible. The greater the weight of a barbell, the greater the myoactivity of shoulder muscles and vertical velocity of the bar. It was also confirmed the presence of the so-called sticking point (period) during the concentric phase of the bench press. In this study, the initial velocity of the barbell decreased (v(min)) not only under submaximal and maximal loads (90 and 100% of the one repetition maximum; 1-RM), but also under slightly lighter weights (70 and 80% of 1-RM).",
"title": ""
},
{
"docid": "neg:1840368_16",
"text": "Classification and regression trees are becoming increasingly popular for partitioning data and identifying local structure in small and large datasets. Classification trees include those models in which the dependent variable (the predicted variable) is categorical. Regression trees include those in which it is continuous. This paper discusses pitfalls in the use of these methods and highlights where they are especially suitable. Paper presented at the 1992 Sun Valley, ID, Sawtooth/SYSTAT Joint Software Conference.",
"title": ""
},
{
"docid": "neg:1840368_17",
"text": "Although there is considerable interest in the advance bookings model as a forecasting method in the hotel industry, there has been little research analyzing the use of an advance booking curve in forecasting hotel reservations. The mainstream of advance booking models reviewed in the literature uses only the bookings-on-hand data on a certain day and ignores the previous booking data. This empirical study analyzes the entire booking data set for one year provided by the Hotel ICON in Hong Kong, and identifies the trends and patterns in the data. The analysis demonstrates the use of an advance booking curve in forecasting hotel reservations at property level.",
"title": ""
},
{
"docid": "neg:1840368_18",
"text": "Our analysis of many real-world event based applications has revealed that existing Complex Event Processing technology (CEP), while effective for efficient pattern matching on event stream, is limited in its capability of reacting in realtime to opportunities and risks detected or environmental changes. We are the first to tackle this problem by providing active rule support embedded directly within the CEP engine, henceforth called Active Complex Event Processing technology, or short, Active CEP. We design the Active CEP model and associated rule language that allows rules to be triggered by CEP system state changes and correctly executed during the continuous query process. Moreover we design an Active CEP infrastructure, that integrates the active rule component into the CEP kernel, allowing finegrained and optimized rule processing. We demonstrate the power of Active CEP by applying it to the development of a collaborative project with UMass Medical School, which detects potential threads of infection and reminds healthcare workers to perform hygiene precautions in real-time. 1. BACKGROUND AND MOTIVATION Complex patterns of events often capture exceptions, threats or opportunities occurring across application space and time. Complex Event Processing (CEP) technology has thus increasingly gained popularity for efficiently detecting such event patterns in real-time. For example CEP has been employed by diverse applications ranging from healthcare systems , financial analysis , real-time business intelligence to RFID based surveillance. However, existing CEP technologies [3, 7, 2, 5], while effective for pattern matching, are limited in their capability of supporting active rules. We motivate the need for such capability based on our experience with the development of a real-world hospital infection control system, called HygieneReminder, or short HyReminder. Application: HyReminder. According to the U.S. Centers for Disease Control and Prevention [8], healthcareassociated infections hit 1.7 million people a year in the Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Articles from this volume were presented at The 36th International Conference on Very Large Data Bases, September 13-17, 2010, Singapore. Proceedings of the VLDB Endowment, Vol. 3, No. 2 Copyright 2010 VLDB Endowment 2150-8097/10/09... $ 10.00. United States, causing an estimated 99,000 deaths. HyReminder is a collaborated project between WPI and University of Massachusetts Medical School (UMMS) that uses advanced CEP technologies to solve this long-standing public health problem. HyReminder system aims to continuously track healthcare workers (HCW) for hygiene compliance (for example cleansing hands before entering a H1N1 patient’s room), and remind the HCW at the appropriate moments to perform hygiene precautions thus preventing spread of infections. CEP technologies are adopted to efficiently monitor event patterns, such as the sequence that a HCW left a patient room (this behavior is measured by a sensor reading and modeled as “exit” event), did not sanitize his hands (referred as “!sanitize”, where ! 
represents negation), and then entered another patient’s room (referred as “enter”). Such a sequence of behaviors, i.e. SEQ(exit,!sanitize,enter), would be deemed as a violation of hand hygiene regulations. Besides detecting complex events, the HyReminder system requires the ability to specify logic rules reminding HCWs to perform the respective appropriate hygiene upon detection of an imminent hand hygiene violation or an actual observed violation. A condensed version of example logic rules derived from HyReminder and modeled using CEP semantics is depicted in Figure 1. In the figure, the edge marked “Q1.1” expresses the logic that “if query Q1.1 is satisfied for a HCW, then change his hygiene status to warning and change his badge light to yellow”. This logic rule in fact specifies how the system should react to the observed change, here meaning the risk being detected by the continuous pattern matching query Q1.1, during the long running query process. The system’s streaming environment requires that such reactions be executed in a timely fashion. An additional complication arises in that the HCW status changed by this logic rule must be used as a condition by other continuous queries at run time, like Q2.1 and Q2.2. We can see that active rules and continuous queries over streaming data are tightly-coupled: continuous queries are monitoring the world while active rules are changing the world, both in real-time. Yet contrary to traditional databases, data is not persistently stored in a DSMS, but rather streamed through the system in fluctuating arrival rate. Thus processing active rules in CEP systems requires precise synchronization between queries and rules and careful consideration of latency and resource utilization. Limitations of Existing CEP Technology. In summary, the following active functionalities are needed by many event stream applications, but not supported by the existing",
"title": ""
}
] |
1840369 | The Riemann Zeros and Eigenvalue Asymptotics | [
{
"docid": "pos:1840369_0",
"text": "Assuming a special version of the Montgomery-Odlyzko law on the pair correlation of zeros of the Riemann zeta function conjectured by Rudnick and Sarnak and assuming the Riemann Hypothesis, we prove new results on the prime number theorem, difference of consecutive primes, and the twin prime conjecture. 1. Introduction. Assuming the Riemann Hypothesis (RH), let us denote by 1=2 ig a nontrivial zero of a primitive L-function L
s;p attached to an irreducible cuspidal automorphic representation of GLm; m ^ 1, over Q. When m 1, this L-function is the Riemann zeta function z
s or the Dirichlet L-function L
s; c for a primitive character c. Rudnick and Sarnak [13] examined the n-level correlation for these zeros and made a far reaching conjecture which is called the Montgomery [9]-Odlyzko [11], [12] Law by Katz and Sarnak [6]. Rudnick and Sarnak also proved a case of their conjecture when a test function f has its Fourier transform b f supported in a restricted region. In this article, we will show that a version of the above conjecture for the pair correlation of zeros of the zeta function z
s implies interesting arithmetical results on prime distribution (Theorems 2, 3, and 4). These results can give us deep insight on possible ultimate bounds of these prime distribution problems. One can also see that the pair (and nlevel) correlation of zeros of zeta and L-functions is a powerful method in number theory. Our computation shows that the test function f and the support of its Fourier transform b f play a crucial role in the conjecture. To see the conjecture in Rudnick and Sarnak [13] in the case of the zeta function z
s and n 2, the pair correlation, we use a test function f
x; y which satisfies the following three conditions: (i) f
x; y f
y; x for any x; y 2 R, (ii) f
x t; y t f
x; y for any t 2 R, and (iii) f
x; y tends to 0 rapidly as j
x; yj ! 1 on the hyperplane x y 0. Arch. Math. 76 (2001) 41±50 0003-889X/01/010041-10 $ 3.50/0 Birkhäuser Verlag, Basel, 2001 Archiv der Mathematik Mathematics Subject Classification (1991): 11M26, 11N05, 11N75. 1) Supported in part by China NNSF Grant # 19701019. 2) Supported in part by USA NSF Grant # DMS 97-01225. Define the function W2
x; y 1ÿ sin p
xÿ y
p
xÿ y : Denote the Dirac function by d
x which satisfies R d
xdx 1 and defines a distribution f 7! f
0. We then define the pair correlation sum of zeros gj of the zeta function: R2
T; f ; h P g1;g2 distinct h g1 T ; g2 T f Lg1 2p ; Lg2 2p ; where T ^ 2, L log T, and h
x; y is a localized cutoff function which tends to zero rapidly when j
x; yj tends to infinity. The conjecture proposed by Rudnick and Sarnak [13] is that R2
T; f ; h 1 2p TL",
"title": ""
}
] | [
{
"docid": "neg:1840369_0",
"text": "The illegal distribution of a digital movie is a common and significant threat to the film industry. With the advent of high-speed broadband Internet access, a pirated copy of a digital video can now be easily distributed to a global audience. A possible means of limiting this type of digital theft is digital video watermarking whereby additional information, called a watermark, is embedded in the host video. This watermark can be extracted at the decoder and used to determine whether the video content is watermarked. This paper presents a review of the digital video watermarking techniques in which their applications, challenges, and important properties are discussed, and categorizes them based on the domain in which they embed the watermark. It then provides an overview of a few emerging innovative solutions using watermarks. Protecting a 3D video by watermarking is an emerging area of research. The relevant 3D video watermarking techniques in the literature are classified based on the image-based representations of a 3D video in stereoscopic, depth-image-based rendering, and multi-view video watermarking. We discuss each technique, and then present a survey of the literature. Finally, we provide a summary of this paper and propose some future research directions.",
"title": ""
},
{
"docid": "neg:1840369_1",
"text": "Convolutional Neural Networks (CNNs) have reached outstanding results in several complex visual recognition tasks, such as classification and scene parsing. CNNs are composed of multiple filtering layers that perform 2D convolutions over input images. The intrinsic parallelism in such a computation kernel makes it suitable to be effectively accelerated on parallel hardware. In this paper we propose a highly flexible and scalable architectural template for acceleration of CNNs on FPGA devices, based on the cooperation between a set of software cores and a parallel convolution engine that communicate via a tightly coupled L1 shared scratchpad. Our accelerator structure, tested on a Xilinx Zynq XC-Z7045 device, delivers peak performance up to 80 GMAC/s, corresponding to 100 MMAC/s for each DSP slice in the programmable fabric. Thanks to the flexible architecture, convolution operations can be scheduled in order to reduce input/output bandwidth down to 8 bytes per cycle without degrading the performance of the accelerator in most of the meaningful use-cases.",
"title": ""
},
{
"docid": "neg:1840369_2",
"text": "Generative adversarial networks (GANs) have achieved huge success in unsupervised learning. Most of GANs treat the discriminator as a classifier with the binary sigmoid cross entropy loss function. However, we find that the sigmoid cross entropy loss function will sometimes lead to the saturation problem in GANs learning. In this work, we propose to adopt the L2 loss function for the discriminator. The properties of the L2 loss function can improve the stabilization of GANs learning. With the usage of the L2 loss function, we propose the multi-class generative adversarial networks for the purpose of image generation with multiple classes. We evaluate the multi-class GANs on a handwritten Chinese characters dataset with 3740 classes. The experiments demonstrate that the multi-class GANs can generate elegant images on datasets with a large number of classes. Comparison experiments between the L2 loss function and the sigmoid cross entropy loss function are also conducted and the results demonstrate the stabilization of the L2 loss function.",
"title": ""
},
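The passage above argues for replacing the discriminator's sigmoid cross-entropy loss with an L2 (least-squares) loss to avoid saturation. As a minimal illustrative sketch (not the authors' implementation), the two least-squares objectives can be written as follows; the target values a = 0, b = 1, c = 1 and the toy score vectors are assumptions of the example.

```python
import numpy as np

def l2_discriminator_loss(d_real, d_fake, a=0.0, b=1.0):
    """Least-squares loss for the discriminator: push real scores toward b, fake scores toward a."""
    return 0.5 * np.mean((d_real - b) ** 2) + 0.5 * np.mean((d_fake - a) ** 2)

def l2_generator_loss(d_fake, c=1.0):
    """Least-squares loss for the generator: push scores of generated samples toward c."""
    return 0.5 * np.mean((d_fake - c) ** 2)

# Toy usage with raw (unbounded) discriminator scores; unlike sigmoid cross-entropy,
# the generator gradient (d_fake - c) does not vanish for confidently rejected fakes.
rng = np.random.default_rng(0)
d_real = rng.normal(0.9, 0.1, size=64)   # scores on real samples
d_fake = rng.normal(-3.0, 0.1, size=64)  # far-from-boundary fake samples
print(l2_discriminator_loss(d_real, d_fake), l2_generator_loss(d_fake))
```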
{
"docid": "neg:1840369_3",
"text": "In this paper we propose a new footstep detection technique for data acquired using a triaxial geophone. The idea evolves from the investigation of geophone transduction principle. The technique exploits the randomness of neighbouring data vectors observed when the footstep is absent. We extend the same principle for triaxial signal denoising. Effectiveness of the proposed technique for transient detection and denoising are presented for real seismic data collected using a triaxial geophone.",
"title": ""
},
{
"docid": "neg:1840369_4",
"text": "Example classifications (test set) [And09] Andriluka et al. Pictorial structures revisited: People detection and articulated pose estimation. In CVPR, 2009 [Eic09] Eichner et al. Articulated Human Pose Estimation and Search in (Almost) Unconstrained Still Images. In IJCV, 2012 [Sap10] Sapp et al. Cascaded models for articulated pose estimation. In ECCV, 2010 [Yan11] Yang and Ramanan. Articulated pose estimation with flexible mixturesof-parts. In CVPR, 2011. References Human Pose Estimation (HPE) Algorithm Input",
"title": ""
},
{
"docid": "neg:1840369_5",
"text": "Harnessing crowds can be a powerful mechanism for increasing innovation. However, current approaches to crowd innovation rely on large numbers of contributors generating ideas independently in an unstructured way. We introduce a new approach called distributed analogical idea generation, which aims to make idea generation more effective and less reliant on chance. Drawing from the literature in cognitive science on analogy and schema induction, our approach decomposes the creative process in a structured way amenable to using crowds. In three experiments we show that distributed analogical idea generation leads to better ideas than example-based approaches, and investigate the conditions under which crowds generate good schemas and ideas. Our results have implications for improving creativity and building systems for distributed crowd innovation.",
"title": ""
},
{
"docid": "neg:1840369_6",
"text": "The key issue in image fusion is the process of defining evaluation indices for the output image and for multi-scale image data set. This paper attempted to develop a fusion model for plantar pressure distribution images, which is expected to contribute to feature points construction based on shoe-last surface generation and modification. First, the time series plantar pressure distribution image was preprocessed, including back removing and Laplacian of Gaussian (LoG) filter. Then, discrete wavelet transform and a multi-scale pixel conversion fusion operating using a parameter estimation optimized Gaussian mixture model (PEO-GMM) were performed. The output image was used in a fuzzy weighted evaluation system, that included the following evaluation indices: mean, standard deviation, entropy, average gradient, and spatial frequency; the difference with the reference image, including the root mean square error, signal to noise ratio (SNR), and the peak SNR; and the difference with source image including the cross entropy, joint entropy, mutual information, deviation index, correlation coefficient, and the degree of distortion. These parameters were used to evaluate the results of the comprehensive evaluation value for the synthesized image. The image reflected the fusion of plantar pressure distribution using the proposed method compared with other fusion methods, such as up-down, mean-mean, and max-min fusion. The experimental results showed that the proposed LoG filtering with PEO-GMM fusion operator outperformed other methods.",
"title": ""
},
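The passage above lists several no-reference evaluation indices for the fused image (mean, standard deviation, entropy, average gradient, spatial frequency). The sketch below shows one common way to compute them; the passage does not give the authors' exact definitions, so the 256-bin histogram used for entropy and the particular average-gradient and spatial-frequency formulas are assumptions of this illustration.

```python
import numpy as np

def fusion_indices(img):
    """A few no-reference fusion-quality indices for a 2-D grayscale float array.

    Entropy assumes the image is scaled to [0, 255]; the 256-bin histogram
    and the specific formulas below are assumptions of this sketch.
    """
    hist, _ = np.histogram(img, bins=256, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))

    gr, gc = np.gradient(img.astype(float))                 # gradients along rows and columns
    avg_gradient = np.mean(np.sqrt((gr ** 2 + gc ** 2) / 2.0))

    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))        # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))        # column frequency
    spatial_freq = np.sqrt(rf ** 2 + cf ** 2)

    return {"mean": img.mean(), "std": img.std(), "entropy": entropy,
            "avg_gradient": avg_gradient, "spatial_frequency": spatial_freq}

# Toy usage on a random "fused" pressure map scaled to [0, 255].
rng = np.random.default_rng(3)
fused = rng.uniform(0, 255, size=(64, 64))
print(fusion_indices(fused))
```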
{
"docid": "neg:1840369_7",
"text": "Gyroscope is one of the primary sensors for air vehicle navigation and controls. This paper investigates the noise characteristics of microelectromechanical systems (MEMS) gyroscope null drift and temperature compensation. This study mainly focuses on temperature as a long-term error source. An in-house-designed inertial measurement unit (IMU) is used to perform temperature effect testing in the study. The IMU is placed into a temperature control chamber. The chamber temperature is controlled to increase from 25 C to 80 C at approximately 0.8 degrees per minute. After that, the temperature is decreased to -40 C and then returns to 25 C. The null voltage measurements clearly demonstrate the rapidly changing short-term random drift and slowly changing long-term drift due to temperature variations. The characteristics of the short-term random drifts are analyzed and represented in probability density functions. A temperature calibration mechanism is established by using an artificial neural network to compensate the long-term drift. With the temperature calibration, the attitude computation problem due to gyro drifts can be improved significantly.",
"title": ""
},
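The passage above describes calibrating long-term gyroscope null drift against chamber temperature and compensating it with an artificial neural network. The sketch below illustrates the compensation idea only: the data are synthetic, and a cubic polynomial fit stands in for the neural-network model, so all numbers are hypothetical.

```python
import numpy as np

# Illustrative sketch only: the passage trains a neural network; a cubic
# polynomial fit stands in here as the temperature-to-null-drift model.
rng = np.random.default_rng(1)
temp_c = np.linspace(-40.0, 80.0, 200)                       # chamber temperature sweep
true_drift = 0.02 * temp_c + 1e-4 * temp_c ** 2              # hypothetical long-term drift
measured = true_drift + rng.normal(0.0, 0.05, temp_c.size)   # plus short-term random noise

coeffs = np.polyfit(temp_c, measured, deg=3)                 # calibrate drift vs. temperature
predicted_drift = np.polyval(coeffs, temp_c)

compensated = measured - predicted_drift                     # residual after compensation
print("raw std: %.4f, compensated std: %.4f" % (measured.std(), compensated.std()))
```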
{
"docid": "neg:1840369_8",
"text": "presenting with bullous pemphigoid-like lesions. Dermatol Online J 2006; 12: 19. 3 Bhawan J, Milstone E, Malhotra R, et al. Scabies presenting as bullous pemphigoid-like eruption. J Am Acad Dermatol 1991; 24: 179–181. 4 Ostlere LS, Harris D, Rustin MH. Scabies associated with a bullous pemphigoid-like eruption. Br J Dermatol 1993; 128: 217–219. 5 Parodi A, Saino M, Rebora A. Bullous pemphigoid-like scabies. Clin Exp Dermatol 1993; 18: 293. 6 Slawsky LD, Maroon M, Tyler WB, et al. Association of scabies with a bullous pemphigoid-like eruption. J Am Acad Dermatol 1996; 34: 878–879. 7 Chen MC, Luo DQ. Bullous scabies failing to respond to glucocorticoids, immunoglobulin, and cyclophosphamide. Int J Dermatol 2014; 53: 265–266. 8 Nakamura E, Taniguchi H, Ohtaki N. A case of crusted scabies with a bullous pemphigoid-like eruption and nail involvement. J Dermatol 2006; 33: 196–201. 9 Galvany Rossell L, Salleras Redonnet M, Umbert Millet P. Bullous scabies responding to ivermectin therapy. Actas Dermosifiliogr 2010; 101: 81–84. 10 Gutte RM. Bullous scabies in an adult: a case report with review of literature. Indian Dermatol Online J 2013; 4: 311–313.",
"title": ""
},
{
"docid": "neg:1840369_9",
"text": "Humans can naturally understand an image in depth with the aid of rich knowledge accumulated from daily lives or professions. For example, to achieve fine-grained image recognition (e.g., categorizing hundreds of subordinate categories of birds) usually requires a comprehensive visual concept organization including category labels and part-level attributes. In this work, we investigate how to unify rich professional knowledge with deep neural network architectures and propose a Knowledge-Embedded Representation Learning (KERL) framework for handling the problem of fine-grained image recognition. Specifically, we organize the rich visual concepts in the form of knowledge graph and employ a Gated Graph Neural Network to propagate node message through the graph for generating the knowledge representation. By introducing a novel gated mechanism, our KERL framework incorporates this knowledge representation into the discriminative image feature learning, i.e., implicitly associating the specific attributes with the feature maps. Compared with existing methods of fine-grained image classification, our KERL framework has several appealing properties: i) The embedded high-level knowledge enhances the feature representation, thus facilitating distinguishing the subtle differences among subordinate categories. ii) Our framework can learn feature maps with a meaningful configuration that the highlighted regions finely accord with the nodes (specific attributes) of the knowledge graph. Extensive experiments on the widely used CaltechUCSD bird dataset demonstrate the superiority of ∗Corresponding author is Liang Lin (Email: linliang@ieee.org). This work was supported by the National Natural Science Foundation of China under Grant 61622214, the Science and Technology Planning Project of Guangdong Province under Grant 2017B010116001, and Guangdong Natural Science Foundation Project for Research Teams under Grant 2017A030312006. head-pattern: masked Bohemian",
"title": ""
},
{
"docid": "neg:1840369_10",
"text": "Deep neural networks have become the stateof-the-art models in numerous machine learning tasks. However, general guidance to network architecture design is still missing. In our work, we bridge deep neural network design with numerical differential equations. We show that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations. This finding brings us a brand new perspective on the design of effective deep architectures. We can take advantage of the rich knowledge in numerical analysis to guide us in designing new and potentially more effective deep networks. As an example, we propose a linear multi-step architecture (LM-architecture) which is inspired by the linear multi-step method solving ordinary differential equations. The LMarchitecture is an effective structure that can be used on any ResNet-like networks. In particular, we demonstrate that LM-ResNet and LMResNeXt (i.e. the networks obtained by applying the LM-architecture on ResNet and ResNeXt respectively) can achieve noticeably higher accuracy than ResNet and ResNeXt on both CIFAR and ImageNet with comparable numbers of trainable parameters. In particular, on both CIFAR and ImageNet, LM-ResNet/LM-ResNeXt can significantly compress the original networkSchool of Mathematical Sciences, Peking University, Beijing, China MGH/BWH Center for Clinical Data Science, Masschusetts General Hospital, Harvard Medical School Center for Data Science in Health and Medicine, Peking University Laboratory for Biomedical Image Analysis, Beijing Institute of Big Data Research Beijing International Center for Mathematical Research, Peking University Center for Data Science, Peking University. Correspondence to: Bin Dong <dongbin@math.pku.edu.cn>, Quanzheng Li <Li.Quanzheng@mgh.harvard.edu>. Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s). s while maintaining a similar performance. This can be explained mathematically using the concept of modified equation from numerical analysis. Last but not least, we also establish a connection between stochastic control and noise injection in the training process which helps to improve generalization of the networks. Furthermore, by relating stochastic training strategy with stochastic dynamic system, we can easily apply stochastic training to the networks with the LM-architecture. As an example, we introduced stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10.",
"title": ""
},
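The passage above rests on the analogy between residual networks and numerical ODE solvers: a residual block x_{n+1} = x_n + f(x_n) has the shape of a forward-Euler step, and linear multi-step solvers motivate the LM-architecture. The toy sketch below only illustrates the numerical-analysis side of that analogy (forward Euler versus the two-step Adams-Bashforth scheme on dx/dt = -x); it is not the authors' network architecture.

```python
import numpy as np

# Toy illustration of the ResNet <-> ODE-solver analogy from the passage:
# a residual update x_{n+1} = x_n + h*f(x_n) is forward Euler; a linear
# multi-step scheme reuses the previous step as well.  Solve dx/dt = -x.
f = lambda x: -x
h, steps, x0 = 0.1, 50, 1.0
exact = x0 * np.exp(-h * steps)

# Forward Euler (one-step, "plain ResNet" shape).
x = x0
for _ in range(steps):
    x = x + h * f(x)
euler = x

# Two-step Adams-Bashforth (linear multi-step, "LM-architecture" shape).
x_prev, x_curr = x0, x0 + h * f(x0)            # bootstrap the second point with Euler
for _ in range(steps - 1):
    x_next = x_curr + h * (1.5 * f(x_curr) - 0.5 * f(x_prev))
    x_prev, x_curr = x_curr, x_next
ab2 = x_curr

print(abs(euler - exact), abs(ab2 - exact))    # the two-step scheme's error is noticeably smaller
```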
{
"docid": "neg:1840369_11",
"text": "Network alignment is the problem of matching the nodes of two graphs, maximizing the similarity of the matched nodes and the edges between them. This problem is encountered in a wide array of applications---from biological networks to social networks to ontologies---where multiple networked data sources need to be integrated. Due to the difficulty of the task, an accurate alignment can rarely be found without human assistance. Thus, it is of great practical importance to develop network alignment algorithms that can optimally leverage experts who are able to provide the correct alignment for a small number of nodes. Yet, only a handful of existing works address this active network alignment setting.\n The majority of the existing active methods focus on absolute queries (\"are nodes a and b the same or not?\"), whereas we argue that it is generally easier for a human expert to answer relative queries (\"which node in the set b1,...,bn is the most similar to node a?\"). This paper introduces two novel relative-query strategies, TopMatchings and GibbsMatchings, which can be applied on top of any network alignment method that constructs and solves a bipartite matching problem. Our methods identify the most informative nodes to query by sampling the matchings of the bipartite graph associated to the network-alignment instance.\n We compare the proposed approaches to several commonly-used query strategies and perform experiments on both synthetic and real-world datasets. Our sampling-based strategies yield the highest overall performance, outperforming all the baseline methods by more than 15 percentage points in some cases. In terms of accuracy, TopMatchings and GibbsMatchings perform comparably. However, GibbsMatchings is significantly more scalable, but it also requires hyperparameter tuning for a temperature parameter.",
"title": ""
},
{
"docid": "neg:1840369_12",
"text": "This paper discusses a fuzzy cost-based failure modes, effects, and criticality analysis (FMECA) approach for wind turbines. Conventional FMECA methods use a crisp risk priority number (RPN) as a measure of criticality which suffers from the difficulty of quantifying the risk. One method of increasing wind turbine reliability is to install a condition monitoring system (CMS). The RPN can be reduced with the help of a CMS because faults can be detected at an incipient level, and preventive maintenance can be scheduled. However, the cost of installing a CMS cannot be ignored. The fuzzy cost-based FMECA method proposed in this paper takes into consideration the cost of a CMS and the benefits it brings and provides a method for determining whether it is financially profitable to install a CMS. The analysis is carried out in MATLAB® which provides functions for fuzzy logic operation and defuzzification.",
"title": ""
},
{
"docid": "neg:1840369_13",
"text": "Co-Attentions are highly effective attention mechanisms for text matching applications. Co-Attention enables the learning of pairwise attentions, i.e., learning to attend based on computing word-level affinity scores between two documents. However, text matching problems can exist in either symmetrical or asymmetrical domains. For example, paraphrase identification is a symmetrical task while question-answer matching and entailment classification are considered asymmetrical domains. In this paper, we argue that Co-Attention models in asymmetrical domains require different treatment as opposed to symmetrical domains, i.e., a concept of word-level directionality should be incorporated while learning word-level similarity scores. Hence, the standard inner product in real space commonly adopted in co-attention is not suitable. This paper leverages attractive properties of the complex vector space and proposes a co-attention mechanism based on the complex-valued inner product (Hermitian products). Unlike the real dot product, the dot product in complex space is asymmetric because the first item is conjugated. Aside from modeling and encoding directionality, our proposed approach also enhances the representation learning process. Extensive experiments on five text matching benchmark datasets demonstrate the effectiveness of our approach.",
"title": ""
},
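The passage above proposes scoring word pairs with a complex-valued (Hermitian) inner product, whose conjugation of the first argument makes the score direction-dependent. The sketch below only illustrates that algebraic property with random complex word vectors; how the complex score is reduced to a real attention weight and how the complex embeddings are learned are not specified in the passage and are left out here.

```python
import numpy as np

def hermitian_affinity(A, B):
    """Pairwise Hermitian inner products between complex-valued word vectors.

    A: (n, d) complex array, B: (m, d) complex array.  Entry [i, j] is
    <A_i, B_j> = sum_k conj(A[i, k]) * B[j, k].  Swapping the arguments
    conjugates the result, so the imaginary part encodes directionality.
    """
    return np.conj(A) @ B.T

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 8)) + 1j * rng.normal(size=(5, 8))   # e.g., question tokens
B = rng.normal(size=(7, 8)) + 1j * rng.normal(size=(7, 8))   # e.g., answer tokens

M_ab = hermitian_affinity(A, B)
M_ba = hermitian_affinity(B, A)
print(np.allclose(M_ab, np.conj(M_ba).T))   # True: conjugate-symmetric
print(np.allclose(M_ab, M_ba.T))            # False: not symmetric, i.e. direction-dependent
```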
{
"docid": "neg:1840369_14",
"text": "In contrast to the Android application layer, Android’s application framework’s internals and their influence on the platform security and user privacy are still largely a black box for us. In this paper, we establish a static runtime model of the application framework in order to study its internals and provide the first high-level classification of the framework’s protected resources. We thereby uncover design patterns that differ highly from the runtime model at the application layer. We demonstrate the benefits of our insights for security-focused analysis of the framework by re-visiting the important use-case of mapping Android permissions to framework/SDK API methods. We, in particular, present a novel mapping based on our findings that significantly improves on prior results in this area that were established based on insufficient knowledge about the framework’s internals. Moreover, we introduce the concept of permission locality to show that although framework services follow the principle of separation of duty, the accompanying permission checks to guard sensitive operations violate it.",
"title": ""
},
{
"docid": "neg:1840369_15",
"text": "Attribute-based encryption (ABE) is a vision of public key encryption that allows users to encrypt and decrypt messages based on user attributes. This functionality comes at a cost. In a typical implementation, the size of the ciphertext is proportional to the number of attributes associated with it and the decryption time is proportional to the number of attributes used during decryption. Specifically, many practical ABE implementations require one pairing operation per attribute used during decryption. This work focuses on designing ABE schemes with fast decryption algorithms. We restrict our attention to expressive systems without systemwide bounds or limitations, such as placing a limit on the number of attributes used in a ciphertext or a private key. In this setting, we present the first key-policy ABE system where ciphertexts can be decrypted with a constant number of pairings. We show that GPSW ciphertexts can be decrypted with only 2 pairings by increasing the private key size by a factor of |Γ |, where Γ is the set of distinct attributes that appear in the private key. We then present a generalized construction that allows each system user to independently tune various efficiency tradeoffs to their liking on a spectrum where the extremes are GPSW on one end and our very fast scheme on the other. This tuning requires no changes to the public parameters or the encryption algorithm. Strategies for choosing an individualized user optimization plan are discussed. Finally, we discuss how these ideas can be translated into the ciphertext-policy ABE setting at a higher cost.",
"title": ""
},
{
"docid": "neg:1840369_16",
"text": "This article provides an overview of the pathogenesis of type 2 diabetes mellitus. Discussion begins by describing normal glucose homeostasis and ingestion of a typical meal and then discusses glucose homeostasis in diabetes. Topics covered include insulin secretion in type 2 diabetes mellitus and insulin resistance, the site of insulin resistance, the interaction between insulin sensitivity and secretion, the role of adipocytes in the pathogenesis of type 2 diabetes, cellular mechanisms of insulin resistance including glucose transport and phosphorylation, glycogen and synthesis,glucose and oxidation, glycolysis, and insulin signaling.",
"title": ""
},
{
"docid": "neg:1840369_17",
"text": "Lymphedema is a common condition frequently seen in cancer patients who have had lymph node dissection +/- radiation treatment. Traditional management is mainly non-surgical and unsatisfactory. Surgical treatment has relied on excisional techniques in the past. Physiologic operations have more recently been devised to help improve this condition. Assessing patients and deciding which of the available operations to offer them can be challenging. MRI is an extremely useful tool in patient assessment and treatment planning. J. Surg. Oncol. 2017;115:18-22. © 2016 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "neg:1840369_18",
"text": "Low power consumption is crucial for medical implant devices. A single-chip, very-low-power interface IC used in implantable pacemaker systems is presented. It contains amplifiers, filters, ADCs, battery management system, voltage multipliers, high voltage pulse generators, programmable logic and timing control. A few circuit techniques are proposed to achieve nanopower circuit operations within submicron CMOS process. Subthreshold transistor designs and switched-capacitor circuits are widely used. The 200 k transistor IC occupies 49 mm/sup 2/, is fabricated in a 0.5-/spl mu/m two-poly three-metal multi-V/sub t/ process, and consumes 8 /spl mu/W.",
"title": ""
},
{
"docid": "neg:1840369_19",
"text": "A recurring problem faced when training neural networks is that there is typically not enough data to maximize the generalization capability of deep neural networks. There are many techniques to address this, including data augmentation, dropout, and transfer learning. In this paper, we introduce an additional method, which we call smart augmentation and we show how to use it to increase the accuracy and reduce over fitting on a target network. Smart augmentation works, by creating a network that learns how to generate augmented data during the training process of a target network in a way that reduces that networks loss. This allows us to learn augmentations that minimize the error of that network. Smart augmentation has shown the potential to increase accuracy by demonstrably significant measures on all data sets tested. In addition, it has shown potential to achieve similar or improved performance levels with significantly smaller network sizes in a number of tested cases.",
"title": ""
}
] |
1840370 | Dimensions of peri-implant mucosa: an evaluation of maxillary anterior single implants in humans. | [
{
"docid": "pos:1840370_0",
"text": "IN 1921, Gottlieb's discovery of the epithelial attachment of the gingiva opened new horizons which served as the basis for a better understanding of the biology of the dental supporting tissues in health and disease. Three years later his pupils, Orban and Kohler (1924), undertook the task of measuring the epithelial attachment as well as the surrounding tissue relations during the four phases of passive eruption of the tooth. Gottlieb and Orban's descriptions of the epithelial attachment unveiled the exact morphology of this epithelial structure, and clarified the relation of this",
"title": ""
}
] | [
{
"docid": "neg:1840370_0",
"text": "We report the discovery of a highly active Ni-Co alloy electrocatalyst for the oxidation of hydrazine (N(2)H(4)) and provide evidence for competing electrochemical (faradaic) and chemical (nonfaradaic) reaction pathways. The electrochemical conversion of hydrazine on catalytic surfaces in fuel cells is of great scientific and technological interest, because it offers multiple redox states, complex reaction pathways, and significantly more favorable energy and power densities compared to hydrogen fuel. Structure-reactivity relations of a Ni(60)Co(40) alloy electrocatalyst are presented with a 6-fold increase in catalytic N(2)H(4) oxidation activity over today's benchmark catalysts. We further study the mechanistic pathways of the catalytic N(2)H(4) conversion as function of the applied electrode potential using differentially pumped electrochemical mass spectrometry (DEMS). At positive overpotentials, N(2)H(4) is electrooxidized into nitrogen consuming hydroxide ions, which is the fuel cell-relevant faradaic reaction pathway. In parallel, N(2)H(4) decomposes chemically into molecular nitrogen and hydrogen over a broad range of electrode potentials. The electroless chemical decomposition rate was controlled by the electrode potential, suggesting a rare example of a liquid-phase electrochemical promotion effect of a chemical catalytic reaction (\"EPOC\"). The coexisting electrocatalytic (faradaic) and heterogeneous catalytic (electroless, nonfaradaic) reaction pathways have important implications for the efficiency of hydrazine fuel cells.",
"title": ""
},
{
"docid": "neg:1840370_1",
"text": "Concepts are the elementary units of reason and linguistic meaning. They are conventional and relatively stable. As such, they must somehow be the result of neural activity in the brain. The questions are: Where? and How? A common philosophical position is that all concepts-even concepts about action and perception-are symbolic and abstract, and therefore must be implemented outside the brain's sensory-motor system. We will argue against this position using (1) neuroscientific evidence; (2) results from neural computation; and (3) results about the nature of concepts from cognitive linguistics. We will propose that the sensory-motor system has the right kind of structure to characterise both sensory-motor and more abstract concepts. Central to this picture are the neural theory of language and the theory of cogs, according to which, brain structures in the sensory-motor regions are exploited to characterise the so-called \"abstract\" concepts that constitute the meanings of grammatical constructions and general inference patterns.",
"title": ""
},
{
"docid": "neg:1840370_2",
"text": "Advanced silicon (Si) node technology development is moving to 10/7nm technology and pursuing die size reduction, efficiency enhancement and lower power consumption for mobile applications in the semiconductor industry. The flip chip chip scale package (fcCSP) has been viewed as an attractive solution to achieve the miniaturization of die size, finer bump pitch, finer line width and spacing (LW/LS) substrate requirements, and is widely adopted in mobile devices to satisfy the increasing demands of higher performance, higher bandwidth, and lower power consumption as well as multiple functions. The utilization of mass reflow (MR) chip attach process in a fcCSP with copper (Cu) pillar bumps, embedded trace substrate (ETS) technology and molded underfill (MUF) is usually viewed as the cost-efficient solution. However, when finer bump pitch and LW/LS with an escaped trace are designed in flip chip MR process, a higher risk of a bump to trace short can occur. In order to reduce the risk of bump to trace short as well as extremely low-k (ELK) damage in a fcCSP with advanced Si node, the thermo-compression bonding (TCB) and TCB with non-conductive paste (TCNCP) have been adopted, although both methodologies will cause a higher assembly cost due to the lower units per hour (UPH) assembly process. For the purpose of delivering a cost-effective chip attach process as compared to TCB/TCNCP methodologies as well as reducing the risk of bump to trace as compared to the MR process, laser assisted bonding (LAB) chip attach methodology was studied in a 15x15mm fcCSP with 10nm backend process daisy-chain die for this paper. Using LAB chip attach technology can increase the UPH by more than 2-times over TCB and increase the UPH 5-times compared to TCNCP. To realize the ELK performance of a 10nm fcCSP with fine bump pitch of $60 \\mu \\mathrm{m}$ and $90 \\mu \\mathrm{m}$ as well as 2-layer ETS with two escaped traces design, the quick temperature cycling (QTC) test was performed after the LAB chip attach process. The comparison of polyimide (PI) layer Cu pillar bumps to non-PI Cu pillar bumps (without a PI layer) will be discussed to estimate the 10nm ELK performance. The evaluated result shows that the utilization of LAB can not only achieve a bump pitch reduction with a finer LW/LS substrate with escaped traces in the design, but it also validates ELK performance and Si node reduction. Therefore, the illustrated LAB chip attach processes examined here can guarantee the assembly yield with less ELK damage risk in a 10nm fcCSP with finer bump pitch and substrate finer LW/LS design in the future.",
"title": ""
},
{
"docid": "neg:1840370_3",
"text": "Child maltreatment is a pervasive problem in our society that has long-term detrimental consequences to the development of the affected child such as future brain growth and functioning. In this paper, we surveyed empirical evidence on the neuropsychological effects of child maltreatment, with a special emphasis on emotional, behavioral, and cognitive process–response difficulties experienced by maltreated children. The alteration of the biochemical stress response system in the brain that changes an individual’s ability to respond efficiently and efficaciously to future stressors is conceptualized as the traumatic stress response. Vulnerable brain regions include the hypothalamic–pituitary–adrenal axis, the amygdala, the hippocampus, and prefrontal cortex and are linked to children’s compromised ability to process both emotionally-laden and neutral stimuli in the future. It is suggested that information must be garnered from varied literatures to conceptualize a research framework for the traumatic stress response in maltreated children. This research framework suggests an altered developmental trajectory of information processing and emotional dysregulation, though much debate still exists surrounding the correlational nature of empirical studies, the potential of resiliency following childhood trauma, and the extent to which early interventions may facilitate recovery.",
"title": ""
},
{
"docid": "neg:1840370_4",
"text": "Although amyotrophic lateral sclerosis and its variants are readily recognised by neurologists, about 10% of patients are misdiagnosed, and delays in diagnosis are common. Prompt diagnosis, sensitive communication of the diagnosis, the involvement of the patient and their family, and a positive care plan are prerequisites for good clinical management. A multidisciplinary, palliative approach can prolong survival and maintain quality of life. Treatment with riluzole improves survival but has a marginal effect on the rate of functional deterioration, whereas non-invasive ventilation prolongs survival and improves or maintains quality of life. In this Review, we discuss the diagnosis, management, and how to cope with impaired function and end of life on the basis of our experience, the opinions of experts, existing guidelines, and clinical trials. We highlight the need for research on the effectiveness of gastrostomy, access to non-invasive ventilation and palliative care, communication between the care team, the patient and his or her family, and recognition of the clinical and social effects of cognitive impairment. We recommend that the plethora of evidence-based guidelines should be compiled into an internationally agreed guideline of best practice.",
"title": ""
},
{
"docid": "neg:1840370_5",
"text": "One of the overriding interests of the literature on health care economics is to discover where personal choice in market economies end and corrective government intervention should begin. Our study addresses this question in the context of John Stuart Mill's utilitarian principle of harm. Our primary objective is to determine whether public policy interventions concerning more than 35,000 online pharmacies worldwide are necessary and efficient compared to traditional market-oriented approaches. Secondly, we seek to determine whether government interference could enhance personal utility maximization, despite its direct and indirect (unintended) costs on medical e-commerce. This study finds that containing the negative externalities of medical e-commerce provides the most compelling raison d'etre of government interference. It asserts that autonomy and paternalism need not be mutually exclusive, despite their direct and indirect consequences on individual choice and decision-making processes. Valuable insights derived from Mill's principle should enrich theory-building in health care economics and policy.",
"title": ""
},
{
"docid": "neg:1840370_6",
"text": "The empirical mode decomposition (EMD) proposed by Huang et al. in 1998 shows remarkably effective in analyzing nonlinear signals. It adaptively represents nonstationary signals as sums of zero-mean amplitude modulation-frequency modulation (AM-FM) components by iteratively conducting the sifting process. How to determine the boundary conditions of the cubic spline when constructing the envelopes of data is the critical issue of the sifting process. A simple bound hit process technique is presented in this paper which constructs two periodic series from the original data by even and odd extension and then builds the envelopes using cubic spline with periodic boundary condition. The EMD is conducted fluently without any assumptions of the processed data by this approach. An example is presented to pick out the weak modulation of internal waves from an Envisat ASAR image by EMD with the boundary process technique",
"title": ""
},
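The passage above attributes the quality of the sifting process to how the cubic-spline envelopes are handled at the signal boundaries, via periodic even/odd extensions. The sketch below shows a generic envelope step in which a simple mirror (even) extension stands in for that boundary treatment; it is an illustration of the idea, not the paper's exact technique.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def mean_envelope(signal):
    """Upper/lower cubic-spline envelopes and their mean for one sifting-style step.

    The signal is mirrored on both sides (even extension) before the extrema are
    located, so the splines are well behaved at the boundaries; only the central
    (original) segment is returned.  This is a stand-in for the periodic-extension
    boundary handling described in the passage.
    """
    n = len(signal)
    ext = np.concatenate([signal[::-1], signal, signal[::-1]])   # mirrored extension
    idx = np.arange(-n, 2 * n)                                   # extended sample index

    maxima = argrelextrema(ext, np.greater)[0]
    minima = argrelextrema(ext, np.less)[0]

    upper = CubicSpline(idx[maxima], ext[maxima])(np.arange(n))
    lower = CubicSpline(idx[minima], ext[minima])(np.arange(n))
    return upper, lower, 0.5 * (upper + lower)

# Toy usage on a two-tone signal: the mean envelope approximates the slow component.
t = np.linspace(0, 1, 512)
sig = np.sin(2 * np.pi * 5 * t) + 0.4 * np.sin(2 * np.pi * 40 * t)
upper, lower, mean = mean_envelope(sig)
print(mean.shape, float(np.abs(mean).max()))
```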
{
"docid": "neg:1840370_7",
"text": "Augmented and virtual reality have the potential of being indistinguishable from the real world. Holographic displays, including head mounted units, support this vision by creating rich stereoscopic scenes, with objects that appear to float in thin air - often within arm's reach. However, one has but to reach out and grasp nothing but air to destroy the suspension of disbelief. Snake-charmer is an attempt to provide physical form to virtual objects by revisiting the concept of Robotic Graphics or Encountered-type Haptic interfaces with current commodity hardware. By means of a robotic arm, Snake-charmer brings physicality to a virtual scene and explores what it means to truly interact with an object. We go beyond texture and position simulation and explore what it means to have a physical presence inside a virtual scene. We demonstrate how to render surface characteristics beyond texture and position, including temperature; how to physically move objects; and how objects can physically interact with the user's hand. We analyze our implementation, present the performance characteristics, and provide guidance for the construction of future physical renderers.",
"title": ""
},
{
"docid": "neg:1840370_8",
"text": "Despite the widespread use of social media by students and its increased use by instructors, very little empirical evidence is available concerning the impact of social media use on student learning and engagement. This paper describes our semester-long experimental study to determine if using Twitter – the microblogging and social networking platform most amenable to ongoing, public dialogue – for educationally relevant purposes can impact college student engagement and grades. A total of 125 students taking a first year seminar course for pre-health professional majors participated in this study (70 in the experimental group and 55 in the control group). With the experimental group, Twitter was used for various types of academic and co-curricular discussions. Engagement was quantified by using a 19-item scale based on the National Survey of Student Engagement. To assess differences in engagement and grades, we used mixed effects analysis of variance (ANOVA) models, with class sections nested within treatment groups. We also conducted content analyses of samples of Twitter exchanges. The ANOVA results showed that the experimental group had a significantly greater increase in engagement than the control group, as well as higher semester grade point averages. Analyses of Twitter communications showed that students and faculty were both highly engaged in the learning process in ways that transcended traditional classroom activities. This study provides experimental evidence that Twitter can be used as an educational tool to help engage students and to mobilize faculty into a more active and participatory role.",
"title": ""
},
{
"docid": "neg:1840370_9",
"text": "With decreasing costs of high-quality surveillance systems, human activity detection and tracking has become increasingly practical. Accordingly, automated systems have been designed for numerous detection tasks, but the task of detecting illegally parked vehicles has been left largely to the human operators of surveillance systems. We propose a methodology for detecting this event in real time by applying a novel image projection that reduces the dimensionality of the data and, thus, reduces the computational complexity of the segmentation and tracking processes. After event detection, we invert the transformation to recover the original appearance of the vehicle and to allow for further processing that may require 2-D data. We evaluate the performance of our algorithm using the i-LIDS vehicle detection challenge datasets as well as videos we have taken ourselves. These videos test the algorithm in a variety of outdoor conditions, including nighttime video and instances of sudden changes in weather.",
"title": ""
},
{
"docid": "neg:1840370_10",
"text": "Fluorescamine is a new reagent for the detection of primary amines in the picomole range. Its reaction with amines is almost instantaneous at room temperature in aqueous media. The products are highly fluorescent, whereas the reagent and its degradation products are nonfluorescent. Applications are discussed.",
"title": ""
},
{
"docid": "neg:1840370_11",
"text": "The first method that was developed to deal with the SLAM problem is based on the extended Kalman filter, EKF SLAM. However this approach cannot be applied to a large environments because of the quadratic complexity and data association problem. The second approach to address the SLAM problem is based on the Rao-Blackwellized Particle filter FastSLAM, which follows a large number of hypotheses that represent the different possible trajectories, each trajectory carries its own map, its complexity increase logarithmically with the number of landmarks in the map. In this paper we will present the result of an implementation of the FastSLAM 2.0 on an open multimedia applications processor, based on a monocular camera as an exteroceptive sensor. A parallel implementation of this algorithm was achieved. Results aim to demonstrate that an optimized algorithm implemented on a low cost architecture is suitable to design an embedded system for SLAM applications.",
"title": ""
},
{
"docid": "neg:1840370_12",
"text": "Requirement engineering is an integral part of the software development lifecycle since the basis for developing successful software depends on comprehending its requirements in the first place. Requirement engineering involves a number of processes for gathering requirements in accordance with the needs and demands of users and stakeholders of the software product. In this paper, we have reviewed the prominent processes, tools and technologies used in the requirement gathering phase. The study is useful to perceive the current state of the affairs pertaining to the requirement engineering research and to understand the strengths and limitations of the existing requirement engineering techniques. The study also summarizes the best practices and how to use a blend of the requirement engineering techniques as an effective methodology to successfully conduct the requirement engineering task. The study also highlights the importance of security requirements as though they are part of the nonfunctional requirement, yet are naturally considered fundamental to secure software development.",
"title": ""
},
{
"docid": "neg:1840370_13",
"text": "Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. Despite important advances, one necessary ingredient for natural interaction is still missing–emotions. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for the computer in several applications. This paper explores new ways of human-computer interaction that enable the computer to be more aware of the user’s emotional and attentional expressions. We present the basic research in the field and the recent advances into the emotion recognition from facial, voice, and pshysiological signals, where the different modalities are treated independently. We then describe the challenging problem of multimodal emotion recognition and we advocate the use of probabilistic graphical models when fusing the different modalities. We also discuss the difficult issues of obtaining reliable affective data, obtaining ground truth for emotion recognition, and the use of unlabeled data.",
"title": ""
},
{
"docid": "neg:1840370_14",
"text": "This work focuses on algorithms which learn from examples to perform multiclass text and speech categorization tasks. We rst show how to extend the standard notion of classiication by allowing each instance to be associated with multiple labels. We then discuss our approach for multiclass multi-label text categorization which is based on a new and improved family of boosting algorithms. We describe in detail an implementation, called BoosTexter, of the new boosting algorithms for text categorization tasks. We present results comparing the performance of BoosTexter and a number of other text-categorization algorithms on a variety of tasks. We conclude by describing the application of our system to automatic call-type identiication from unconstrained spoken customer responses.",
"title": ""
},
{
"docid": "neg:1840370_15",
"text": "The Web today provides a corpus of design examples unparalleled in human history. However, leveraging existing designs to produce new pages is currently difficult. This paper introduces the Bricolage algorithm for automatically transferring design and content between Web pages. Bricolage introduces a novel structuredprediction technique that learns to create coherent mappings between pages by training on human-generated exemplars. The produced mappings can then be used to automatically transfer the content from one page into the style and layout of another. We show that Bricolage can learn to accurately reproduce human page mappings, and that it provides a general, efficient, and automatic technique for retargeting content between a variety of real Web pages.",
"title": ""
},
{
"docid": "neg:1840370_16",
"text": "A 5-year clinical and laboratory study of Nigerian children with renal failure (RF) was performed to determine the factors that limited their access to dialysis treatment and what could be done to improve access. There were 48 boys and 33 girls (aged 20 days to 15 years). Of 81 RF patients, 55 were eligible for dialysis; 33 indicated ability to afford dialysis, but only 6 were dialyzed, thus giving a dialysis access rate of 10.90% (6/55). Ability to bear dialysis cost/dialysis accessibility ratio was 5.5:1 (33/6). Factors that limited access to dialysis treatment in our patients included financial restrictions from parents (33%), no parental consent for dialysis (6%), lack or failure of dialysis equipment (45%), shortage of dialysis personnel (6%), reluctance of renal staff to dialyze (6%), and late presentation in hospital (4%). More deaths were recorded among undialyzed than dialyzed patients (P<0.01); similarly, undialyzed patients had more deaths compared with RF patients who required no dialysis (P<0.025). Since most of our patients could not be dialyzed owing to a range of factors, preventive nephrology is advocated to reduce the morbidity and mortality from RF due to preventable diseases.",
"title": ""
},
{
"docid": "neg:1840370_17",
"text": "IMSI Catchers are tracking devices that break the privacy of the subscribers of mobile access networks, with disruptive effects to both the communication services and the trust and credibility of mobile network operators. Recently, we verified that IMSI Catcher attacks are really practical for the state-of-the-art 4G/LTE mobile systems too. Our IMSI Catcher device acquires subscription identities (IMSIs) within an area or location within a few seconds of operation and then denies access of subscribers to the commercial network. Moreover, we demonstrate that these attack devices can be easily built and operated using readily available tools and equipment, and without any programming. We describe our experiments and procedures that are based on commercially available hardware and unmodified open source software.",
"title": ""
},
{
"docid": "neg:1840370_18",
"text": "L'hamartome lipomateux superficiel de Hoffmann-Zurhelle est une tumeur bénigne souvent congénitale. Histologiquement, il est caractérisé par la présence hétérotopique de cellules adipeuses quelquefois lipoblastiques autour des trajets vasculaires dermiques. Nous rapportons une nouvelle observation de forme multiple à révélation tardive chez une femme âgée de 31 ans sans antécédents pathologiques notables qui a été adressée à la consultation pour des papules et tumeurs asymptomatiques de couleur chaire se regroupent en placards à disposition linéaire et zostèriforme au niveau de la face externe de la cuisse droite depuis l'âge de 13 ans, augmentant progressivement de taille. L'étude histologique d'un fragment biopsique avait montré un épiderme régulier, plicaturé et kératinisant, soulevé par un tissu fibro-adipeux abondant incluant quelques vaisseaux sanguins aux dépens du derme moyen. Ces données cliniques et histologiques ont permis de retenir le diagnostic d'hamartome lipomateux superficiel. Une exérèse chirurgicale des tumeurs de grande taille a été proposée complété par le laser CO2 pour le reste de lésions cutanées. L'hamartome lipomateux superficiel est une lésion bénigne sans potentiel de malignité. L'exérèse chirurgicale peut être proposée si la lésion est gênante ou dans un but essentiellement esthétique. Pan African Medical Journal. 2015; 21:31 doi:10.11604/pamj.2015.21.31.4773 This article is available online at: http://www.panafrican-med-journal.com/content/article/21/31/full/ © Sanaa Krich et al. The Pan African Medical Journal ISSN 1937-8688. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Pan African Medical Journal – ISSN: 19378688 (www.panafrican-med-journal.com) Published in partnership with the African Field Epidemiology Network (AFENET). (www.afenet.net) Case report Open Access",
"title": ""
},
{
"docid": "neg:1840370_19",
"text": "cplint on SWISH is a web application that allows users to perform reasoning tasks on probabilistic logic programs. Both inference and learning systems can be performed: conditional probabilities with exact, rejection sampling and Metropolis-Hasting methods. Moreover, the system now allows hybrid programs, i.e., programs where some of the random variables are continuous. To perform inference on such programs likelihood weighting and particle filtering are used. cplint on SWISH is also able to sample goals’ arguments and to graph the results. This paper reports on advances and new features of cplint on SWISH, including the capability of drawing the binary decision diagrams created during the inference processes.",
"title": ""
}
] |
1840371 | Suspecting Less and Doing Better: New Insights on Palmprint Identification for Faster and More Accurate Matching | [
{
"docid": "pos:1840371_0",
"text": "Two-dimensional (2-D) hand-geometry features carry limited discriminatory information and therefore yield moderate performance when utilized for personal identification. This paper investigates a new approach to achieve performance improvement by simultaneously acquiring and combining three-dimensional (3-D) and 2-D features from the human hand. The proposed approach utilizes a 3-D digitizer to simultaneously acquire intensity and range images of the presented hands of the users in a completely contact-free manner. Two new representations that effectively characterize the local finger surface features are extracted from the acquired range images and are matched using the proposed matching metrics. In addition, the characterization of 3-D palm surface using SurfaceCode is proposed for matching a pair of 3-D palms. The proposed approach is evaluated on a database of 177 users acquired in two sessions. The experimental results suggest that the proposed 3-D hand-geometry features have significant discriminatory information to reliably authenticate individuals. Our experimental results demonstrate that consolidating 3-D and 2-D hand-geometry features results in significantly improved performance that cannot be achieved with the traditional 2-D hand-geometry features alone. Furthermore, this paper also investigates the performance improvement that can be achieved by integrating five biometric features, i.e., 2-D palmprint, 3-D palmprint, finger texture, along with 3-D and 2-D hand-geometry features, that are simultaneously extracted from the user's hand presented for authentication.",
"title": ""
}
] | [
{
"docid": "neg:1840371_0",
"text": "A tutorial on the design and development of automatic speakerrecognition systems is presented. Automatic speaker recognition is the use of a machine to recognize a person from a spoken phrase. These systems can operate in two modes: to identify a particular person or toverify a person’s claimed identity. Speech processing and the basic components of automatic speakerrecognition systems are shown and design tradeoffs are discussed. Then, a new automatic speaker-recognition system is given. This recognizer performs with 98.9% correct identification. Last, the performances of various systems are compared.",
"title": ""
},
{
"docid": "neg:1840371_1",
"text": "We consider the problem of mining association rules on a shared-nothing multiprocessor. We present three algorithms that explore a spectrum of trade-oos between computation, communication, memory usage, synchronization, and the use of problem-speciic information. The best algorithm exhibits near perfect scaleup behavior, yet requires only minimal overhead compared to the current best serial algorithm.",
"title": ""
},
{
"docid": "neg:1840371_2",
"text": "Road traffic accidents are among the top leading causes of deaths and injuries of various levels. Ethiopia is experiencing highest rate of such accidents resulting in fatalities and various levels of injuries. Addis Ababa, the capital city of Ethiopia, takes the lion’s share of the risk having higher number of vehicles and traffic and the cost of these fatalities and injuries has a great impact on the socio-economic development of a society. This research is focused on developing adaptive regression trees to build a decision support system to handle road traffic accident analysis for Addis Ababa city traffic office. The study focused on injury severity levels resulting from an accident using real data obtained from the Addis Ababa traffic office. Empirical results show that the developed models could classify accidents within reasonable accuracy.",
"title": ""
},
{
"docid": "neg:1840371_3",
"text": "This article presents a method for rectifying and stabilising video from cell-phones with rolling shutter (RS) cameras. Due to size constraints, cell-phone cameras have constant, or near constant focal length, making them an ideal application for calibrated projective geometry. In contrast to previous RS rectification attempts that model distortions in the image plane, we model the 3D rotation of the camera. We parameterise the camera rotation as a continuous curve, with knots distributed across a short frame interval. Curve parameters are found using non-linear least squares over inter-frame correspondences from a KLT tracker. By smoothing a sequence of reference rotations from the estimated curve, we can at a small extra cost, obtain a high-quality image stabilisation. Using synthetic RS sequences with associated ground-truth, we demonstrate that our rectification improves over two other methods. We also compare our video stabilisation with the methods in iMovie and Deshaker.",
"title": ""
},
{
"docid": "neg:1840371_4",
"text": "A boomerang-shaped alar base excision is described to narrow the nasal base and correct the excessive alar flare. The boomerang excision combined the external alar wedge resection with an internal vestibular floor excision. The internal excision was inclined 30 to 45 degrees laterally to form the inner limb of the boomerang. The study included 46 patients presenting with wide nasal base and excessive alar flaring. All cases were followed for a mean period of 18 months (range, 8 to 36 months). The laterally oriented vestibular floor excision allowed for maximum preservation of the natural curvature of the alar rim where it meets the nostril floor and upon its closure resulted in a considerable medialization of alar lobule, which significantly reduced the amount of alar flare and the amount of external alar excision needed. This external alar excision measured, on average, 3.8 mm (range, 2 to 8 mm), which is significantly less than that needed when a standard vertical internal excision was used ( P < 0.0001). Such conservative external excisions eliminated the risk of obliterating the natural alar-facial crease, which did not occur in any of our cases. No cases of postoperative bleeding, infection, or vestibular stenosis were encountered. Keloid or hypertrophic scar formation was not encountered; however, dermabrasion of the scars was needed in three (6.5%) cases to eliminate apparent suture track marks. The boomerang alar base excision proved to be a safe and effective technique for narrowing the nasal base and elimination of the excessive flaring and resulted in a natural, well-proportioned nasal base with no obvious scarring.",
"title": ""
},
{
"docid": "neg:1840371_5",
"text": "Camera tracking is an important issue in many computer vision and robotics applications, such as, augmented reality and Simultaneous Localization And Mapping (SLAM). In this paper, a feature-based technique for monocular camera tracking is proposed. The proposed approach is based on tracking a set of sparse features, which are successively tracked in a stream of video frames. In the developed system, camera initially views a chessboard with known cell size for few frames to be enabled to construct initial map of the environment. Thereafter, Camera pose estimation for each new incoming frame is carried out in a framework that is merely working with a set of visible natural landmarks. Estimation of 6-DOF camera pose parameters is performed using a particle filter. Moreover, recovering depth of newly detected landmarks, a linear triangulation method is used. The proposed method is applied on real world videos and positioning error of the camera pose is less than 3 cm in average that indicates effectiveness and accuracy of the proposed method.",
"title": ""
},
{
"docid": "neg:1840371_6",
"text": "Siamese-like networks, Streetscore-CNN (SS-CNN) and Ranking SS-CNN, to predict pairwise comparisons Figure 1: User Interface for Crowdsourced Online Game Performance Analysis • SS-CNN: We calculate the % of pairwise comparisons in test set predicted correctly by (1) Softmax of output neurons in final layer (2) comparing TrueSkill scores [2] obtained from synthetic pairwise comparisons from the CNN (3) extracting features from penultimate layer of CNN and feeding pairwise feature representations to a RankSVM [3] • RSS-CNN: We compare the ranking function outputs for both images in a test pair to decide which image wins, and calculate the binary prediction accuracy.",
"title": ""
},
{
"docid": "neg:1840371_7",
"text": "In an electronic warfare (EW) battlefield environment, it is highly necessary for a fighter aircraft to intercept and identify the several interleaved radar signals that it receives from the surrounding emitters, so as to prepare itself for countermeasures. The main function of the Electronic Support Measure (ESM) receiver is to receive, measure, deinterleave pulses and then identify alternative threat emitters. Deinterleaving of radar signals is based on time of arrival (TOA) analysis and the use of the sequential difference (SDIF) histogram method for determining the pulse repetition interval (PRI), which is an important pulse parameter. Once the pulse repetition intervals are determined, check for the existence of staggered PRI (level-2) is carried out, implemented in MATLAB. Keywordspulse deinterleaving, pulse repetition interval, stagger PRI, sequential difference histogram, time of arrival.",
"title": ""
},
{
"docid": "neg:1840371_8",
"text": "Two-stream Convolutional Networks (ConvNets) have shown strong performance for human action recognition in videos. Recently, Residual Networks (ResNets) have arisen as a new technique to train extremely deep architectures. In this paper, we introduce spatiotemporal ResNets as a combination of these two approaches. Our novel architecture generalizes ResNets for the spatiotemporal domain by introducing residual connections in two ways. First, we inject residual connections between the appearance and motion pathways of a two-stream architecture to allow spatiotemporal interaction between the two streams. Second, we transform pretrained image ConvNets into spatiotemporal networks by equipping them with learnable convolutional filters that are initialized as temporal residual connections and operate on adjacent feature maps in time. This approach slowly increases the spatiotemporal receptive field as the depth of the model increases and naturally integrates image ConvNet design principles. The whole model is trained end-to-end to allow hierarchical learning of complex spatiotemporal features. We evaluate our novel spatiotemporal ResNet using two widely used action recognition benchmarks where it exceeds the previous state-of-the-art.",
"title": ""
},
{
"docid": "neg:1840371_9",
"text": "This paper describes a machine learningbased approach that uses word embedding features to recognize drug names from biomedical texts. As a starting point, we developed a baseline system based on Conditional Random Field (CRF) trained with standard features used in current Named Entity Recognition (NER) systems. Then, the system was extended to incorporate new features, such as word vectors and word clusters generated by the Word2Vec tool and a lexicon feature from the DINTO ontology. We trained the Word2vec tool over two different corpus: Wikipedia and MedLine. Our main goal is to study the effectiveness of using word embeddings as features to improve performance on our baseline system, as well as to analyze whether the DINTO ontology could be a valuable complementary data source integrated in a machine learning NER system. To evaluate our approach and compare it with previous work, we conducted a series of experiments on the dataset of SemEval-2013 Task 9.1 Drug Name Recognition.",
"title": ""
},
{
"docid": "neg:1840371_10",
"text": "User behaviour targeting is essential in online advertising. Compared with sponsored search keyword targeting and contextual advertising page content targeting, user behaviour targeting builds users’ interest profiles via tracking their online behaviour and then delivers the relevant ads according to each user’s interest, which leads to higher targeting accuracy and thus more improved advertising performance. The current user profiling methods include building keywords and topic tags or mapping users onto a hierarchical taxonomy. However, to our knowledge, there is no previous work that explicitly investigates the user online visits similarity and incorporates such similarity into their ad response prediction. In this work, we propose a general framework which learns the user profiles based on their online browsing behaviour, and transfers the learned knowledge onto prediction of their ad response. Technically, we propose a transfer learning model based on the probabilistic latent factor graphic models, where the users’ ad response profiles are generated from their online browsing profiles. The large-scale experiments based on real-world data demonstrate significant improvement of our solution over some strong baselines.",
"title": ""
},
{
"docid": "neg:1840371_11",
"text": "Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of “one-shot learning.” Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory locationbased focusing mechanisms.",
"title": ""
},
{
"docid": "neg:1840371_12",
"text": "PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs which we make available at https://github.com/openai/pixel-cnn. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications.",
"title": ""
},
{
"docid": "neg:1840371_13",
"text": "Cell-based sensing represents a new paradigm for performing direct and accurate detection of cell- or tissue-specific responses by incorporating living cells or tissues as an integral part of a sensor. Here we report a new magnetic cell-based sensing platform by combining magnetic sensors implemented in the complementary metal-oxide-semiconductor (CMOS) integrated microelectronics process with cardiac progenitor cells that are differentiated directly on-chip. We show that the pulsatile movements of on-chip cardiac progenitor cells can be monitored in a real-time manner. Our work provides a new low-cost approach to enable high-throughput screening systems as used in drug development and hand-held devices for point-of-care (PoC) biomedical diagnostic applications.",
"title": ""
},
{
"docid": "neg:1840371_14",
"text": "When we are investigating an object in a data set, which itself may or may not be an outlier, can we identify unusual (i.e., outlying) aspects of the object? In this paper, we identify the novel problem of mining outlying aspects on numeric data. Given a query object $$o$$ o in a multidimensional numeric data set $$O$$ O , in which subspace is $$o$$ o most outlying? Technically, we use the rank of the probability density of an object in a subspace to measure the outlyingness of the object in the subspace. A minimal subspace where the query object is ranked the best is an outlying aspect. Computing the outlying aspects of a query object is far from trivial. A naïve method has to calculate the probability densities of all objects and rank them in every subspace, which is very costly when the dimensionality is high. We systematically develop a heuristic method that is capable of searching data sets with tens of dimensions efficiently. Our empirical study using both real data and synthetic data demonstrates that our method is effective and efficient.",
"title": ""
},
{
"docid": "neg:1840371_15",
"text": "Near Field Communication (NFC) enables physically proximate devices to communicate over very short ranges in a peer-to-peer manner without incurring complex network configuration overheads. However, adoption of NFC-enabled applications has been stymied by the low levels of penetration of NFC hardware. In this paper, we address the challenge of enabling NFC-like capability on the existing base of mobile phones. To this end, we develop Dhwani, a novel, acoustics-based NFC system that uses the microphone and speakers on mobile phones, thus eliminating the need for any specialized NFC hardware. A key feature of Dhwani is the JamSecure technique, which uses self-jamming coupled with self-interference cancellation at the receiver, to provide an information-theoretically secure communication channel between the devices. Our current implementation of Dhwani achieves data rates of up to 2.4 Kbps, which is sufficient for most existing NFC applications.",
"title": ""
},
{
"docid": "neg:1840371_16",
"text": "This paper proposes two rectangular ring planar monopole antennas for wideband and ultra-wideband applications. Simple planar rectangular rings are used to design the planar antennas. These rectangular rings are designed in a way to achieve the wideband operations. The operating frequency band ranges from 1.85 GHz to 4.95 GHz and 3.12 GHz to 14.15 GHz. The gain varies from 1.83 dBi to 2.89 dBi for rectangular ring wideband antenna and 1.89 dBi to 5.2 dBi for rectangular ring ultra-wideband antenna. The design approach and the results are discussed.",
"title": ""
},
{
"docid": "neg:1840371_17",
"text": "This paper explores the role of the business model in capturing value from early stage technology. A successful business model creates a heuristic logic that connects technical potential with the realization of economic value. The business model unlocks latent value from a technology, but its logic constrains the subsequent search for new, alternative models for other technologies later on—an implicit cognitive dimension overlooked in most discourse on the topic. We explore the intellectual roots of the concept, offer a working definition and show how the Xerox Corporation arose by employing an effective business model to commercialize a technology rejected by other leading companies of the day. We then show the long shadow that this model cast upon Xerox’s later management of selected spin-off companies from Xerox PARC. Xerox evaluated the technical potential of these spin-offs through its own business model, while those spin-offs that became successful did so through evolving business models that came to differ substantially from that of Xerox. The search and learning for an effective business model in failed ventures, by contrast, were quite limited.",
"title": ""
},
{
"docid": "neg:1840371_18",
"text": "Banks in Nigeria need to understand the perceptual difference in both male and female employees to better develop adequate policy on sexual harassment. This study investigated the perceptual differences on sexual harassment among male and female bank employees in two commercial cities (Kano and Lagos) of Nigeria.Two hundred and seventy five employees (149 males, 126 females) were conveniently sampled for this study. A survey design with a questionnaire adapted from Sexual Experience Questionnaire (SEQ) comprises of three dimension scalesof sexual harassment was used. The hypotheses were tested with independent samples t-test. The resultsindicated no perceptual differences in labelling sexual harassment clues between male and female bank employees in Nigeria. Thus, the study recommends that bank managers should support and establish the tone for sexual harassment-free workplace. KeywordsGender Harassment, Sexual Coercion, Unwanted Sexual Attention, Workplace.",
"title": ""
}
] |
1840372 | In-DBMS Sampling-based Sub-trajectory Clustering | [
{
"docid": "pos:1840372_0",
"text": "The increasing pervasiveness of location-acquisition technologies has enabled collection of huge amount of trajectories for almost any kind of moving objects. Discovering useful patterns from their movement behaviors can convey valuable knowledge to a variety of critical applications. In this light, we propose a novel concept, called gathering, which is a trajectory pattern modeling various group incidents such as celebrations, parades, protests, traffic jams and so on. A key observation is that these incidents typically involve large congregations of individuals, which form durable and stable areas with high density. In this work, we first develop a set of novel techniques to tackle the challenge of efficient discovery of gathering patterns on archived trajectory dataset. Afterwards, since trajectory databases are inherently dynamic in many real-world scenarios such as traffic monitoring, fleet management and battlefield surveillance, we further propose an online discovery solution by applying a series of optimization schemes, which can keep track of gathering patterns while new trajectory data arrive. Finally, the effectiveness of the proposed concepts and the efficiency of the approaches are validated by extensive experiments based on a real taxicab trajectory dataset.",
"title": ""
}
] | [
{
"docid": "neg:1840372_0",
"text": "Continuum robotic manipulators articulate due to their inherent compliance. Tendon actuation leads to compression of the manipulator, extension of the actuators, and is limited by the practical constraint that tendons cannot support compression. In light of these observations, we present a new linear model for transforming desired beam configuration to tendon displacements and vice versa. We begin from first principles in solid mechanics by analyzing the effects of geometrically nonlinear tendon loads. These loads act both distally at the termination point and proximally along the conduit contact interface. The resulting model simplifies to a linear system including only the bending and axial modes of the manipulator as well as the actuator compliance. The model is then manipulated to form a concise mapping from beam configuration-space parameters to n redundant tendon displacements via the internal loads and strains experienced by the system. We demonstrate the utility of this model by implementing an optimal feasible controller. The controller regulates axial strain to a constant value while guaranteeing positive tendon forces and minimizing their magnitudes over a range of articulations. The mechanics-based model from this study provides insight as well as performance gains for this increasingly ubiquitous class of manipulators.",
"title": ""
},
{
"docid": "neg:1840372_1",
"text": "We propose a novel approach for solving the approximate nearest neighbor search problem in arbitrary metric spaces. The distinctive feature of our approach is that we can incrementally build a non-hierarchical distributed structure for given metric space data with a logarithmic complexity scaling on the size of the structure and adjustable accuracy probabilistic nearest neighbor queries. The structure is based on a small world graph with vertices corresponding to the stored elements, edges for links between them and the greedy algorithm as base algorithm for searching. Both search and addition algorithms require only local information from the structure. The performed simulation for data in the Euclidian space shows that the structure built using the proposed algorithm has navigable small world properties with logarithmic search complexity at fixed accuracy and has weak (power law) scalability with the dimensionality of the stored data.",
"title": ""
},
{
"docid": "neg:1840372_2",
"text": "Can parents burn out? The aim of this research was to examine the construct validity of the concept of parental burnout and to provide researchers which an instrument to measure it. We conducted two successive questionnaire-based online studies, the first with a community-sample of 379 parents using principal component analyses and the second with a community- sample of 1,723 parents using both principal component analyses and confirmatory factor analyses. We investigated whether the tridimensional structure of the burnout syndrome (i.e., exhaustion, inefficacy, and depersonalization) held in the parental context. We then examined the specificity of parental burnout vis-à-vis professional burnout assessed with the Maslach Burnout Inventory, parental stress assessed with the Parental Stress Questionnaire and depression assessed with the Beck Depression Inventory. The results support the validity of a tri-dimensional burnout syndrome including exhaustion, inefficacy and emotional distancing with, respectively, 53.96 and 55.76% variance explained in study 1 and study 2, and reliability ranging from 0.89 to 0.94. The final version of the Parental Burnout Inventory (PBI) consists of 22 items and displays strong psychometric properties (CFI = 0.95, RMSEA = 0.06). Low to moderate correlations between parental burnout and professional burnout, parental stress and depression suggests that parental burnout is not just burnout, stress or depression. The prevalence of parental burnout confirms that some parents are so exhausted that the term \"burnout\" is appropriate. The proportion of burnout parents lies somewhere between 2 and 12%. The results are discussed in light of their implications at the micro-, meso- and macro-levels.",
"title": ""
},
{
"docid": "neg:1840372_3",
"text": "Vegetable quality is frequently referred to size, shape, mass, firmness, color and bruises from which fruits can be classified and sorted. However, technological by small and middle producers implementation to assess this quality is unfeasible, due to high costs of software, equipment as well as operational costs. Based on these considerations, the proposal of this research is to evaluate a new open software that enables the classification system by recognizing fruit shape, volume, color and possibly bruises at a unique glance. The software named ImageJ, compatible with Windows, Linux and MAC/OS, is quite popular in medical research and practices, and offers algorithms to obtain the above mentioned parameters. The software allows calculation of volume, area, averages, border detection, image improvement and morphological operations in a variety of image archive formats as well as extensions by means of “plugins” written in Java.",
"title": ""
},
{
"docid": "neg:1840372_4",
"text": "This second article of our series looks at the process of designing a survey. The design process begins with reviewing the objectives, examining the target population identified by the objectives, and deciding how best to obtain the information needed to address those objectives. However, we also need to consider factors such as determining the appropriate sample size and ensuring the largest possible response rate.To illustrate our ideas, we use the three surveys described in Part 1 of this series to suggest good and bad practice in software engineering survey research.",
"title": ""
},
{
"docid": "neg:1840372_5",
"text": "Cloud computing technologies have matured enough that the service providers are compelled to migrate their services to virtualized infrastructure in cloud data centers. However, moving the computation and network to shared physical infrastructure poses a multitude of questions, both for service providers and for data center owners. In this work, we propose HyViDE - a framework for optimal placement of multiple virtual data center networks on a physical data center network. HyViDE preselects a subset of virtual data center network requests and uses a hybrid strategy for embedding them on the physical data center. Coordinated static and dynamic embedding algorithms are used in this hybrid framework to minimize the rejection of requests and fulfill QoS demands of the embedded networks. HyViDE can employ suitable static and dynamic strategies to meet the objectives of data center owners and customers. Experimental evaluation of our algorithms on HyViDE shows that, the acceptance rate is high with faster servicing of requests.",
"title": ""
},
{
"docid": "neg:1840372_6",
"text": "The primary focus of autonomous driving research is to improve driving accuracy. While great progress has been made, state-of-the-art algorithms still fail at times. Such failures may have catastrophic consequences. It therefore is im- portant that automated cars foresee problems ahead as early as possible. This is also of paramount importance if the driver will be asked to take over. We conjecture that failures do not occur randomly. For instance, driving models may fail more likely at places with heavy traffic, at complex intersections, and/or under adverse weather/illumination conditions. This work presents a method to learn to predict the occurrence of these failures, i.e., to assess how difficult a scene is to a given driving model and to possibly give the human driver an early headsup. A camera- based driving model is developed and trained over real driving datasets. The discrepancies between the model's predictions and the human 'ground-truth' maneuvers were then recorded, to yield the 'failure' scores. Experimental results show that the failure score can indeed be learned and predicted. Thus, our prediction method is able to improve the overall safety of an automated driving model by alerting the human driver timely, leading to better human-vehicle collaborative driving.",
"title": ""
},
{
"docid": "neg:1840372_7",
"text": "A zero voltage switching (ZVS) isolated Sepic converter with active clamp topology is presented. The buck-boost type of active clamp is connected in parallel with the primary side of the transformer to absorb all the energy stored in the transformer leakage inductance and to limit the peak voltage on the switching device. During the transition interval between the main and auxiliary switches, the resonance based on the output capacitor of switch and the transformer leakage inductor can achieve ZVS for both switches. The operational principle, steady state analysis and design consideration of the proposed converter are presented. Finally, the proposed converter is verified by the experimental results based on an 180 W prototype circuit.",
"title": ""
},
{
"docid": "neg:1840372_8",
"text": "Hashing has recently attracted considerable attention for large scale similarity search. However, learning compact codes with good performance is still a challenge. In many cases, the real-world data lies on a low-dimensional manifold embedded in high-dimensional ambient space. To capture meaningful neighbors, a compact hashing representation should be able to uncover the intrinsic geometric structure of the manifold, e.g., the neighborhood relationships between subregions. Most existing hashing methods only consider this issue during mapping data points into certain projected dimensions. When getting the binary codes, they either directly quantize the projected values with a threshold, or use an orthogonal matrix to refine the initial projection matrix, which both consider projection and quantization separately, and will not well preserve the locality structure in the whole learning process. In this paper, we propose a novel hashing algorithm called Locality Preserving Hashing to effectively solve the above problems. Specifically, we learn a set of locality preserving projections with a joint optimization framework, which minimizes the average projection distance and quantization loss simultaneously. Experimental comparisons with other state-of-the-art methods on two large scale datasets demonstrate the effectiveness and efficiency of our method.",
"title": ""
},
{
"docid": "neg:1840372_9",
"text": "Smart phones, tablets, and the rise of the Internet of Things are driving an insatiable demand for wireless capacity. This demand requires networking and Internet infrastructures to evolve to meet the needs of current and future multimedia applications. Wireless HetNets will play an important role toward the goal of using a diverse spectrum to provide high quality-of-service, especially in indoor environments where most data are consumed. An additional tier in the wireless HetNets concept is envisioned using indoor gigabit small-cells to offer additional wireless capacity where it is needed the most. The use of light as a new mobile access medium is considered promising. In this article, we describe the general characteristics of WiFi and VLC (or LiFi) and demonstrate a practical framework for both technologies to coexist. We explore the existing research activity in this area and articulate current and future research challenges based on our experience in building a proof-of-concept prototype VLC HetNet.",
"title": ""
},
{
"docid": "neg:1840372_10",
"text": "Association football is a popular sport, but it is also a big business. From a managerial perspective, the most important decisions that team managers make concern player transfers, so issues related to player valuation, especially the determination of transfer fees and market values, are of major concern. Market values can be understood as estimates of transfer fees—that is, prices that could be paid for a player on the football market—so they play an important role in transfer negotiations. These values have traditionally been estimated by football experts, but crowdsourcing has emerged as an increasingly popular approach to estimating market value. While researchers have found high correlations between crowdsourced market values and actual transfer fees, the process behind crowd judgments is not transparent, crowd estimates are not replicable, and they are updated infrequently because they require the participation of many users. Data analytics may thus provide a sound alternative or a complementary approach to crowd-based estimations of market value. Based on a unique data set that is comprised of 4217 players from the top five European leagues and a period of six playing seasons, we estimate players’ market values using multilevel regression analysis. The regression results suggest that data-driven estimates of market value can overcome several of the crowd’s practical limitations while producing comparably accurate numbers. Our results have important implications for football managers and scouts, as data analytics facilitates precise, objective, and reliable estimates of market value that can be updated at any time. © 2017 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license. ( http://creativecommons.org/licenses/by-nc-nd/4.0/ )",
"title": ""
},
{
"docid": "neg:1840372_11",
"text": "Introductory psychology students (120 females and 120 males) rated attractiveness and fecundity of one of six computer-altered female gures representing three body-weight categories (underweight, normal weight and overweight) and two levels of waist-to-hip ratio (WHR), one in the ideal range (0.72) and one in the non-ideal range (0.86). Both females and males judged underweight gures to be more attractive than normal or overweight gures, regardless of WHR. The female gure with the high WHR (0.86) was judged to be more attractive than the gure with the low WHR (0.72) across all body-weight conditions. Analyses of fecundity ratings revealed an interaction between weight and WHR such that the models did not differ in the normal weight category, but did differ in the underweight (model with WHR of 0.72 was less fecund) and overweight (model with WHR of 0.86 was more fecund) categories. These ndings lend stronger support to sociocultural rather than evolutionary hypotheses.",
"title": ""
},
{
"docid": "neg:1840372_12",
"text": "We present a new class of methods for high-dimensional nonparametric regression and classification called sparse additive models (SpAM). Our methods combine ideas from sparse linear modeling and additive nonparametric regression. We derive an algorithm for fitting the models that is practical and effective even when the number of covariates is larger than the sample size. SpAM is essentially a functional version of the grouped lasso of Yuan and Lin (2006). SpAM is also closely related to the COSSO model of Lin and Zhang (2006), but decouples smoothing and sparsity, enabling the use of arbitrary nonparametric smoothers. We give an analysis of the theoretical properties of sparse additive models, and present empirical results on synthetic and real data, showing that SpAM can be effective in fitting sparse nonparametric models in high dimensional data.",
"title": ""
},
{
"docid": "neg:1840372_13",
"text": "The end-Permian mass extinction was the most severe biodiversity crisis in Earth history. To better constrain the timing, and ultimately the causes of this event, we collected a suite of geochronologic, isotopic, and biostratigraphic data on several well-preserved sedimentary sections in South China. High-precision U-Pb dating reveals that the extinction peak occurred just before 252.28 ± 0.08 million years ago, after a decline of 2 per mil (‰) in δ(13)C over 90,000 years, and coincided with a δ(13)C excursion of -5‰ that is estimated to have lasted ≤20,000 years. The extinction interval was less than 200,000 years and synchronous in marine and terrestrial realms; associated charcoal-rich and soot-bearing layers indicate widespread wildfires on land. A massive release of thermogenic carbon dioxide and/or methane may have caused the catastrophic extinction.",
"title": ""
},
{
"docid": "neg:1840372_14",
"text": "BACKGROUND\nDiabetes is a chronic disease, with high prevalence across many nations, which is characterized by elevated level of blood glucose and risk of acute and chronic complication. The Kingdom of Saudi Arabia (KSA) has one of the highest levels of diabetes prevalence globally. It is well-known that the treatment of diabetes is complex process and requires both lifestyle change and clear pharmacologic treatment plan. To avoid the complication from diabetes, the effective behavioural change and extensive education and self-management is one of the key approaches to alleviate such complications. However, this process is lengthy and expensive. The recent studies on the user of smart phone technologies for diabetes self-management have proven to be an effective tool in controlling hemoglobin (HbA1c) levels especially in type-2 diabetic (T2D) patients. However, to date no reported study addressed the effectiveness of this approach in the in Saudi patients. This study investigates the impact of using mobile health technologies for the self-management of diabetes in Saudi Arabia.\n\n\nMETHODS\nIn this study, an intelligent mobile diabetes management system (SAED), tailored for T2D patients in KSA was developed. A pilot study of the SAED system was conducted in Saudi Arabia with 20 diabetic patients for 6 months duration. The patients were randomly categorized into a control group who did not use the SAED system and an intervention group whom used the SAED system for their diabetes management during this period. At the end of the follow-up period, the HbA1c levels in the patients in both groups were measure together with a diabetes knowledge test was also conducted to test the diabetes awareness of the patients.\n\n\nRESULTS\nThe results of SAED pilot study showed that the patients in the intervention group were able to significantly decrease their HbA1c levels compared to the control group. The SAED system also enhanced the diabetes awareness amongst the patients in the intervention group during the trial period. These outcomes confirm the global studies on the effectiveness of smart phone technologies in diabetes management. The significance of the study is that this was one of the first such studies conducted on Saudi patients and of their acceptance for such technology in their diabetes self-management treatment plans.\n\n\nCONCLUSIONS\nThe pilot study of the SAED system showed that a mobile health technology can significantly improve the HbA1C levels among Saudi diabetic and improve their disease management plans. The SAED system can also be an effective and low-cost solution in improving the quality of life of diabetic patients in the Kingdom considering the high level of prevalence and the increasing economic burden of this disease.",
"title": ""
},
{
"docid": "neg:1840372_15",
"text": "T Internet has increased the flexibility of retailers, allowing them to operate an online arm in addition to their physical stores. The online channel offers potential benefits in selling to customer segments that value the convenience of online shopping, but it also raises new challenges. These include the higher likelihood of costly product returns when customers’ ability to “touch and feel” products is important in determining fit. We study competing retailers that can operate dual channels (“bricks and clicks”) and examine how pricing strategies and physical store assistance levels change as a result of the additional Internet outlet. A central result we obtain is that when differentiation among competing retailers is not too high, having an online channel can actually increase investment in store assistance levels (e.g., greater shelf display, more-qualified sales staff, floor samples) and decrease profits. Consequently, when the decision to open an Internet channel is endogenized, there can exist an asymmetric equilibrium where only one retailer elects to operate an online arm but earns lower profits than its bricks-only rival. We also characterize equilibria where firms open an online channel, even though consumers only use it for research and learning purposes but buy in stores. A number of extensions are discussed, including retail settings where firms carry multiple product categories, shipping and handling costs, and the role of store assistance in impacting consumer perceived benefits.",
"title": ""
},
{
"docid": "neg:1840372_16",
"text": "This work introduces a novel framework for quantifying the presence and strength of recurrent dynamics in video data. Specifically, we provide continuous measures of periodicity (perfect repetition) and quasiperiodicity (superposition of periodic modes with non-commensurate periods), in a way which does not require segmentation, training, object tracking or 1-dimensional surrogate signals. Our methodology operates directly on video data. The approach combines ideas from nonlinear time series analysis (delay embeddings) and computational topology (persistent homology), by translating the problem of finding recurrent dynamics in video data, into the problem of determining the circularity or toroidality of an associated geometric space. Through extensive testing, we show the robustness of our scores with respect to several noise models/levels; we show that our periodicity score is superior to other methods when compared to human-generated periodicity rankings; and furthermore, we show that our quasiperiodicity score clearly indicates the presence of biphonation in videos of vibrating vocal folds, which has never before been accomplished end to end quantitatively.",
"title": ""
},
{
"docid": "neg:1840372_17",
"text": "We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks. This method balances the generator and discriminator during training. Additionally, it provides a new approximate convergence measure, fast and stable training and high visual quality. We also derive a way of controlling the trade-off between image diversity and visual quality. We focus on the image generation task, setting a new milestone in visual quality, even at higher resolutions. This is achieved while using a relatively simple model architecture and a standard training procedure.",
"title": ""
},
{
"docid": "neg:1840372_18",
"text": "Multi-objective evolutionary algorithms (MOEAs) have achieved great progress in recent decades, but most of them are designed to solve unconstrained multi-objective optimization problems. In fact, many real-world multi-objective problems usually contain a number of constraints. To promote the research of constrained multi-objective optimization, we first propose three primary types of difficulty, which reflect the challenges in the real-world optimization problems, to characterize the constraint functions in CMOPs, including feasibility-hardness, convergencehardness and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable constrained multi-objective optimization problems (CMOPs) with three types of parameterized constraint functions according to the proposed three primary types of difficulty. In fact, combination of the three primary constraint functions with different parameters can lead to construct a large variety of CMOPs, whose difficulty can be uniquely defined by a triplet with each of its parameter specifying the level of each primary difficulty type respectively. Furthermore, the number of objectives in this toolkit are able to scale to more than two. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs named DAS-CMOP1-9. To evaluate the proposed test problems, two popular CMOEAs MOEA/D-CDP and NSGA-II-CDP are adopted to test their performances on DAS-CMOP1-9 with different difficulty triplets. The experiment results demonstrate that none of them can solve these problems efficiently, which stimulate us to develop new constrained MOEAs to solve the suggested DAS-CMOPs.",
"title": ""
},
{
"docid": "neg:1840372_19",
"text": "This paper discusses the design and evaluation of an online social network used within twenty-two established after school programs across three major urban areas in the Northeastern United States. The overall goal of this initiative is to empower students in grades K-8 to prevent obesity through healthy eating and exercise. The online social network was designed to support communication between program participants. Results from the related evaluation indicate that the online social network has potential for advancing awareness and community action around health related issues; however, greater attention is needed to professional development programs for program facilitators, and design features could better support critical thinking, social presence, and social activity.",
"title": ""
}
] |
1840373 | Dataset, Ground-Truth and Performance Metrics for Table Detection Evaluation | [
{
"docid": "pos:1840373_0",
"text": "Tables are a common structuring element in many documents, s uch as PDF files. To reuse such tables, appropriate methods need to b e develop, which capture the structure and the content information. We have d e loped several heuristics which together recognize and decompose tables i n PDF files and store the extracted data in a structured data format (XML) for easi er reuse. Additionally, we implemented a prototype, which gives the user the ab ility of making adjustments on the extracted data. Our work shows that purel y heuristic-based approaches can achieve good results, especially for lucid t ables.",
"title": ""
},
{
"docid": "pos:1840373_1",
"text": "Table characteristics vary widely. Consequently, a great variety of computational approaches have been applied to table recognition. In this survey, the table recognition literature is presented as an interaction of table models, observations, transformations and inferences. A table model defines the physical and logical structure of tables; the model is used to detect tables, and to analyze and decompose the detected tables. Observations perform feature measurements and data lookup, transformations alter or restructure data, and inferences generate and test hypotheses. This presentation clarifies the decisions that are made by a table recognizer, and the assumptions and inferencing techniques that underlie these decisions.",
"title": ""
}
] | [
{
"docid": "neg:1840373_0",
"text": "The Skills for Inclusive Growth (S4IG) program is an initiative of the Australian Government’s aid program and implemented with the Sri Lankan Ministry of Skills Development and Vocational Training, Tourism Authorities, Provincial and District Level Government, Industry and Community Organisations. The Program will demonstrate how an integrated approach to skills development can support inclusive economic growth opportunities along the tourism value chain in the four districts of Trincomalee, Ampara, Batticaloa (Eastern Province) and Polonnaruwa (North Central Province). In doing this the S4IG supports sustainable job creation and increased incomes and business growth for the marginalised and the disadvantaged, particularly women and people with disabilities.",
"title": ""
},
{
"docid": "neg:1840373_1",
"text": "In this paper, we address a real life waste collection vehicle routing problem with time windows (VRPTW) with consideration of multiple disposal trips and drivers’ lunch breaks. Solomon’s well-known insertion algorithm is extended for the problem. While minimizing the number of vehicles and total traveling time is the major objective of vehicle routing problems in the literature, here we also consider the route compactness and workload balancing of a solution since they are very important aspects in practical applications. In order to improve the route compactness and workload balancing, a capacitated clustering-based waste collection VRPTW algorithm is developed. The proposed algorithms have been successfully implemented and deployed for the real life waste collection problems at Waste Management, Inc. A set of waste collection VRPTW benchmark problems is also presented in this paper. Waste collection problems are frequently considered as arc routing problems without time windows. However, that point of view can be applied only to residential waste collection problems. In the waste collection industry, there are three major areas: commercial waste collection, residential waste collection and roll-on-roll-off. In this paper, we mainly focus on the commercial waste collection problem. The problem can be characterized as a variant of VRPTW since commercial waste collection stops may have time windows. The major variation from a standard VRPTW is due to disposal operations and driver’s lunch break. When a vehicle is full, it needs to go to one of the disposal facilities (landfill or transfer station). Each vehicle can, and typically does, make multiple disposal trips per day. The purpose of this paper is to introduce the waste collection VRPTW, benchmark problem sets, and a solution approach for the problem. The proposed algorithms have been successfully implemented and deployed for the real life waste collection problems of Waste Management, the leading provider of comprehensive waste management services in North America with nearly 26,000 collection and transfer vehicles. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840373_2",
"text": "The non-central chi-square distribution plays an important role in communications, for example in the analysis of mobile and wireless communication systems. It not only includes the important cases of a squared Rayleigh distribution and a squared Rice distribution, but also the generalizations to a sum of independent squared Gaussian random variables of identical variance with or without mean, i.e., a \"squared MIMO Rayleigh\" and \"squared MIMO Rice\" distribution. In this paper closed-form expressions are derived for the expectation of the logarithm and for the expectation of the n-th power of the reciprocal value of a non-central chi-square random variable. It is shown that these expectations can be expressed by a family of continuous functions gm(ldr) and that these families have nice properties (monotonicity, convexity, etc.). Moreover, some tight upper and lower bounds are derived that are helpful in situations where the closed-form expression of gm(ldr) is too complex for further analysis.",
"title": ""
},
{
"docid": "neg:1840373_3",
"text": "Graves’ disease (GD) and Hashimoto's thyroiditis (HT) represent the commonest forms of autoimmune thyroid disease (AITD) each presenting with distinct clinical features. Progress has been made in determining association of HLA class II DRB1, DQB1 and DQA1 loci with GD demonstrating a predisposing effect for DR3 (DRB1*03-DQB1*02-DQA1*05) and a protective effect for DR7 (DRB1*07-DQB1*02-DQA1*02). Small data sets have hindered progress in determining HLA class II associations with HT. The aim of this study was to investigate DRB1-DQB1-DQA1 in the largest UK Caucasian HT case control cohort to date comprising 640 HT patients and 621 controls. A strong association between HT and DR4 (DRB1*04-DQB1*03-DQA1*03) was detected (P=6.79 × 10−7, OR=1.98 (95% CI=1.51–2.59)); however, only borderline association of DR3 was found (P=0.050). Protective effects were also detected for DR13 (DRB1*13-DQB1*06-DQA1*01) (P=0.001, OR=0.61 (95% CI=0.45–0.83)) and DR7 (P=0.013, OR=0.70 (95% CI=0.53–0.93)). Analysis of our unique cohort of subjects with well characterized AITD has demonstrated clear differences in association within the HLA class II region between HT and GD. Although HT and GD share a number of common genetic markers this study supports the suggestion that differences in HLA class II genotype may, in part, contribute to the different immunopathological processes and clinical presentation of these related diseases.",
"title": ""
},
{
"docid": "neg:1840373_4",
"text": "A major challenge that arises in Weakly Supervised Object Detection (WSOD) is that only image-level labels are available, whereas WSOD trains instance-level object detectors. A typical approach to WSOD is to 1) generate a series of region proposals for each image and assign the image-level label to all the proposals in that image; 2) train a classifier using all the proposals; and 3) use the classifier to select proposals with high confidence scores as the positive instances for another round of training. In this way, the image-level labels are iteratively transferred to instance-level labels.\n We aim to resolve the following two fundamental problems within this paradigm. First, existing proposal generation algorithms are not yet robust, thus the object proposals are often inaccurate. Second, the selected positive instances are sometimes noisy and unreliable, which hinders the training at subsequent iterations. We adopt two separate neural networks, one to focus on each problem, to better utilize the specific characteristic of region proposal refinement and positive instance selection. Further, to leverage the mutual benefits of the two tasks, the two neural networks are jointly trained and reinforced iteratively in a progressive manner, starting with easy and reliable instances and then gradually incorporating difficult ones at a later stage when the selection classifier is more robust. Extensive experiments on the PASCAL VOC dataset show that our method achieves state-of-the-art performance.",
"title": ""
},
{
"docid": "neg:1840373_5",
"text": "Deep Packet Inspection (DPI) is the state-of-the-art technology for traffic classification. According to the conventional wisdom, DPI is the most accurate classification technique. Consequently, most popular products, either commercial or open-source, rely on some sort of DPI for traffic classification. However, the actual performance of DPI is still unclear to the research community, since the lack of public datasets prevent the comparison and reproducibility of their results. This paper presents a comprehensive comparison of 6 well-known DPI tools, which are commonly used in the traffic classification literature. Our study includes 2 commercial products (PACE and NBAR) and 4 open-source tools (OpenDPI, L7-filter, nDPI, and Libprotoident). We studied their performance in various scenarios (including packet and flow truncation) and at different classification levels (application protocol, application and web service). We carefully built a labeled dataset with more than 750 K flows, which contains traffic from popular applications. We used the Volunteer-Based System (VBS), developed at Aalborg University, to guarantee the correct labeling of the dataset. We released this dataset, including full packet payloads, to the research community. We believe this dataset could become a common benchmark for the comparison and validation of network traffic classifiers. Our results present PACE, a commercial tool, as the most accurate solution. Surprisingly, we find that some open-source tools, such as nDPI and Libprotoident, also achieve very high accuracy.",
"title": ""
},
{
"docid": "neg:1840373_6",
"text": "Co-fabrication of a nanoscale vacuum field emission transistor (VFET) and a metal-oxide-semiconductor field effect transistor (MOSFET) is demonstrated on a silicon-on-insulator wafer. The insulated-gate VFET with a gap distance of 100 nm is achieved by using a conventional 0.18-μm process technology and subsequent photoresist ashing process. The VFET shows a turn-on voltage of 2 V at a cell current of 2 nA and a cell current of 3 μA at the operation voltage of 10 V with an ON/OFF current ratio of 104. The gap distance between the cathode and anode in the VFET is defined to be less than the mean free path of electrons in air, and consequently, the operation voltage is reduced to be less than the ionization potential of air molecules. This allows the relaxation of the vacuum requirement. The present integration scheme can be useful as it combines the advantages of both structures on the same chip.",
"title": ""
},
{
"docid": "neg:1840373_7",
"text": "The requirements for OLTP database systems are becoming ever more demanding. New OLTP applications require high degrees of scalability with controlled transaction latencies in in-memory databases. Deployments of these applications require low-level control of database system overhead and program-to-data affinity to maximize resource utilization in modern machines. Unfortunately, current solutions fail to meet these requirements. First, existing database solutions fail to expose a high-level programming abstraction in which latency of transactions can be reasoned about by application developers. Second, these solutions limit infrastructure engineers in exercising low-level control on the deployment of the system on a target infrastructure, further impacting performance. In this paper, we propose a relational actor programming model for in-memory databases. Conceptually, relational actors, or reactors for short, are application-defined, isolated logical actors encapsulating relations that process function calls asynchronously. Reactors ease reasoning about correctness by guaranteeing serializability of application-level function calls. In contrast to classic transactional models, however, reactors allow developers to take advantage of intra-transaction parallelism to reduce latency and improve performance. Moreover, reactors enable a new degree of flexibility in database deployment. We present REACTDB, a novel system design exposing reactors that allows for flexible virtualization of database architecture between the extremes of shared-nothing and shared-everything without changes to application code. Our experiments with REACTDB illustrate performance predictability, multi-core scalability, and low overhead in OLTP benchmarks.",
"title": ""
},
{
"docid": "neg:1840373_8",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "neg:1840373_9",
"text": "Novel scientific knowledge is constantly produced by the scientific community. Understanding the level of novelty characterized by scientific literature is key for modeling scientific dynamics and analyzing the growth mechanisms of scientific knowledge. Metrics derived from bibliometrics and citation analysis were effectively used to characterize the novelty in scientific development. However, time is required before we can observe links between documents such as citation links or patterns derived from the links, which makes these techniques more effective for retrospective analysis than predictive analysis. In this study, we present a new approach to measuring the novelty of a research topic in a scientific community over a specific period by tracking semantic changes of the terms and characterizing the research topic in their usage context. The semantic changes are derived from the text data of scientific literature by temporal embedding learning techniques. We validated the effects of the proposed novelty metric on predicting the future growth of scientific publications and investigated the relations between novelty and growth by panel data analysis applied in a largescale publication dataset (MEDLINE/PubMed). Key findings based on the statistical investigation indicate that the novelty metric has significant predictive effects on the growth of scientific literature and the predictive effects may last for more than ten years. We demonstrated the effectiveness and practical implications of the novelty metric in three case studies. ∗jiangen.he@drexel.edu, chaomei.chen@drexel.edu. Department of Information Science, Drexel University. 1 ar X iv :1 80 1. 09 12 1v 1 [ cs .D L ] 2 7 Ja n 20 18",
"title": ""
},
{
"docid": "neg:1840373_10",
"text": "Deep reinforcement learning (RL) has proven a powerful technique in many sequential decision making domains. However, Robotics poses many challenges for RL, most notably training on a physical system can be expensive and dangerous, which has sparked significant interest in learning control policies using a physics simulator. While several recent works have shown promising results in transferring policies trained in simulation to the real world, they often do not fully utilize the advantage of working with a simulator. In this work, we exploit the full state observability in the simulator to train better policies which take as input only partial observations (RGBD images). We do this by employing an actor-critic training algorithm in which the critic is trained on full states while the actor (or policy) gets rendered images as input. We show experimentally on a range of simulated tasks that using these asymmetric inputs significantly improves performance. Finally, we combine this method with domain randomization and show real robot experiments for several tasks like picking, pushing, and moving a block. We achieve this simulation to real world transfer without training on any real world data. Videos of these experiments can be found at www.goo.gl/b57WTs.",
"title": ""
},
{
"docid": "neg:1840373_11",
"text": "Contrary to popular belief, despite decades of research in fingerprints, reliable fingerprint recognition is still an open problem. Extracting features out of poor quality prints is the most challenging problem faced in this area. This paper introduces a new approach for fingerprint enhancement based on Short Time Fourier Transform(STFT) Analysis. STFT is a well known technique in signal processing to analyze non-stationary signals. Here we extend its application to 2D fingerprint images. The algorithm simultaneously estimates all the intrinsic properties of the fingerprints such as the foreground region mask, local ridge orientation and local ridge frequency. Furthermore we propose a probabilistic approach of robustly estimating these parameters. We experimentally compare the proposed approach to other filtering approaches in literature and show that our technique performs favorably.",
"title": ""
},
{
"docid": "neg:1840373_12",
"text": "A GPU cluster is a cluster equipped with GPU devices. Excellent acceleration is achievable for computation-intensive tasks (e. g. matrix multiplication and LINPACK) and bandwidth-intensive tasks with data locality (e. g. finite-difference simulation). Bandwidth-intensive tasks such as large-scale FFTs without data locality are harder to accelerate, as the bottleneck often lies with the PCI between main memory and GPU device memory or the communication network between workstation nodes. That means optimizing the performance of FFT for a single GPU device will not improve the overall performance. This paper uses large-scale FFT as an example to show how to achieve substantial speedups for these more challenging tasks on a GPU cluster. Three GPU-related factors lead to better performance: firstly the use of GPU devices improves the sustained memory bandwidth for processing large-size data; secondly GPU device memory allows larger subtasks to be processed in whole and hence reduces repeated data transfers between memory and processors; and finally some costly main-memory operations such as matrix transposition can be significantly sped up by GPUs if necessary data adjustment is performed during data transfers. This technique of manipulating array dimensions during data transfer is the main technical contribution of this paper. These factors (as well as the improved communication library in our implementation) attribute to 24.3x speedup with respect to FFTW and 7x speedup with respect to Intel MKL for 4096 3D single-precision FFT on a 16-node cluster with 32 GPUs. Around 5x speedup with respect to both standard libraries are achieved for double precision.",
"title": ""
},
{
"docid": "neg:1840373_13",
"text": "Portfolio diversification in capital markets is an accepted investment strategy. On the other hand corporate diversification has drawn many opponents especially the agency theorists who argue that executives must not diversify on behalf of share holders. Diversification is a strategic option used by many managers to improve their firm’s performance. While extensive literature investigates the diversification performance linkage, little agreements exist concerning the nature of this relationship. Both theoretical and empirical disagreements abound as the extensive research has neither reached a consensus nor any interpretable and acceptable findings. This paper looked at diversification as a corporate strategy and its effect on firm performance using Conglomerates in the Food and Beverages Sector listed on the ZSE. The study used a combination of primary and secondary data. Primary data was collected through interviews while secondary data were gathered from financial statements and management accounts. Data was analyzed using SPSS computer package. Three competing models were derived from literature (the linear model, Inverted U model and Intermediate model) and these were empirically assessed and tested.",
"title": ""
},
{
"docid": "neg:1840373_14",
"text": "The class average accuracies of different methods on the NYU V2: The Proposed Network Structure The model has a convolutional network and deconvolutional network for each modality, as well as a feature transformation network. In this structure, 1. The RGB and depth convolutional network have the same structure; 2. The deconvolutional networks are the mirrored version of the convolutional networks; 3. The feature transformation network extracts common features and modality specific features; 4. One modality can borrow the common features learned from the other modality.",
"title": ""
},
{
"docid": "neg:1840373_15",
"text": "INTRODUCTION\nMobile phones are ubiquitous in society and owned by a majority of psychiatric patients, including those with severe mental illness. Their versatility as a platform can extend mental health services in the areas of communication, self-monitoring, self-management, diagnosis, and treatment. However, the efficacy and reliability of publicly available applications (apps) have yet to be demonstrated. Numerous articles have noted the need for rigorous evaluation of the efficacy and clinical utility of smartphone apps, which are largely unregulated. Professional clinical organizations do not provide guidelines for evaluating mobile apps.\n\n\nMATERIALS AND METHODS\nGuidelines and frameworks are needed to evaluate medical apps. Numerous frameworks and evaluation criteria exist from the engineering and informatics literature, as well as interdisciplinary organizations in similar fields such as telemedicine and healthcare informatics.\n\n\nRESULTS\nWe propose criteria for both patients and providers to use in assessing not just smartphone apps, but also wearable devices and smartwatch apps for mental health. Apps can be evaluated by their usefulness, usability, and integration and infrastructure. Apps can be categorized by their usability in one or more stages of a mental health provider's workflow.\n\n\nCONCLUSIONS\nUltimately, leadership is needed to develop a framework for describing apps, and guidelines are needed for both patients and mental health providers.",
"title": ""
},
{
"docid": "neg:1840373_16",
"text": "Internet of Things is evolving heavily in these times. One of the major obstacle is energy consumption in the IoT devices (sensor nodes and wireless gateways). The IoT devices are often battery powered wireless devices and thus reducing the energy consumption in these devices is essential to lengthen the lifetime of the device without battery change. It is possible to lengthen battery lifetime by efficient but lightweight sensor data analysis in close proximity of the sensor. Performing part of the sensor data analysis in the end device can reduce the amount of data needed to transmit wirelessly. Transmitting data wirelessly is very energy consuming task. At the same time, the privacy and security should not be compromised. It requires effective but computationally lightweight encryption schemes. This survey goes thru many aspects to consider in edge and fog devices to minimize energy consumption and thus lengthen the device and the network lifetime.",
"title": ""
},
{
"docid": "neg:1840373_17",
"text": "From smart homes that prepare coffee when we wake, to phones that know not to interrupt us during important conversations, our collective visions of HCI imagine a future in which computers understand a broad range of human behaviors. Today our systems fall short of these visions, however, because this range of behaviors is too large for designers or programmers to capture manually. In this paper, we instead demonstrate it is possible to mine a broad knowledge base of human behavior by analyzing more than one billion words of modern fiction. Our resulting knowledge base, Augur, trains vector models that can predict many thousands of user activities from surrounding objects in modern contexts: for example, whether a user may be eating food, meeting with a friend, or taking a selfie. Augur uses these predictions to identify actions that people commonly take on objects in the world and estimate a user's future activities given their current situation. We demonstrate Augur-powered, activity-based systems such as a phone that silences itself when the odds of you answering it are low, and a dynamic music player that adjusts to your present activity. A field deployment of an Augur-powered wearable camera resulted in 96% recall and 71% precision on its unsupervised predictions of common daily activities. A second evaluation where human judges rated the system's predictions over a broad set of input images found that 94% were rated sensible.",
"title": ""
},
{
"docid": "neg:1840373_18",
"text": "Automatic identification of predatory conversations i chat logs helps the law enforcement agencies act proactively through early detection of predatory acts in cyberspace. In this paper, we describe the novel application of a deep learnin g method to the automatic identification of predatory chat conversations in large volumes of ch at logs. We present a classifier based on Convolutional Neural Network (CNN) to address this problem domain. The proposed CNN architecture outperforms other classification techn iques that are common in this domain including Support Vector Machine (SVM) and regular Neural Network (NN) in terms of classification performance, which is measured by F 1-score. In addition, our experiments show that using existing pre-trained word vectors are no t suitable for this specific domain. Furthermore, since the learning algorithm runs in a m ssively parallel environment (i.e., general-purpose GPU), the approach can benefit a la rge number of computation units (neurons) compared to when CPU is used. To the best of our knowledge, this is the first tim e that CNNs are adapted and applied to this application do main.",
"title": ""
},
{
"docid": "neg:1840373_19",
"text": "In order to ensure the service quality, modern Internet Service Providers (ISPs) invest tremendously on their network monitoring and measurement infrastructure. Vast amount of network data, including device logs, alarms, and active/passive performance measurement across different network protocols and layers, are collected and stored for analysis. As network measurement grows in scale and sophistication, it becomes increasingly challenging to effectively “search” for the relevant information that best support the needs of network operations. In this paper, we look into techniques that have been widely applied in the information retrieval and search engine domain and explore their applicability in network management domain. We observe that unlike the textural information on the Internet, network data are typically annotated with time and location information, which can be further augmented using information based on network topology, protocol and service dependency. We design NetSearch, a system that pre-processes various network data sources on data ingestion, constructs index that matches both the network spatial hierarchy model and the inherent timing/textual information contained in the data, and efficiently retrieves the relevant information that network operators search for. Through case study, we demonstrate that NetSearch is an important capability for many critical network management functions such as complex impact analysis.",
"title": ""
}
] |
1840374 | Facial volume restoration of the aging face with poly-l-lactic acid. | [
{
"docid": "pos:1840374_0",
"text": "PURPOSE\nThe bony skeleton serves as the scaffolding for the soft tissues of the face; however, age-related changes of bony morphology are not well defined. This study sought to compare the anatomic relationships of the facial skeleton and soft tissue structures between young and old men and women.\n\n\nMETHODS\nA retrospective review of CT scans of 100 consecutive patients imaged at Duke University Medical Center between 2004 and 2007 was performed using the Vitrea software package. The study population included 25 younger women (aged 18-30 years), 25 younger men, 25 older women (aged 55-65 years), and 25 older men. Using a standardized reference line, the distances from the anterior corneal plane to the superior orbital rim, lateral orbital rim, lower eyelid fat pad, inferior orbital rim, anterior cheek mass, and pyriform aperture were measured. Three-dimensional bony reconstructions were used to record the angular measurements of 4 bony regions: glabellar, orbital, maxillary, and pyriform aperture.\n\n\nRESULTS\nThe glabellar (p = 0.02), orbital (p = 0.0007), maxillary (p = 0.0001), and pyriform (p = 0.008) angles all decreased with age. The maxillary pyriform (p = 0.003) and infraorbital rim (p = 0.02) regressed with age. Anterior cheek mass became less prominent with age (p = 0.001), but the lower eyelid fat pad migrated anteriorly over time (p = 0.007).\n\n\nCONCLUSIONS\nThe facial skeleton appears to remodel throughout adulthood. Relative to the globe, the facial skeleton appears to rotate such that the frontal bone moves anteriorly and inferiorly while the maxilla moves posteriorly and superiorly. This rotation causes bony angles to become more acute and likely has an effect on the position of overlying soft tissues. These changes appear to be more dramatic in women.",
"title": ""
},
{
"docid": "pos:1840374_1",
"text": "Cutaneous facial aging is responsible for the increasingly wrinkled and blotchy appearance of the skin, whereas aging of the facial structures is attributed primarily to gravity. This article purports to show, however, that the primary etiology of structural facial aging relates instead to repeated contractions of certain facial mimetic muscles, the age marker fascicules, whereas gravity only secondarily abets an aging process begun by these muscle contractions. Magnetic resonance imaging (MRI) has allowed us to study the contrasts in the contour of the facial mimetic muscles and their associated deep and superficial fat pads in patients of different ages. The MRI model shows that the facial mimetic muscles in youth have a curvilinear contour presenting an anterior surface convexity. This curve reflects an underlying fat pad lying deep to these muscles, which acts as an effective mechanical sliding plane. The muscle’s anterior surface convexity constitutes the key evidence supporting the authors’ new aging theory. It is this youthful convexity that dictates a specific characteristic to the muscle contractions conveyed outwardly as youthful facial expression, a specificity of both direction and amplitude of facial mimetic movement. With age, the facial mimetic muscles (specifically, the age marker fascicules), as seen on MRI, gradually straighten and shorten. The authors relate this radiologic end point to multiple repeated muscle contractions over years that both expel underlying deep fat from beneath the muscle plane and increase the muscle resting tone. Hence, over time, structural aging becomes more evident as the facial appearance becomes more rigid.",
"title": ""
}
] | [
{
"docid": "neg:1840374_0",
"text": "The development of capacitive power transfer (CPT) as a competitive wireless/contactless power transfer solution over short distances is proving viable in both consumer and industrial electronic products/systems. The CPT is usually applied in low-power applications, due to small coupling capacitance. Recent research has increased the coupling capacitance from the pF to the nF scale, enabling extension of CPT to kilowatt power level applications. This paper addresses the need of efficient power electronics suitable for CPT at higher power levels, while remaining cost effective. Therefore, to reduce the cost and losses single-switch-single-diode topologies are investigated. Four single active switch CPT topologies based on the canonical Ćuk, SEPIC, Zeta, and Buck-boost converters are proposed and investigated. Performance tradeoffs within the context of a CPT system are presented and corroborated with experimental results. A prototype single active switch converter demonstrates 1-kW power transfer at a frequency of 200 kHz with >90% efficiency.",
"title": ""
},
{
"docid": "neg:1840374_1",
"text": "Paper Mechatronics is a novel interdisciplinary design medium, enabled by recent advances in craft technologies: the term refers to a reappraisal of traditional papercraft in combination with accessible mechanical, electronic, and computational elements. I am investigating the design space of paper mechatronics as a new hands-on medium by developing a series of examples and building a computational tool, FoldMecha, to support non-experts to design and construct their own paper mechatronics models. This paper describes how I used the tool to create two kinds of paper mechatronics models: walkers and flowers and discuss next steps.",
"title": ""
},
{
"docid": "neg:1840374_2",
"text": "The primary goal of a recommender system is to generate high quality user-centred recommendations. However, the traditional evaluation methods and metrics were developed before researchers understood all the factors that increase user satisfaction. This study is an introduction to a novel user and item classification framework. It is proposed that this framework should be used during user-centred evaluation of recommender systems and the need for this framework is justified through experiments. User profiles are constructed and matched against other users’ profiles to formulate neighbourhoods and generate top-N recommendations. The recommendations are evaluated to measure the success of the process. In conjunction with the framework, a new diversity metric is presented and explained. The accuracy, coverage, and diversity of top-N recommendations is illustrated and discussed for groups of users. It is found that in contradiction to common assumptions, not all users suffer as expected from the data sparsity problem. In fact, the group of users that receive the most accurate recommendations do not belong to the least sparse area of the dataset.",
"title": ""
},
{
"docid": "neg:1840374_3",
"text": "The tourism industry is characterized by ever-increasing competition, causing destinations to seek new methods to attract tourists. Traditionally, a decision to visit a destination is interpreted, in part, as a rational calculation of the costs/benefits of a set of alternative destinations, which were derived from external information sources, including e-WOM (word-of-mouth) or travelers' blogs. There are numerous travel blogs available for people to share and learn about travel experiences. Evidence shows, however, that not every blog exerts the same degree of influence on tourists. Therefore, which characteristics of these travel blogs attract tourists' attention and influence their decisions, becomes an interesting research question. Based on the concept of information relevance, a model is proposed for interrelating various attributes specific to blog's content and perceived enjoyment, an intrinsic motivation of information systems usage, to mitigate the above-mentioned gap. Results show that novelty, understandability, and interest of blogs' content affect behavioral intention through blog usage enjoyment. Finally, theoretical and practical implications are proposed. Tourism is a popular activity in modern life and has contributed significantly to economic development for decades. However, competition in almost every sector of this industry has intensified during recent years & Pan, 2008); tourism service providers are now finding it difficult to acquire and keep customers (Echtner & Ritchie, 1991; Ho, 2007). Therefore, methods of attracting tourists to a destination are receiving greater attention from researchers, policy makers, and marketers. Before choosing a destination, tourists may search for information to support their decision-making By understanding the relationships between various information sources' characteristics and destination choice, tourism managers can improve their marketing efforts. Recently, personal blogs have become an important source for acquiring travel information With personal blogs, many tourists can share their travel experiences with others and potential tourists can search for and respond to others' experiences. Therefore, a blog can be seen as an asynchronous and many-to-many channel for conveying travel-related electronic word-of-mouth (e-WOM). By using these forms of inter-personal influence media, companies in this industry can create a competitive advantage (Litvin et al., 2008; Singh et al., 2008). Weblogs are now widely available; therefore, it is not surprising that the quantity of available e-WOM has increased (Xiang & Gret-zel, 2010) to an extent where information overload has become a Empirical evidence , however, indicates that people may not consult numerous blogs for advice; the degree of inter-personal influence varies from blog to blog (Zafiropoulos, 2012). Determining …",
"title": ""
},
{
"docid": "neg:1840374_4",
"text": "Due to the increasing demand in the agricultural industry, the need to effectively grow a plant and increase its yield is very important. In order to do so, it is important to monitor the plant during its growth period, as well as, at the time of harvest. In this paper image processing is used as a tool to monitor the diseases on fruits during farming, right from plantation to harvesting. For this purpose artificial neural network concept is used. Three diseases of grapes and two of apple have been selected. The system uses two image databases, one for training of already stored disease images and the other for implementation of query images. Back propagation concept is used for weight adjustment of training database. The images are classified and mapped to their respective disease categories on basis of three feature vectors, namely, color, texture and morphology. From these feature vectors morphology gives 90% correct result and it is more than other two feature vectors. This paper demonstrates effective algorithms for spread of disease and mango counting. Practical implementation of neural networks has been done using MATLAB.",
"title": ""
},
{
"docid": "neg:1840374_5",
"text": "The Web contains a vast amount of structured information such as HTML tables, HTML lists and deep-web databases; there is enormous potential in combining and re-purposing this data in creative ways. However, integrating data from this relational web raises several challenges that are not addressed by current data integration systems or mash-up tools. First, the structured data is usually not published cleanly and must be extracted (say, from an HTML list) before it can be used. Second, due to the vastness of the corpus, a user can never know all of the potentially-relevant databases ahead of time (much less write a wrapper or mapping for each one); the source databases must be discovered during the integration process. Third, some of the important information regarding the data is only present in its enclosing web page and needs to be extracted appropriately. This paper describes Octopus, a system that combines search, extraction, data cleaning and integration, and enables users to create new data sets from those found on the Web. The key idea underlying Octopus is to offer the user a set of best-effort operators that automate the most labor-intensive tasks. For example, the Search operator takes a search-style keyword query and returns a set of relevance-ranked and similarity-clustered structured data sources on the Web; the Context operator helps the user specify the semantics of the sources by inferring attribute values that may not appear in the source itself, and the Extend operator helps the user find related sources that can be joined to add new attributes to a table. Octopus executes some of these operators automatically, but always allows the user to provide feedback and correct errors. We describe the algorithms underlying each of these operators and experiments that demonstrate their efficacy.",
"title": ""
},
{
"docid": "neg:1840374_6",
"text": "Children’s neurological development is influenced by their experiences. Early experiences and the environments in which they occur can alter gene expression and affect long-term neural development. Today, discretionary screen time, often involving multiple devices, is the single main experience and environment of children. Various screen activities are reported to induce structural and functional brain plasticity in adults. However, childhood is a time of significantly greater changes in brain anatomical structure and connectivity. There is empirical evidence that extensive exposure to videogame playing during childhood may lead to neuroadaptation and structural changes in neural regions associated with addiction. Digital natives exhibit a higher prevalence of screen-related ‘addictive’ behaviour that reflect impaired neurological rewardprocessing and impulse-control mechanisms. Associations are emerging between screen dependency disorders such as Internet Addiction Disorder and specific neurogenetic polymorphisms, abnormal neural tissue and neural function. Although abnormal neural structural and functional characteristics may be a precondition rather than a consequence of addiction, there may also be a bidirectional relationship. As is the case with substance addictions, it is possible that intensive routine exposure to certain screen activities during critical stages of neural development may alter gene expression resulting in structural, synaptic and functional changes in the developing brain leading to screen dependency disorders, particularly in children with predisposing neurogenetic profiles. There may also be compound/secondary effects on neural development. Screen dependency disorders, even at subclinical levels, involve high levels of discretionary screen time, inducing greater child sedentary behaviour thereby reducing vital aerobic fitness, which plays an important role in the neurological health of children, particularly in brain structure and function. Child health policy must therefore adhere to the principle of precaution as a prudent approach to protecting child neurological integrity and well-being. This paper explains the basis of current paediatric neurological concerns surrounding screen dependency disorders and proposes preventive strategies for child neurology and allied professions.",
"title": ""
},
{
"docid": "neg:1840374_7",
"text": "We propose NEURAL ENQUIRER — a neural network architecture for answering natural language (NL) questions given a knowledge base (KB) table. Unlike previous work on end-to-end training of semantic parsers, NEURAL ENQUIRER is fully “neuralized”: it gives distributed representations of queries and KB tables, and executes queries through a series of differentiable operations. The model can be trained with gradient descent using both endto-end and step-by-step supervision. During training the representations of queries and the KB table are jointly optimized with the query execution logic. Our experiments show that the model can learn to execute complex NL queries on KB tables with rich structures.",
"title": ""
},
{
"docid": "neg:1840374_8",
"text": "The changing face of technology has played an integral role in the development of the hotel and restaurant industry. The manuscript investigated the impact that technology has had on the hotel and restaurant industry. A detailed review of the literature regarding the growth of technology in the industry was linked to the development of strategic direction. The manuscript also looked at the strategic analysis methodology for evaluating and taking advantage of current and future technological innovations for the hospitality industry. Identification and implementation of these technologies can help in building a sustainable competitive advantage for hotels and restaurants.",
"title": ""
},
{
"docid": "neg:1840374_9",
"text": "Neural network-based systems can now learn to locate the referents of words and phrases in images, answer questions about visual scenes, and even execute symbolic instructions as first-person actors in partially-observable worlds. To achieve this so-called grounded language learning, models must overcome certain well-studied learning challenges that are also fundamental to infants learning their first words. While it is notable that models with no meaningful prior knowledge overcome these learning obstacles, AI researchers and practitioners currently lack a clear understanding of exactly how they do so. Here we address this question as a way of achieving a clearer general understanding of grounded language learning, both to inform future research and to improve confidence in model predictions. For maximum control and generality, we focus on a simple neural network-based language learning agent trained via policy-gradient methods to interpret synthetic linguistic instructions in a simulated 3D world. We apply experimental paradigms from developmental psychology to this agent, exploring the conditions under which established human biases and learning effects emerge. We further propose a novel way to visualise and analyse semantic representation in grounded language learning agents that yields a plausible computational account of the observed effects.",
"title": ""
},
{
"docid": "neg:1840374_10",
"text": "It is well known that the road signs play’s a vital role in road safety its ignorance results in accidents .This Paper proposes an Idea for road safety by using a RFID based traffic sign recognition system. By using it we can prevent the road risk up to a great extend.",
"title": ""
},
{
"docid": "neg:1840374_11",
"text": "The Timed Up and Go (TUG) is a clinical test used widely to measure balance and mobility, e.g. in Parkinson's disease (PD). The test includes a sequence of functional activities, namely: sit-to-stand, 3-meters walk, 180° turn, walk back, another turn and sit on the chair. Meanwhile the stopwatch is used to score the test by measuring the time which the patients with PD need to perform the test. Here, the work presents an instrumented TUG using a wearable inertial sensor unit attached on the lower back of the person. The approach is used to automate the process of assessment compared with the manual evaluation by using visual observation and a stopwatch. The developed algorithm is based on the Dynamic Time Warping (DTW) for multi-dimensional time series and has been applied with the augmented feature for detection and duration assessment of turn state transitions, while a 1-dimensional DTW is used to detect the sit-to-stand and stand-to-sit phases. The feature set is a 3-dimensional vector which consists of the angular velocity, derived angle and features from Linear Discriminant Analysis (LDA). The algorithm was tested on 10 healthy individuals and 20 patients with PD (10 patients with early and late disease phases respectively). The test demonstrates that the developed technique can successfully extract the time information of the sit-to-stand, both turns and stand-to-sit transitions in the TUG test.",
"title": ""
},
{
"docid": "neg:1840374_12",
"text": "BACKGROUND\nNormal-weight adults gain lower-body fat via adipocyte hyperplasia and upper-body subcutaneous (UBSQ) fat via adipocyte hypertrophy.\n\n\nOBJECTIVES\nWe investigated whether regional fat loss mirrors fat gain and whether the loss of lower-body fat is attributed to decreased adipocyte number or size.\n\n\nDESIGN\nWe assessed UBSQ, lower-body, and visceral fat gains and losses in response to overfeeding and underfeeding in 23 normal-weight adults (15 men) by using dual-energy X-ray absorptiometry and abdominal computed tomography scans. Participants gained ∼5% of weight in 8 wk and lost ∼80% of gained fat in 8 wk. We measured abdominal subcutaneous and femoral adipocyte sizes and numbers after weight gain and loss.\n\n\nRESULTS\nVolunteers gained 3.1 ± 2.1 (mean ± SD) kg body fat with overfeeding and lost 2.4 ± 1.7 kg body fat with underfeeding. Although UBSQ and visceral fat gains were completely reversed after 8 wk of underfeeding, lower-body fat had not yet returned to baseline values. Abdominal and femoral adipocyte sizes, but not numbers, decreased with weight loss. Decreases in abdominal adipocyte size and UBSQ fat mass were correlated (ρ = 0.76, P = 0.001), as were decreases in femoral adipocyte size and lower-body fat (ρ = 0.49, P = 0.05).\n\n\nCONCLUSIONS\nUBSQ and visceral fat increase and decrease proportionately with a short-term weight gain and loss, whereas a gain of lower-body fat does not relate to the loss of lower-body fat. The loss of lower-body fat is attributed to a reduced fat cell size, but not number, which may result in long-term increases in fat cell numbers.",
"title": ""
},
{
"docid": "neg:1840374_13",
"text": "Previous research suggests a possible link between eveningness and general difficulties with self-regulation (e.g., evening types are more likely than other chronotypes to have irregular sleep schedules and social rhythms and use substances). Our study investigated the relationship between eveningness and self-regulation by using two standardized measures of self-regulation: the Self-Control Scale and the Procrastination Scale. We predicted that an eveningness preference would be associated with poorer self-control and greater procrastination than would an intermediate or morningness preference. Participants were 308 psychology students (mean age=19.92 yrs) at a small Canadian college. Students completed the self-regulation questionnaires and Morningness/Eveningness Questionnaire (MEQ) online. The mean MEQ score was 46.69 (SD=8.20), which is intermediate between morningness and eveningness. MEQ scores ranged from definite morningness to definite eveningness, but the dispersion of scores was skewed toward more eveningness. Pearson and partial correlations (controlling for age) were used to assess the relationship between MEQ score and the Self-Control Scale (global score and 5 subscale scores) and Procrastination Scale (global score). All correlations were significant. The magnitude of the effects was medium for all measures except one of the Self-Control subscales, which was small. A multiple regression analysis to predict MEQ score using the Self-Control Scale (global score), Procrastination Scale, and age as predictors indicated the Self-Control Scale was a significant predictor (accounting for 20% of the variance). A multiple regression analysis to predict MEQ scores using the five subscales of the Self-Control Scale and age as predictors showed the subscales for reliability and work ethic were significant predictors (accounting for 33% of the variance). Our study showed a relationship between eveningness and low self-control, but it did not address whether the relationship is a causal one.",
"title": ""
},
{
"docid": "neg:1840374_14",
"text": "Warehouse automation systems that use robots to save human labor are becoming increasingly common. In a previous study, a picking system using a multi-joint type robot was developed. However, articulated robots are not ideal in warehouse scenarios, since inter-shelf space can limit their freedom of motion. Although the use of linear motion-type robots has been suggested as a solution, their drawback is that an additional cable carrier is needed. The authors therefore propose a new configuration for a robot manipulator that uses wireless power transmission (WPT), which delivers power without physical contact except at the base of the robot arm. We describe here a WPT circuit design suitable for rotating and sliding-arm mechanisms. Overall energy efficiency was confirmed to be 92.0%.",
"title": ""
},
{
"docid": "neg:1840374_15",
"text": "Cloud computing is playing an ever larger role in the IT infrastructure. The migration into the cloud means that we must rethink and adapt our security measures. Ultimately, both the cloud provider and the customer have to accept responsibilities to ensure security best practices are followed. Firewalls are one of the most critical security features. Most IaaS providers make firewalls available to their customers. In most cases, the customer assumes a best-case working scenario which is often not assured. In this paper, we studied the filtering behavior of firewalls provided by five different cloud providers. We found that three providers have firewalls available within their infrastructure. Based on our findings, we developed an open-ended firewall monitoring tool which can be used by cloud customers to understand the firewall's filtering behavior. This information can then be efficiently used for risk management and further security considerations. Measuring today's firewalls has shown that they perform well for the basics, although may not be fully featured considering fragmentation or stateful behavior.",
"title": ""
},
{
"docid": "neg:1840374_16",
"text": "For most families with elderly relatives, care within their own home is by far the most preferred option both for the elderly and their carers. However, frequently these carers are the partners of the person with long-term care needs, and themselves are elderly and in need of support to cope with the burdens and stress associated with these duties. When it becomes too much for them, they may have to rely on professional care services, or even use residential care for a respite. In order to support the carers as well as the elderly person, an ambient assisted living platform has been developed. The system records information about the activities of daily living using unobtrusive sensors within the home, and allows the carers to record their own wellbeing state. By providing facilities to schedule and monitor the activities of daily care, and providing orientation and advice to improve the care given and their own wellbeing, the system helps to reduce the burden on the informal carers. Received on 30 August 2016; accepted on 03 February 2017; published on 21 March 2017",
"title": ""
},
{
"docid": "neg:1840374_17",
"text": "Breast cancer is one of the leading causes of cancer death among women worldwide. In clinical routine, automatic breast ultrasound (BUS) image segmentation is very challenging and essential for cancer diagnosis and treatment planning. Many BUS segmentation approaches have been studied in the last two decades, and have been proved to be effective on private datasets. Currently, the advancement of BUS image segmentation seems to meet its bottleneck. The improvement of the performance is increasingly challenging, and only few new approaches were published in the last several years. It is the time to look at the field by reviewing previous approaches comprehensively and to investigate the future directions. In this paper, we study the basic ideas, theories, pros and cons of the approaches, group them into categories, and extensively review each category in depth by discussing the principles, application issues, and advantages/disadvantages. Keyword: breast ultrasound (BUS) images; breast cancer; segmentation; benchmark; early detection; computer-aided diagnosis (CAD)",
"title": ""
},
{
"docid": "neg:1840374_18",
"text": "Machine-to-Machine (M2M) paradigm enables machines (sensors, actuators, robots, and smart meter readers) to communicate with each other with little or no human intervention. M2M is a key enabling technology for the cyber-physical systems (CPSs). This paper explores CPS beyond M2M concept and looks at futuristic applications. Our vision is CPS with distributed actuation and in-network processing. We describe few particular use cases that motivate the development of the M2M communication primitives tailored to large-scale CPS. M2M communications in literature were considered in limited extent so far. The existing work is based on small-scale M2M models and centralized solutions. Different sources discuss different primitives. Few existing decentralized solutions do not scale well. There is a need to design M2M communication primitives that will scale to thousands and trillions of M2M devices, without sacrificing solution quality. The main paradigm shift is to design localized algorithms, where CPS nodes make decisions based on local knowledge. Localized coordination and communication in networked robotics, for matching events and robots, were studied to illustrate new directions.",
"title": ""
},
{
"docid": "neg:1840374_19",
"text": "Cloud gaming represents a highly interactive service whereby game logic is rendered in the cloud and streamed as a video to end devices. While benefits include the ability to stream high-quality graphics games to practically any end user device, drawbacks include high bandwidth requirements and very low latency. Consequently, a challenge faced by cloud gaming service providers is the design of algorithms for adapting video streaming parameters to meet the end user system and network resource constraints. In this paper, we conduct an analysis of the commercial NVIDIA GeForce NOW game streaming platform adaptation mechanisms in light of variable network conditions. We further conduct an empirical user study involving the GeForce NOW platform to assess player Quality of Experience when such adaptation mechanisms are employed. The results provide insight into limitations of the currently deployed mechanisms, as well as aim to provide input for the proposal of designing future video encoding adaptation strategies.",
"title": ""
}
] |
1840375 | A Critical Review of Online Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries | [
{
"docid": "pos:1840375_0",
"text": "Cathy O’Neil’s Weapons of Math Destruction is a timely reminder of the power and perils of predictive algorithms and model-driven decision processes. The book deals in some depth with eight case studies of the abuses she associates with WMDs: “weapons of math destruction.” The cases include the havoc wrought by value-added models used to evaluate teacher performance and by the college ranking system introduced by U.S. News and World Report; the collateral damage of online advertising and models devised to track and monetize “eyeballs”; the abuses associated with the recidivism models used in judicial decisions; the inequities perpetrated by the use of personality tests in hiring decisions; the burdens placed on low-wage workers by algorithm-driven attempts to maximize labor efficiency; the injustices written into models that evaluate creditworthiness; the inequities produced by insurance companies’ risk models; and the potential assault on the democratic process by the use of big data in political campaigns. As this summary suggests, O’Neil had plenty of examples to choose from when she wrote the book, but since the publication of Weapons of Math Destruction, two more problems associated with model-driven decision procedures have surfaced, making O’Neil’s work even more essential reading. The first—the role played by fake news, much of it circulated on Facebook, in the 2016 election—has led to congressional investigations. The second—the failure of algorithm-governed oversight to recognize and delete gruesome posts on the Facebook Live streaming service—has caused CEO Mark Zuckerberg to announce the addition of 3,000 human screeners to the Facebook staff. While O’Neil’s book may seem too polemical to some readers and too cautious to others, it speaks forcefully to the cultural moment we share. O’Neil weaves the story of her own credentials and work experience into her analysis, because, as she explains, her training as a mathematician and her experience in finance shaped the way she now understands the world. O’Neil earned a PhD in mathematics from Harvard; taught at Barnard College, where her research area was algebraic number theory; and worked for the hedge fund D. E. Shaw, which uses mathematical analysis to guide investment decisions. When the financial crisis of 2008 revealed that even the most sophisticated models were incapable of anticipating risks associated with “black swans”—events whose rarity make them nearly impossible to predict—O’Neil left the world of corporate finance to join the RiskMetrics Group, where she helped market risk models to financial institutions eager to rehabilitate their image. Ultimately, she became disillusioned with the financial industry’s refusal to take seriously the limitations of risk management models and left RiskMetrics. She rebranded herself a “data scientist” and took a job at Intent Media, where she helped design algorithms that would make big data useful for all kinds of applications. All the while, as O’Neil describes it, she “worried about the separation between technical models and real people, and about the moral repercussions of that separation” (page 48). O’Neil eventually left Intent Media to devote her energies to inWeapons of Math Destruction",
"title": ""
}
] | [
{
"docid": "neg:1840375_0",
"text": "Social networks are growing in number and size, with hundreds of millions of user accounts among them. One added benefit of these networks is that they allow users to encode more information about their relationships than just stating who they know. In this work, we are particularly interested in trust relationships, and how they can be used in designing interfaces. In this paper, we present FilmTrust, a website that uses trust in web-based social networks to create predictive movie recommendations. Using the FilmTrust system as a foundation, we show that these recommendations are more accurate than other techniques when the user’s opinions about a film are divergent from the average. We discuss this technique both as an application of social network analysis, as well as how it suggests other analyses that can be performed to help improve collaborative filtering algorithms of all types.",
"title": ""
},
{
"docid": "neg:1840375_1",
"text": "In today’s global marketplace, individual firms do not compete as independent entities rather as an integral part of a supply chain. This paper proposes a fuzzy mathematical programming model for supply chain planning which considers supply, demand and process uncertainties. The model has been formulated as a fuzzy mixed-integer linear programming model where data are ill-known andmodelled by triangular fuzzy numbers. The fuzzy model provides the decision maker with alternative decision plans for different degrees of satisfaction. This proposal is tested by using data from a real automobile supply chain. © 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840375_2",
"text": "This paper introduces and summarises the findings of a new shared task at the intersection of Natural Language Processing and Computer Vision: the generation of image descriptions in a target language, given an image and/or one or more descriptions in a different (source) language. This challenge was organised along with the Conference on Machine Translation (WMT16), and called for system submissions for two task variants: (i) a translation task, in which a source language image description needs to be translated to a target language, (optionally) with additional cues from the corresponding image, and (ii) a description generation task, in which a target language description needs to be generated for an image, (optionally) with additional cues from source language descriptions of the same image. In this first edition of the shared task, 16 systems were submitted for the translation task and seven for the image description task, from a total of 10 teams.",
"title": ""
},
{
"docid": "neg:1840375_3",
"text": "Tweets often contain a large proportion of abbreviations, alternative spellings, novel words and other non-canonical language. These features are problematic for standard language analysis tools and it can be desirable to convert them to canonical form. We propose a novel text normalization model based on learning edit operations from labeled data while incorporating features induced from unlabeled data via character-level neural text embeddings. The text embeddings are generated using an Simple Recurrent Network. We find that enriching the feature set with text embeddings substantially lowers word error rates on an English tweet normalization dataset. Our model improves on stateof-the-art with little training data and without any lexical resources.",
"title": ""
},
{
"docid": "neg:1840375_4",
"text": "Documenting underwater archaeological sites is an extremely challenging problem. Sites covering large areas are particularly daunting for traditional techniques. In this paper, we present a novel approach to this problem using both an autonomous underwater vehicle (AUV) and a diver-controlled stereo imaging platform to document the submerged Bronze Age city at Pavlopetri, Greece. The result is a three-dimensional (3D) reconstruction covering 26,600 m2 at a resolution of 2 mm/pixel, the largest-scale underwater optical 3D map, at such a resolution, in the world to date. We discuss the advances necessary to achieve this result, including i) an approach to color correct large numbers of images at varying altitudes and over varying bottom types; ii) a large-scale bundle adjustment framework that is capable of handling upward of 400,000 stereo images; and iii) a novel approach to the registration and rapid documentation of an underwater excavations area that can quickly produce maps of site change. We present visual and quantitative comparisons to the authors’ previous underwater mapping approaches. C © 2016 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "neg:1840375_5",
"text": "Until now, most systems for Internet of Things (IoT) management, have been designed in a Cloud-centric manner, getting benefits from the unified platform that the Cloud offers. However, a Cloud-centric infrastructure mainly achieves static sensor and data streaming systems, which do not support the direct configuration management of IoT components. To address this issue, a virtualization of IoT components (Virtual Resources) is introduced at the edge of the IoT network. This research also introduces permission-based Blockchain protocols to handle the provisioning of Virtual Resources directly onto edge devices. The architecture presented by this research focuses on the use of Virtual Resources and Blockchain protocols as management tools to distribute configuration tasks towards the edge of the IoT network. Results from lab experiments demonstrate the successful deployment and communication performance (response time in milliseconds) of Virtual Resources on two edge platforms, Raspberry Pi and Edison board. This work also provides performance evaluations of two permission-based blockchain protocol approaches. The first blockchain approach is a Blockchain as a Service (BaaS) in the Cloud, Bluemix. The second blockchain approach is a private cluster hosted in a Fog network, Multichain.",
"title": ""
},
{
"docid": "neg:1840375_6",
"text": "Debates about human nature often revolve around what is built in. However, the hallmark of human nature is how much of a person's identity is not built in; rather, it is humans' great capacity to adapt, change, and grow. This nature versus nurture debate matters-not only to students of human nature-but to everyone. It matters whether people believe that their core qualities are fixed by nature (an entity theory, or fixed mindset) or whether they believe that their qualities can be developed (an incremental theory, or growth mindset). In this article, I show that an emphasis on growth not only increases intellectual achievement but can also advance conflict resolution between long-standing adversaries, decrease even chronic aggression, foster cross-race relations, and enhance willpower. I close by returning to human nature and considering how it is best conceptualized and studied.",
"title": ""
},
{
"docid": "neg:1840375_7",
"text": "There has been a dramatic increase in the number and complexity of new ventilation modes over the last 30 years. The impetus for this has been the desire to improve the safety, efficiency, and synchrony of ventilator-patient interaction. Unfortunately, the proliferation of names for ventilation modes has made understanding mode capabilities problematic. New modes are generally based on increasingly sophisticated closed-loop control systems or targeting schemes. We describe the 6 basic targeting schemes used in commercially available ventilators today: set-point, dual, servo, adaptive, optimal, and intelligent. These control systems are designed to serve the 3 primary goals of mechanical ventilation: safety, comfort, and liberation. The basic operations of these schemes may be understood by clinicians without any engineering background, and they provide the basis for understanding the wide variety of ventilation modes and their relative advantages for improving patient-ventilator synchrony. Conversely, their descriptions may provide engineers with a means to better communicate to end users.",
"title": ""
},
{
"docid": "neg:1840375_8",
"text": "Global navigation satellite system reflectometry is a multistatic radar using navigation signals as signals of opportunity. It provides wide-swath and improved spatiotemporal sampling over current space-borne missions. The lack of experimental datasets from space covering signals from multiple constellations (GPS, GLONASS, Galileo, and Beidou) at dual-band (L1 and L2) and dual-polarization (right- and left-hand circular polarization), over the ocean, land, and cryosphere remains a bottleneck to further develop these techniques. 3Cat-2 is a 6-unit (3 × 2 elementary blocks of 10 × 10 × 10 cm3) CubeSat mission designed and implemented at the Universitat Politècnica de Catalunya-BarcelonaTech to explore fundamental issues toward an improvement in the understanding of the bistatic scattering properties of different targets. Since geolocalization of the specific reflection points is determined by the geometry only, a moderate pointing accuracy is only required to correct the antenna pattern in scatterometry measurements. This paper describes the mission analysis and the current status of the assembly, integration, and verification activities of both the engineering model and the flight model performed at Universitat Politècnica de Catalunya NanoSatLab premises. 3Cat-2 launch is foreseen for the second quarter of 2016 into a Sun-Synchronous orbit of 510-km height.",
"title": ""
},
{
"docid": "neg:1840375_9",
"text": "Makeup is widely used to improve facial attractiveness and is well accepted by the public. However, different makeup styles will result in significant facial appearance changes. It remains a challenging problem to match makeup and non-makeup face images. This paper proposes a learning from generation approach for makeup-invariant face verification by introducing a bi-level adversarial network (BLAN). To alleviate the negative effects from makeup, we first generate non-makeup images from makeup ones, and then use the synthesized nonmakeup images for further verification. Two adversarial networks in BLAN are integrated in an end-to-end deep network, with the one on pixel level for reconstructing appealing facial images and the other on feature level for preserving identity information. These two networks jointly reduce the sensing gap between makeup and non-makeup images. Moreover, we make the generator well constrained by incorporating multiple perceptual losses. Experimental results on three benchmark makeup face datasets demonstrate that our method achieves state-of-the-art verification accuracy across makeup status and can produce photo-realistic non-makeup",
"title": ""
},
{
"docid": "neg:1840375_10",
"text": "We aimed to describe the surgical technique and clinical outcomes of paraspinal-approach reduction and fixation (PARF) in a group of patients with Denis type B thoracolumbar burst fracture (TLBF) with neurological deficiencies. A total of 62 patients with Denis B TLBF with neurological deficiencies were included in this study between January 2009 and December 2011. Clinical evaluations including the Frankel scale, pain visual analog scale (VAS) and radiological assessment (CT scans for fragment reduction and X-ray for the Cobb angle, adjacent superior and inferior intervertebral disc height, and vertebral canal diameter) were performed preoperatively and at 3 days, 6 months, and 1 and 2 years postoperatively. All patients underwent successful PARF, and were followed-up for at least 2 years. Average surgical time, blood loss and incision length were recorded. The sagittal vertebral canal diameter was significantly enlarged. The canal stenosis index was also improved. Kyphosis was corrected and remained at 8.6±1.4o (P>0.05) 1 year postoperatively. Adjacent disc heights remained constant. Average Frankel grades were significantly improved at the end of follow-up. All 62 patients were neurologically assessed. Pain scores decreased at 6 months postoperatively, compared to before surgery (P<0.05). PARF provided excellent reduction for traumatic segmental kyphosis, and resulted in significant spinal canal clearance, which restored and maintained the vertebral body height of patients with Denis B TLBF with neurological deficits.",
"title": ""
},
{
"docid": "neg:1840375_11",
"text": "This paper presents Dynamoth, a dynamic, scalable, channel-based pub/sub middleware targeted at large scale, distributed and latency constrained systems. Our approach provides a software layer that balances the load generated by a high number of publishers, subscribers and messages across multiple, standard pub/sub servers that can be deployed in the Cloud. In order to optimize Cloud infrastructure usage, pub/sub servers can be added or removed as needed. Balancing takes into account the live characteristics of each channel and is done in an hierarchical manner across channels (macro) as well as within individual channels (micro) to maintain acceptable performance and low latencies despite highly varying conditions. Load monitoring is performed in an unintrusive way, and rebalancing employs a lazy approach in order to minimize its temporal impact on performance while ensuring successful and timely delivery of all messages. Extensive real-world experiments that illustrate the practicality of the approach within a massively multiplayer game setting are presented. Results indicate that with a given number of servers, Dynamoth was able to handle 60% more simultaneous clients than the consistent hashing approach, and that it was properly able to deal with highly varying conditions in the context of large workloads.",
"title": ""
},
{
"docid": "neg:1840375_12",
"text": "The capability to overcome terrain irregularities or obstacles, named terrainability, is mostly dependant on the suspension mechanism of the rover and its control. For a given wheeled robot, the terrainability can be improved by using a sophisticated control, and is somewhat related to minimizing wheel slip. The proposed control method, named torque control, improves the rover terrainability by taking into account the whole mechanical structure. The rover model is based on the Newton-Euler equations and knowing the complete state of the mechanical structures allows us to compute the force distribution in the structure, and especially between the wheels and the ground. Thus, a set of torques maximizing the traction can be used to drive the rover. The torque control algorithm is presented in this paper, as well as tests showing its impact and improvement in terms of terrainability. Using the CRAB rover platform, we show that the torque control not only increases the climbing performance but also limits odometric errors and reduces the overall power consumption.",
"title": ""
},
{
"docid": "neg:1840375_13",
"text": "PURPOSE\nBasal-like breast cancer is associated with high grade, poor prognosis, and younger patient age. Clinically, a triple-negative phenotype definition [estrogen receptor, progesterone receptor, and human epidermal growth factor receptor (HER)-2, all negative] is commonly used to identify such cases. EGFR and cytokeratin 5/6 are readily available positive markers of basal-like breast cancer applicable to standard pathology specimens. This study directly compares the prognostic significance between three- and five-biomarker surrogate panels to define intrinsic breast cancer subtypes, using a large clinically annotated series of breast tumors.\n\n\nEXPERIMENTAL DESIGN\nFour thousand forty-six invasive breast cancers were assembled into tissue microarrays. All had staging, pathology, treatment, and outcome information; median follow-up was 12.5 years. Cox regression analyses and likelihood ratio tests compared the prognostic significance for breast cancer death-specific survival (BCSS) of the two immunohistochemical panels.\n\n\nRESULTS\nAmong 3,744 interpretable cases, 17% were basal using the triple-negative definition (10-year BCSS, 6 7%) and 9% were basal using the five-marker method (10-year BCSS, 62%). Likelihood ratio tests of multivariable Cox models including standard clinical variables show that the five-marker panel is significantly more prognostic than the three-marker panel. The poor prognosis of triple-negative phenotype is conferred almost entirely by those tumors positive for basal markers. Among triple-negative patients treated with adjuvant anthracycline-based chemotherapy, the additional positive basal markers identified a cohort of patients with significantly worse outcome.\n\n\nCONCLUSIONS\nThe expanded surrogate immunopanel of estrogen receptor, progesterone receptor, human HER-2, EGFR, and cytokeratin 5/6 provides a more specific definition of basal-like breast cancer that better predicts breast cancer survival.",
"title": ""
},
{
"docid": "neg:1840375_14",
"text": "One of the most important aspects in playing the piano is using the appropriate fingers to facilitate movement and transitions. The fingering arrangement depends to a ce rtain extent on the size of the musician’s hand. We hav e developed an automatic fingering system that, given a sequence of pitches, suggests which fingers should be used. The output can be personalized to agree with t he limitations of the user’s hand. We also consider this system to be the base of a more complex future system: a score reduction system that will reduce orchestra scor e to piano scores. This paper describes: • “Vertical cost” model: the stretch induced by a given hand position. • “Horizontal cost” model: transition between two hand positions. • A system that computes low-cost fingering for a given piece of music. • A machine learning technique used to learn the appropriate parameters in the models.",
"title": ""
},
{
"docid": "neg:1840375_15",
"text": "As electric vehicles (EVs) take a greater share in the personal automobile market, their penetration may bring higher peak demand at the distribution level. This may cause potential transformer overloads, feeder congestions, and undue circuit faults. This paper focuses on the impact of charging EVs on a residential distribution circuit. Different EV penetration levels, EV types, and charging profiles are considered. In order to minimize the impact of charging EVs on a distribution circuit, a demand response strategy is proposed in the context of a smart distribution network. In the proposed DR strategy, consumers will have their own choices to determine which load to control and when. Consumer comfort indices are introduced to measure the impact of demand response on consumers' lifestyle. The proposed indices can provide electric utilities a better estimation of the customer acceptance of a DR program, and the capability of a distribution circuit to accommodate EV penetration.",
"title": ""
},
{
"docid": "neg:1840375_16",
"text": "1 Background and Objective of the Survey Compared with conventional centralized systems, blockchain technologies used for transactions of value records, such as bitcoins, structurally have the characteristics that (i) enable the creation of a system that substantially ensures no downtime (ii) make falsification extremely hard, and (iii) realize inexpensive system. Blockchain technologies are expected to be utilized in diverse fields including IoT. Japanese companies just started technology verification independently, and there is a risk that the initiative might be taken by foreign companies in blockchain technologies, which are highly likely to serve as the next-generation platform for all industrial fields in the future. From such point of view, this survey was conducted for the purpose of comparing and analyzing details of numbers of blockchains and advantages/challenges therein; ascertaining promising fields in which the technology should be utilized; ascertaining the impact of the technology on society and the economy; and developing policy guidelines for encouraging industries to utilize the technology in the future. This report compiles the results of interviews with domestic and overseas companies involving blockchain technology and experts. The content of this report is mostly based on data as of the end of February 2016. As specifications of blockchains and the status of services being provided change by the minute, it is recommended to check the latest conditions when intending to utilize any related technologies in business, etc. Terms and abbreviations used in this report are defined as follows. Terms Explanations BTC Abbreviation used as a currency unit of bitcoins FinTech A coined term combining Finance and Technology; Technologies and initiatives to create new services and businesses by utilizing ICT in the financial business Virtual currency / Cryptocurrency Bitcoins or other information whose value is recognized only on the Internet Exchange Services to exchange virtual currency, such as bitcoins, with another virtual currency or with legal currency, such as Japanese yen or US dollars; Some exchange offers services for contracts for difference, such as foreign exchange margin transactions (FX transactions) Consensus A series of procedures from approving a transaction as an official one and mutually confirming said results by using the following consensus algorithm Consensus algorithm Algorithm in general for mutually approving a distributed ledger using Proof of Work and Proof of Stake, etc. Token Virtual currency unique to blockchains; Virtual currency used for paying fees for asset management, etc. on blockchains is referred to …",
"title": ""
},
{
"docid": "neg:1840375_17",
"text": "In this paper, a low profile LLC resonant converter with two planar transformers is proposed for a slim SMPS (Switching Mode Power Supply). Design procedures and voltage gain characteristics on the proposed planar transformer and converter are described in detail. Two planar transformers applied to LLC resonant converter are connected in series at primary and in parallel by the center-tap winding at secondary. Based on the theoretical analysis and simulation results of the voltage gain characteristics, a 300W LLC resonant converter for LED TV power module is designed and tested.",
"title": ""
},
{
"docid": "neg:1840375_18",
"text": "The purpose of the study was to measure objectively the home use of the reciprocating gait orthosis (RGO) and the electrically augmented (hybrid) RGO. It was hypothesised that RGO use would increase following provision of functional electrical stimulation (FES). Five adult subjects participated in the study with spinal cord lesions ranging from C2 (incomplete) to T6. Selection criteria included active RGO use and suitability for electrical stimulation. Home RGO use was measured for up to 18 months by determining the mean number of steps taken per week. During this time patients were supplied with the hybrid system. Three alternatives for the measurement of steps taken were investigated: a commercial digital pedometer, a magnetically actuated counter and a heel contact switch linked to an electronic counter. The latter was found to be the most reliable system and was used for all measurements. Additional information on RGO use was acquired using three patient diaries administered throughout the study and before and after the provision of the hybrid system. Testing of the original hypothesis was complicated by problems in finding a reliable measurement tool and difficulties with data collection. However, the results showed that overall use of the RGO, whether with or without stimulation, is low. Statistical analysis of the step counter results was not realistic. No statistically significant change in RGO use was found between the patient diaries. The study suggests that the addition of electrical stimulation does not increase RGO use. The study highlights the problem of objectively measuring orthotic use in the home.",
"title": ""
}
] |
1840376 | The roles of brand community and community engagement in building brand trust on social media | [
{
"docid": "pos:1840376_0",
"text": "Social media based brand communities are communities initiated on the platform of social media. In this article, we explore whether brand communities based on social media (a special type of online brand communities) have positive effects on the main community elements and value creation practices in the communities as well as on brand trust and brand loyalty. A survey based empirical study with 441 respondents was conducted. The results of structural equation modeling show that brand communities established on social media have positive effects on community markers (i.e., shared consciousness, shared rituals and traditions, and obligations to society), which have positive effects on value creation practices (i.e., social networking, community engagement, impressions management, and brand use). Such communities could enhance brand loyalty through brand use and impression management practices. We show that brand trust has a full mediating role in converting value creation practices into brand loyalty. Implications for practice and future research opportunities are discussed.",
"title": ""
}
] | [
{
"docid": "neg:1840376_0",
"text": "In first encounters people quickly form impressions of each other’s personality and interpersonal attitude. We conducted a study to investigate how this transfers to first encounters between humans and virtual agents. In the study, subjects’ avatars approached greeting agents in a virtual museum rendered in both first and third person perspective. Each agent exclusively exhibited nonverbal immediacy cues (smile, gaze and proximity) during the approach. Afterwards subjects judged its personality (extraversion) and interpersonal attitude (hostility/friendliness). We found that within only 12.5 seconds of interaction subjects formed impressions of the agents based on observed behavior. In particular, proximity had impact on judgments of extraversion whereas smile and gaze on friendliness. These results held for the different camera perspectives. Insights on how the interpretations might change according to the user’s own personality are also provided.",
"title": ""
},
{
"docid": "neg:1840376_1",
"text": "It has long been known that storage of information in working memory suffers as a function of proactive interference. Here we review the results of experiments using approaches from cognitive neuroscience to reveal a pattern of brain activity that is a signature of proactive interference. Many of these results derive from a single paradigm that requires one to resolve interference from a previous experimental trial. The importance of activation in left inferior frontal cortex is shown repeatedly using this task and other tasks. We review a number of models that might account for the behavioral and imaging findings about proactive interference, raising questions about the adequacy of these models.",
"title": ""
},
{
"docid": "neg:1840376_2",
"text": "This paper introduces our submission to the 2nd Facial Landmark Localisation Competition. We present a deep architecture to directly detect facial landmarks without using face detection as an initialization. The architecture consists of two stages, a Basic Landmark Prediction Stage and a Whole Landmark Regression Stage. At the former stage, given an input image, the basic landmarks of all faces are detected by a sub-network of landmark heatmap and affinity field prediction. At the latter stage, the coarse canonical face and the pose can be generated by a Pose Splitting Layer based on the visible basic landmarks. According to its pose, each canonical state is distributed to the corresponding branch of the shape regression sub-networks for the whole landmark detection. Experimental results show that our method obtains promising results on the 300-W dataset, and achieves superior performances over the baselines of the semi-frontal and the profile categories in this competition.",
"title": ""
},
{
"docid": "neg:1840376_3",
"text": "Video game playing is a popular activity and its enjoyment among frequent players has been associated with absorption and immersion experiences. This paper examines how immersion in the video game environment can influence the player during the game and afterwards (including fantasies, thoughts, and actions). This is what is described as Game Transfer Phenomena (GTP). GTP occurs when video game elements are associated with real life elements triggering subsequent thoughts, sensations and/or player actions. To investigate this further, a total of 42 frequent video game players aged between 15 and 21 years old were interviewed. Thematic analysis showed that many players experienced GTP, where players appeared to integrate elements of video game playing into their real lives. These GTP were then classified as either intentional or automatic experiences. Results also showed that players used video games for interacting with others as a form of amusement, modeling or mimicking video game content, and daydreaming about video games. Furthermore, the findings demonstrate how video games triggered intrusive thoughts, sensations, impulses, reflexes, visual illusions, and dissociations. DOI: 10.4018/ijcbpl.2011070102 16 International Journal of Cyber Behavior, Psychology and Learning, 1(3), 15-33, July-September 2011 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. 24/7 activity (e.g., Ng & Weimer-Hastings, 2005; Chappell, Eatough, Davies, & Griffiths, 2006; Grüsser, Thalemann, & Griffiths, 2007). Today’s video games have evolved due to technological advance, resulting in high levels of realism and emotional design that include diversity, experimentation, and (perhaps in some cases) sensory overload. Furthermore, video games have been considered as fantasy triggers because they offer ‘what if’ scenarios (Baranowski, Buday, Thompson, & Baranowski, 2008). What if the player could become someone else? What if the player could inhabit an improbable world? What if the player could interact with fantasy characters or situations (Woolley, 1995)? Entertainment media content can be very effective in capturing the minds and eliciting emotions in the individual. Research about novels, films, fairy tales and television programs has shown that entertainment can generate emotions such as joy, awe, compassion, fear and anger (Oatley, 1999; Tan 1996; Valkenburg Cantor & Peeters, 2000, cited in Jansz et al., 2005). Video games also have the capacity to generate such emotions and have the capacity for players to become both immersed in, and dissociated from, the video game. Dissociation and Immersion It is clear that dissociation is a somewhat “fuzzy” concept as there is no clear accepted definition of what it actually constitutes (Griffiths, Wood, Parke, & Parke, 2006). Most would agree that dissociation is a form of altered state of consciousness. However, dissociative behaviours lie on a continuum and range from individuals losing track of time, feeling like they are someone else, blacking out, not recalling how they got somewhere or what they did, and being in a trance like state (Griffiths et al., 2006). Studies have found that dissociation is related to an extensive involvement in fantasizing, and daydreaming (Giesbrecht, Geraerts, & Merckelbach, 2007). 
Dissociative phenomena of the non-pathological type include absorption and imaginative involvement (Griffith et al., 2006) and are psychological phenomena that can occur during video game playing. Anyone can, to some degree, experience dissociative states in their daily lives (Giesbrecht et al., 2007). Furthermore, these states can happen episodically and can be situationally triggered (Griffiths et al., 2006). When people become engaged in games they may experience psychological absorption. More commonly known as ‘immersion’, this refers to when individual logical integration of thoughts, feelings and experiences is suspended (Funk, Chan, Brouwer, & Curtiss, 2006; Wood, Griffiths, & Parke, 2007). This can incur an altered state of consciousness such as altered time perception and change in degree of control over cognitive functioning (Griffiths et al., 2006). Video game enjoyment has been associated with absorption and immersion experiences (IJsselsteijn, Kort, de Poels, Jurgelionis, & Belotti, 2007). How an individual can get immersed in video games has been explained by the phenomenon of ‘flow’ (Csikszentmihalyi, 1988). Flow refers to the optimum experience a person achieves when performing an activity (e.g., video game playing) and may be induced, in part, by the structural characteristics of the activity itself. Structural characteristics of video games (i.e., the game elements that are incorporated into the game by the games designers) are usually based on a balance between skill and challenge (Wood et al., 2004; King, Delfabbro, & Griffiths, 2010), and help make playing video games an intrinsically rewarding activity (Csikszentmihalyi, 1988; King, et al. 2010). Studying Video Game Playing Studying the effects of video game playing requires taking in consideration four independent dimensions suggested by Gentile and Stone (2005); amount, content, form, and mechanism. The amount is understood as the time spent playing and gaming habits. Content refers to the message and topic delivered by the video game. Form focuses on the types of activity necessary to perform in the video game. The mechanism refers to the input-output devices used, which 17 more pages are available in the full version of this document, which may be purchased using the \"Add to Cart\" button on the product's webpage: www.igi-global.com/article/game-transfer-phenomena-videogame/58041?camid=4v1 This title is available in InfoSci-Journals, InfoSci-Journal Disciplines Communications and Social Science, InfoSciCommunications, Online Engagement, and Media eJournal Collection, InfoSci-Educational Leadership, Administration, and Technologies eJournal Collection, InfoSci-Healthcare Administration, Clinical Practice, and Bioinformatics eJournal Collection, InfoSci-Select, InfoSci-Journal Disciplines Library Science, Information Studies, and Education, InfoSci-Journal Disciplines Medicine, Healthcare, and Life Science. Recommend this product to your librarian: www.igi-global.com/e-resources/libraryrecommendation/?id=2",
"title": ""
},
{
"docid": "neg:1840376_4",
"text": "It is common for organizations to maintain multiple variants of a given business process, such as multiple sales processes for different products or multiple bookkeeping processes for different countries. Conventional business process modeling languages do not explicitly support the representation of such families of process variants. This gap triggered significant research efforts over the past decade, leading to an array of approaches to business process variability modeling. In general, each of these approaches extends a conventional process modeling language with constructs to capture customizable process models. A customizable process model represents a family of process variants in a way that a model of each variant can be derived by adding or deleting fragments according to customization options or according to a domain model. This survey draws up a systematic inventory of approaches to customizable process modeling and provides a comparative evaluation with the aim of identifying common and differentiating modeling features, providing criteria for selecting among multiple approaches, and identifying gaps in the state of the art. The survey puts into evidence an abundance of customizable process-modeling languages, which contrasts with a relative scarcity of available tool support and empirical comparative evaluations.",
"title": ""
},
{
"docid": "neg:1840376_5",
"text": "The differentiation of B lymphocytes in the bone marrow is guided by the surrounding microenvironment determined by cytokines, adhesion molecules, and the extracellular matrix. These microenvironmental factors are mainly provided by stromal cells. In this paper, we report the identification of a VCAM-1-positive stromal cell population by flow cytometry. This population showed the expression of cell surface markers known to be present on stromal cells (CD10, CD13, CD90, CD105) and had a fibroblastoid phenotype in vitro. Single cell RT-PCR analysis of its cytokine expression pattern revealed transcripts for haematopoietic cytokines important for either the early B lymphopoiesis like flt3L or the survival of long-lived plasma cells like BAFF or both processes like SDF-1. Whereas SDF-1 transcripts were detectable in all VCAM-1-positive cells, flt3L and BAFF were only expressed by some cells suggesting the putative existence of different subpopulations with distinct functional properties. In summary, the VCAM-1-positive cell population seems to be a candidate stromal cell population supporting either developing B cells and/or long-lived plasma cells in human bone marrow.",
"title": ""
},
{
"docid": "neg:1840376_6",
"text": "While an al dente character of 5G is yet to emerge, network densification, miscellany of node types, split of control and data plane, network virtualization, heavy and localized cache, infrastructure sharing, concurrent operation at multiple frequency bands, simultaneous use of different medium access control and physical layers, and flexible spectrum allocations can be envisioned as some of the potential ingredients of 5G. It is not difficult to prognosticate that with such a conglomeration of technologies, the complexity of operation and OPEX can become the biggest challenge in 5G. To cope with similar challenges in the context of 3G and 4G networks, recently, self-organizing networks, or SONs, have been researched extensively. However, the ambitious quality of experience requirements and emerging multifarious vision of 5G, and the associated scale of complexity and cost, demand a significantly different, if not totally new, approach toward SONs in order to make 5G technically as well as financially feasible. In this article we first identify what challenges hinder the current self-optimizing networking paradigm from meeting the requirements of 5G. We then propose a comprehensive framework for empowering SONs with big data to address the requirements of 5G. Under this framework we first characterize big data in the context of future mobile networks, identifying its sources and future utilities. We then explicate the specific machine learning and data analytics tools that can be exploited to transform big data into the right data that provides a readily useable knowledge base to create end-to-end intelligence of the network. We then explain how a SON engine can build on the dynamic models extractable from the right data. The resultant dynamicity of a big data empowered SON (BSON) makes it more agile and can essentially transform the SON from being a reactive to proactive paradigm and hence act as a key enabler for 5G's extremely low latency requirements. Finally, we demonstrate the key concepts of our proposed BSON framework through a case study of a problem that the classic 3G/4G SON fails to solve.",
"title": ""
},
{
"docid": "neg:1840376_7",
"text": "Objective:To compare fracture rates in four diet groups (meat eaters, fish eaters, vegetarians and vegans) in the Oxford cohort of the European Prospective Investigation into Cancer and Nutrition (EPIC-Oxford).Design:Prospective cohort study of self-reported fracture risk at follow-up.Setting:The United Kingdom.Subjects:A total of 7947 men and 26 749 women aged 20–89 years, including 19 249 meat eaters, 4901 fish eaters, 9420 vegetarians and 1126 vegans, recruited by postal methods and through general practice surgeries.Methods:Cox regression.Results:Over an average of 5.2 years of follow-up, 343 men and 1555 women reported one or more fractures. Compared with meat eaters, fracture incidence rate ratios in men and women combined adjusted for sex, age and non-dietary factors were 1.01 (95% CI 0.88–1.17) for fish eaters, 1.00 (0.89–1.13) for vegetarians and 1.30 (1.02–1.66) for vegans. After further adjustment for dietary energy and calcium intake the incidence rate ratio among vegans compared with meat eaters was 1.15 (0.89–1.49). Among subjects consuming at least 525 mg/day calcium the corresponding incidence rate ratios were 1.05 (0.90–1.21) for fish eaters, 1.02 (0.90–1.15) for vegetarians and 1.00 (0.69–1.44) for vegans.Conclusions:In this population, fracture risk was similar for meat eaters, fish eaters and vegetarians. The higher fracture risk in the vegans appeared to be a consequence of their considerably lower mean calcium intake. An adequate calcium intake is essential for bone health, irrespective of dietary preferences.Sponsorship:The EPIC-Oxford study is supported by The Medical Research Council and Cancer Research UK.",
"title": ""
},
{
"docid": "neg:1840376_8",
"text": "Sentiment classification has undergone significant development in recent years. However, most existing studies assume the balance between negative and positive samples, which may not be true in reality. In this paper, we investigate imbalanced sentiment classification instead. In particular, a novel clustering-based stratified under-sampling framework and a centroid-directed smoothing strategy are proposed to address the imbalanced class and feature distribution problems respectively. Evaluation across different datasets shows the effectiveness of both the under-sampling framework and the smoothing strategy in handling the imbalanced problems in real sentiment classification applications.",
"title": ""
},
{
"docid": "neg:1840376_9",
"text": "This paper reports on a qualitative study of journal entries written by students in six health professions participating in the Interprofessional Health Mentors program at the University of British Columbia, Canada. The study examined (1) what health professions students learn about professional language and communication when given the opportunity, in an interprofessional group with a patient or client, to explore the uses, meanings, and effects of common health care terms, and (2) how health professional students write about their experience of discussing common health care terms, and what this reveals about how students see their development of professional discourse and participation in a professional discourse community. Using qualitative thematic analysis to address the first question, the study found that discussion of these health care terms provoked learning and reflection on how words commonly used in one health profession can be understood quite differently in other health professions, as well as on how health professionals' language choices may be perceived by patients and clients. Using discourse analysis to address the second question, the study further found that many of the students emphasized accuracy and certainty in language through clear definitions and intersubjective agreement. However, when prompted by the discussion they were willing to consider other functions and effects of language.",
"title": ""
},
{
"docid": "neg:1840376_10",
"text": "This paper presents a Linux kernel module, DigSig, which helps system administrators control Executable and Linkable Format (ELF) binary execution and library loading based on the presence of a valid digital signature. By preventing attackers from replacing libraries and sensitive, privileged system daemons with malicious code, DigSig increases the difficulty of hiding illicit activities such as access to compromised systems. DigSig provides system administrators with an efficient tool which mitigates the risk of running malicious code at run time. This tool adds extra functionality previously unavailable for the Linux operating system: kernel level RSA signature verification with caching and revocation of signatures.",
"title": ""
},
{
"docid": "neg:1840376_11",
"text": "A hybrid particle swarm optimization (PSO) for the job shop problem (JSP) is proposed in this paper. In previous research, PSO particles search solutions in a continuous solution space. Since the solution space of the JSP is discrete, we modified the particle position representation, particle movement, and particle velocity to better suit PSO for the JSP. We modified the particle position based on preference list-based representation, particle movement based on swap operator, and particle velocity based on the tabu list concept in our algorithm. Giffler and Thompson’s heuristic is used to decode a particle position into a schedule. Furthermore, we applied tabu search to improve the solution quality. The computational results show that the modified PSO performs better than the original design, and that the hybrid PSO is better than other traditional metaheuristics. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840376_12",
"text": "Navigation research is attracting renewed interest with the advent of learning-based methods. However, this new line of work is largely disconnected from well-established classic navigation approaches. In this paper, we take a step towards coordinating these two directions of research. We set up classic and learning-based navigation systems in common simulated environments and thoroughly evaluate them in indoor spaces of varying complexity, with access to different sensory modalities. Additionally, we measure human performance in the same environments. We find that a classic pipeline, when properly tuned, can perform very well in complex cluttered environments. On the other hand, learned systems can operate more robustly with a limited sensor suite. Both approaches are still far from human-level performance.",
"title": ""
},
{
"docid": "neg:1840376_13",
"text": "Tahoe is a system for secure, distributed storage. It uses capabilities for access control, cryptography for confidentiality and integrity, and erasure coding for fault-tolerance. It has been deployed in a commercial backup service and is currently operational. The implementation is Open Source.",
"title": ""
},
{
"docid": "neg:1840376_14",
"text": "The local field potential (LFP) reflects activity of many neurons in the vicinity of the recording electrode and is therefore useful for studying local network dynamics. Much of the nature of the LFP is, however, still unknown. There are, for instance, contradicting reports on the spatial extent of the region generating the LFP. Here, we use a detailed biophysical modeling approach to investigate the size of the contributing region by simulating the LFP from a large number of neurons around the electrode. We find that the size of the generating region depends on the neuron morphology, the synapse distribution, and the correlation in synaptic activity. For uncorrelated activity, the LFP represents cells in a small region (within a radius of a few hundred micrometers). If the LFP contributions from different cells are correlated, the size of the generating region is determined by the spatial extent of the correlated activity.",
"title": ""
},
{
"docid": "neg:1840376_15",
"text": "A 10-bit LCD column driver, consisting of piecewise linear digital to analog converters (DACs), is proposed. Piecewise linear compensation is utilized to reduce the die area and to increase the effective color depth. The data conversion is carried out by a resistor string type DAC (R-DAC) and a charge sharing DAC, which are used for the most significant bit and least significant bit data conversions, respectively. Gamma correction voltages are applied to the R-DAC to lit the inverse of the liquid crystal trans-mittance-voltage characteristic. The gamma correction can also be digitally fine-tuned in the timing controller or column drivers. A prototype 10-bit LCD column driver implemented in a 0.35-mum CMOS technology demonstrates that the settling time is within 3 mus and the average die size per channel is 0.063 mm2, smaller than those of column drivers based exclusively on R-DACs.",
"title": ""
},
{
"docid": "neg:1840376_16",
"text": "Applying Data Mining (DM) in education is an emerging interdisciplinary research field also known as Educational Data Mining (EDM). Ensemble techniques have been successfully applied in the context of supervised learning to increase the accuracy and stability of prediction. In this paper, we present a hybrid procedure based on ensemble classification and clustering that enables academicians to firstly predict students’ academic performance and then place each student in a well-defined cluster for further advising. Additionally, it endows instructors an anticipated estimation of their students’ capabilities during team forming and in-class participation. For ensemble classification, we use multiple classifiers (Decision Trees-J48, Naïve Bayes and Random Forest) to improve the quality of student data by eliminating noisy instances, and hence improving predictive accuracy. We then use the approach of bootstrap (sampling with replacement) averaging, which consists of running k-means clustering algorithm to convergence of the training data and averaging similar cluster centroids to obtain a single model. We empirically compare our technique with other ensemble techniques on real world education datasets.",
"title": ""
},
{
"docid": "neg:1840376_17",
"text": "BACKROUND\nSuperior Mesenteric Artery Syndrome (SMAS) is a rare disorder caused by compression of the third portion of the duodenum by the SMA. Once a conservative approach fails, usual surgical strategies include Duodenojejunostomy and Strong's procedure. The latter avoids potential anastomotic risks and complications. Robotic Strong's procedure (RSP) combines both the benefits of a minimal invasive approach and also enchased robotic accuracy and efficacy.\n\n\nMETHODS\nFor a young girl who was unsuccessfully treated conservatively, the paper describes the RSP surgical technique. To the authors' knowledge, this is the first report in the literature.\n\n\nRESULTS\nMinimal blood loss, short operative time, short hospital stay and early recovery were the short-term benefits. Significant weight gain was achieved three months after the surgery.\n\n\nCONCLUSION\nBased on primary experience, it is suggested that RSP is a very effective alternative in treating SMAS.",
"title": ""
},
{
"docid": "neg:1840376_18",
"text": "The problem of large-scale image search has been traditionally addressed with the bag-of-visual-words (BOV). In this article, we propose to use as an alternative the Fisher kernel framework. We first show why the Fisher representation is well-suited to the retrieval problem: it describes an image by what makes it different from other images. One drawback of the Fisher vector is that it is high-dimensional and, as opposed to the BOV, it is dense. The resulting memory and computational costs do not make Fisher vectors directly amenable to large-scale retrieval. Therefore, we compress Fisher vectors to reduce their memory footprint and speed-up the retrieval. We compare three binarization approaches: a simple approach devised for this representation and two standard compression techniques. We show on two publicly available datasets that compressed Fisher vectors perform very well using as little as a few hundreds of bits per image, and significantly better than a very recent compressed BOV approach.",
"title": ""
},
{
"docid": "neg:1840376_19",
"text": "Logo detection is a challenging task with many practical applications in our daily life and intellectual property protection. The two main obstacles here are lack of public logo datasets and effective design of logo detection structure. In this paper, we first manually collected and annotated 6,400 images and mix them with FlickrLogo-32 dataset, forming a larger dataset. Secondly, we constructed Faster R-CNN frameworks with several widely used classification models for logo detection. Furthermore, the transfer learning method was introduced in the training process. Finally, clustering was used to guarantee suitable hyper-parameters and more precise anchors of RPN. Experimental results show that the proposed framework outper-forms the state of-the-art methods with a noticeable margin.",
"title": ""
}
] |
1840377 | Maximum Power Point Tracking for PV system under partial shading condition via particle swarm optimization | [
{
"docid": "pos:1840377_0",
"text": "Photovolatic systems normally use a maximum power point tracking (MPPT) technique to continuously deliver the highest possible power to the load when variations in the insolation and temperature occur. It overcomes the problem of mismatch between the solar arrays and the given load. A simple method of tracking the maximum power points (MPP’s) and forcing the system to operate close to these points is presented. The principle of energy conservation is used to derive the largeand small-signal model and transfer function. By using the proposed model, the drawbacks of the state-space-averaging method can be overcome. The TI320C25 digital signal processor (DSP) was used to implement the proposed MPPT controller, which controls the dc/dc converter in the photovoltaic system. Simulations and experimental results show excellent performance.",
"title": ""
}
] | [
{
"docid": "neg:1840377_0",
"text": "This paper presents automatic parallel parking for car-like vehicle, with highlights on a path planning algorithm for arbitrary initial angle using two tangential arcs of different radii. The algorithm is divided into three parts. Firstly, a simple kinematic model of the vehicle is established based on Ackerman steering geometry; secondly, not only a minimal size of the parking space is analyzed based on the size and the performance of the vehicle but also an appropriate target point is chosen based on the size of the parking space and the vehicle; Finally, a path is generated based on two tangential arcs of different radii. The simulation results show that the feasibility of the proposed algorithm.",
"title": ""
},
{
"docid": "neg:1840377_1",
"text": "BACKGROUND\nSchizophrenia causes great suffering for patients and families. Today, patients are treated with medications, but unfortunately many still have persistent symptoms and an impaired quality of life. During the last 20 years of research in cognitive behavioral therapy (CBT) for schizophrenia, evidence has been found that the treatment is good for patients but it is not satisfactory enough, and more studies are being carried out hopefully to achieve further improvement.\n\n\nPURPOSE\nClinical trials and meta-analyses are being used to try to prove the efficacy of CBT. In this article, we summarize recent research using the cognitive model for people with schizophrenia.\n\n\nMETHODS\nA systematic search was carried out in PubMed (Medline). Relevant articles were selected if they contained a description of cognitive models for schizophrenia or psychotic disorders.\n\n\nRESULTS\nThere is now evidence that positive and negative symptoms exist in a continuum, from normality (mild form and few symptoms) to fully developed disease (intensive form with many symptoms). Delusional patients have reasoning bias such as jumping to conclusions, and those with hallucination have impaired self-monitoring and experience their own thoughts as voices. Patients with negative symptoms have negative beliefs such as low expectations regarding pleasure and success. In the entire patient group, it is common to have low self-esteem.\n\n\nCONCLUSIONS\nThe cognitive model integrates very well with the aberrant salience model. It takes into account neurobiology, cognitive, emotional and social processes. The therapist uses this knowledge when he or she chooses techniques for treatment of patients.",
"title": ""
},
{
"docid": "neg:1840377_2",
"text": "Personality testing is a popular method that used to be commonly employed in selection decisions in organizational settings. However, it is also a controversial practice according to a number researcher who claims that especially explicit measures of personality may be prone to the negative effects of faking and response distortion. The first aim of the present paper is to summarize Morgeson, Morgeson, Campion, Dipboye, Hollenbeck, Murphy and Schmitt’s paper that discussed the limitations of personality testing for performance ratings in relation to its basic conclusions about faking and response distortion. Secondly, the results of Rosse, Stecher, Miller and Levin’s study that investigated the effects of faking in personality testing on selection decisions will be discussed in detail. Finally, recent research findings related to implicit personality measures will be introduced along with the examples of the results related to the implications of those measures for response distortion in personality research and the suggestions for future research.",
"title": ""
},
{
"docid": "neg:1840377_3",
"text": "We undertook a meta-analysis of six Crohn's disease genome-wide association studies (GWAS) comprising 6,333 affected individuals (cases) and 15,056 controls and followed up the top association signals in 15,694 cases, 14,026 controls and 414 parent-offspring trios. We identified 30 new susceptibility loci meeting genome-wide significance (P < 5 × 10−8). A series of in silico analyses highlighted particular genes within these loci and, together with manual curation, implicated functionally interesting candidate genes including SMAD3, ERAP2, IL10, IL2RA, TYK2, FUT2, DNMT3A, DENND1B, BACH2 and TAGAP. Combined with previously confirmed loci, these results identify 71 distinct loci with genome-wide significant evidence for association with Crohn's disease.",
"title": ""
},
{
"docid": "neg:1840377_4",
"text": "We present a vertical-silicon-nanowire-based p-type tunneling field-effect transistor (TFET) using CMOS-compatible process flow. Following our recently reported n-TFET , a low-temperature dopant segregation technique was employed on the source side to achieve steep dopant gradient, leading to excellent tunneling performance. The fabricated p-TFET devices demonstrate a subthreshold swing (SS) of 30 mV/decade averaged over a decade of drain current and an Ion/Ioff ratio of >; 105. Moreover, an SS of 50 mV/decade is maintained for three orders of drain current. This demonstration completes the complementary pair of TFETs to implement CMOS-like circuits.",
"title": ""
},
{
"docid": "neg:1840377_5",
"text": "We introduce Evenly Cascaded convolutional Network (ECN), a neural network taking inspiration from the cascade algorithm of wavelet analysis. ECN employs two feature streams - a low-level and high-level steam. At each layer these streams interact, such that low-level features are modulated using advanced perspectives from the high-level stream. ECN is evenly structured through resizing feature map dimensions by a consistent ratio, which removes the burden of ad-hoc specification of feature map dimensions. ECN produces easily interpretable features maps, a result whose intuition can be understood in the context of scale-space theory. We demonstrate that ECN’s design facilitates the training process through providing easily trainable shortcuts. We report new state-of-the-art results for small networks, without the need for additional treatment such as pruning or compression - a consequence of ECN’s simple structure and direct training. A 6-layered ECN design with under 500k parameters achieves 95.24% and 78.99% accuracy on CIFAR-10 and CIFAR-100 datasets, respectively, outperforming the current state-of-the-art on small parameter networks, and a 3 million parameter ECN produces results competitive to the state-of-the-art.",
"title": ""
},
{
"docid": "neg:1840377_6",
"text": "Advanced Driver Assistance Systems (ADAS) based on video camera tends to be generalized in today's automotive. However, if most of these systems perform nicely in good weather conditions, they perform very poorly under adverse weather particularly under rain. We present a novel approach that aims at detecting raindrops on a car windshield using only images from an in-vehicle camera. Based on the photometric properties of raindrops, the algorithm relies on image processing technics to highlight raindrops. Its results can be further used for image restoration and vision enhancement and hence it is a valuable tool for ADAS.",
"title": ""
},
{
"docid": "neg:1840377_7",
"text": "The presentation of news articles to meet research needs has traditionally been a document-centric process. Yet users often want to monitor developing news stories based on an event, rather than by examining an exhaustive list of retrieved documents. In this work, we illustrate a news retrieval system, eventNews, and an underlying algorithm which is event-centric. Through this system, news articles are clustered around a single news event or an event and its sub-events. The algorithm presented can leverage the creation of new Reuters stories and their compact labels as seed documents for the clustering process. The system is configured to generate top-level clusters for news events based on an editorially supplied topical label, known as a ‘slugline,’ and to generate sub-topic-focused clusters based on the algorithm. The system uses an agglomerative clustering algorithm to gather and structure documents into distinct result sets. Decisions on whether to merge related documents or clusters are made according to the similarity of evidence derived from two distinct sources, one, relying on a digital signature based on the unstructured text in the document, the other based on the presence of named entity tags that have been assigned to the document by a named entity tagger, in this case Thomson Reuters’ Calais engine. Copyright c © 2016 for the individual papers by the paper’s authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors. In: M. Martinez, U. Kruschwitz, G. Kazai, D. Corney, F. Hopfgartner, R. Campos and D. Albakour (eds.): Proceedings of the NewsIR’16 Workshop at ECIR, Padua, Italy, 20-March-2016, published at http://ceur-ws.org",
"title": ""
},
{
"docid": "neg:1840377_8",
"text": "Previous research on relation classification has verified the effectiveness of using dependency shortest paths or subtrees. In this paper, we further explore how to make full use of the combination of these dependency information. We first propose a new structure, termed augmented dependency path (ADP), which is composed of the shortest dependency path between two entities and the subtrees attached to the shortest path. To exploit the semantic representation behind the ADP structure, we develop dependency-based neural networks (DepNN): a recursive neural network designed to model the subtrees, and a convolutional neural network to capture the most important features on the shortest path. Experiments on the SemEval-2010 dataset show that our proposed method achieves state-of-art results.",
"title": ""
},
{
"docid": "neg:1840377_9",
"text": "This chapter gives an extended introduction to the lightweight profiles OWL EL, OWL QL, and OWL RL of the Web Ontology Language OWL. The three ontology language standards are sublanguages of OWL DL that are restricted in ways that significantly simplify ontological reasoning. Compared to OWL DL as a whole, reasoning algorithms for the OWL profiles show higher performance, are easier to implement, and can scale to larger amounts of data. Since ontological reasoning is of great importance for designing and deploying OWL ontologies, the profiles are highly attractive for many applications. These advantages come at a price: various modelling features of OWL are not available in all or some of the OWL profiles. Moreover, the profiles are mutually incomparable in the sense that each of them offers a combination of features that is available in none of the others. This chapter provides an overview of these differences and explains why some of them are essential to retain the desired properties. To this end, we recall the relationship between OWL and description logics (DLs), and show how each of the profiles is typically treated in reasoning algorithms.",
"title": ""
},
{
"docid": "neg:1840377_10",
"text": "Due to increasing number of internet users, popularity of Broadband Internet also increasing. Hence the connection cost should be decrease due to Wi Fi connectivity and built-in sensors in devices as well the maximum number of devices should be connected through a common medium. To meet all these requirements, the technology so called Internet of Things is evolved. Internet of Things (IoT) can be considered as a connection of computing devices like smart phones, coffee maker, washing machines, wearable device with an internet. IoT create network and connect \"things\" and people together by creating relationship between either people-people, people-things or things-things. As the number of device connection is increased, it increases the Security risk. Security is the biggest issue for IoT at any companies across the globe. Furthermore, privacy and data sharing can again be considered as a security concern for IoT. Companies, those who use IoT technique, need to find a way to store, track, analyze and make sense of the large amounts of data that will be generated. Few security techniques of IoT are necessary to implement to protect your confidential and important data as well for device protection through some internet security threats.",
"title": ""
},
{
"docid": "neg:1840377_11",
"text": "Septic arthritis/tenosynovitis in the horse can have life-threatening consequences. The purpose of this cross-sectional retrospective study was to describe ultrasound characteristics of septic arthritis/tenosynovitis in a group of horses. Diagnosis of septic arthritis/tenosynovitis was based on historical and clinical findings as well as the results of the synovial fluid analysis and/or positive synovial culture. Ultrasonographic findings recorded were degree of joint/sheath effusion, degree of synovial membrane thickening, echogenicity of the synovial fluid, and presence of hyperechogenic spots and fibrinous loculations. Ultrasonographic findings were tested for dependence on the cause of sepsis, time between admission and beginning of clinical signs, and the white blood cell counts in the synovial fluid. Thirty-eight horses with confirmed septic arthritis/tenosynovitis of 43 joints/sheaths were included. Degree of effusion was marked in 81.4% of cases, mild in 16.3%, and absent in 2.3%. Synovial thickening was mild in 30.9% of cases and moderate/severe in 69.1%. Synovial fluid was anechogenic in 45.2% of cases and echogenic in 54.8%. Hyperechogenic spots were identified in 32.5% of structures and fibrinous loculations in 64.3%. Relationships between the degree of synovial effusion, degree of the synovial thickening, presence of fibrinous loculations, and the time between admission and beginning of clinical signs were identified, as well as between the presence of fibrinous loculations and the cause of sepsis (P ≤ 0.05). Findings indicated that ultrasonographic findings of septic arthritis/tenosynovitis may vary in horses, and may be influenced by time between admission and beginning of clinical signs.",
"title": ""
},
{
"docid": "neg:1840377_12",
"text": "Dynamically changing (reconfiguring) the membership of a replicated distributed system while preserving data consistency and system availability is a challenging problem. In this paper, we show that reconfiguration can be simplified by taking advantage of certain properties commonly provided by Primary/Backup systems. We describe a new reconfiguration protocol, recently implemented in Apache Zookeeper. It fully automates configuration changes and minimizes any interruption in service to clients while maintaining data consistency. By leveraging the properties already provided by Zookeeper our protocol is considerably simpler than state of the art.",
"title": ""
},
{
"docid": "neg:1840377_13",
"text": "High utilization of cargo volume is an essential factor in the success of modern enterprises in the market. Although mathematical models have been presented for container loading problems in the literature, there is still a lack of studies that consider practical constraints. In this paper, a Mixed Integer Linear Programming is developed for the problem of packing a subset of rectangular boxes inside a container such that the total value of the packed boxes is maximized while some realistic constraints, such as vertical stability, are considered. The packing is orthogonal, and the boxes can be freely rotated into any of the six orientations. Moreover, a sequence triple-based solution methodology is proposed, simulated annealing is used as modeling technique, and the situation where some boxes are preplaced in the container is investigated. These preplaced boxes represent potential obstacles. Numerical experiments are conducted for containers with and without obstacles. The results show that the simulated annealing approach is successful and can handle large number of packing instances.",
"title": ""
},
{
"docid": "neg:1840377_14",
"text": "In recent years many popular data visualizations have emerged that are created largely by designers whose main area of expertise is not computer science. Designers generate these visualizations using a handful of design tools and environments. To better inform the development of tools intended for designers working with data, we set out to understand designers' challenges and perspectives. We interviewed professional designers, conducted observations of designers working with data in the lab, and observed designers working with data in team settings in the wild. A set of patterns emerged from these observations from which we extract a number of themes that provide a new perspective on design considerations for visualization tool creators, as well as on known engineering problems.",
"title": ""
},
{
"docid": "neg:1840377_15",
"text": "Memories today expose an all-or-nothing correctness model that incurs significant costs in performance, energy, area, and design complexity. But not all applications need high-precision storage for all of their data structures all of the time. This article proposes mechanisms that enable applications to store data approximately and shows that doing so can improve the performance, lifetime, or density of solid-state memories. We propose two mechanisms. The first allows errors in multilevel cells by reducing the number of programming pulses used to write them. The second mechanism mitigates wear-out failures and extends memory endurance by mapping approximate data onto blocks that have exhausted their hardware error correction resources. Simulations show that reduced-precision writes in multilevel phase-change memory cells can be 1.7 × faster on average and using failed blocks can improve array lifetime by 23% on average with quality loss under 10%.",
"title": ""
},
{
"docid": "neg:1840377_16",
"text": "Since 2012, citizens in Alaska, Colorado, Oregon, and Washington have voted to legalize the recreational use of marijuana by adults. Advocates of legalization have argued that prohibition wastes scarce law enforcement resources by selectively arresting minority users of a drug that has fewer adverse health effects than alcohol.1,2 It would be better, they argue, to legalize, regulate, and tax marijuana, like alcohol.3 Opponents of legalization argue that it will increase marijuana use among youth because it will make marijuana more available at a cheaper price and reduce the perceived risks of its use.4 Cerdá et al5 have assessed these concerns by examining the effects of marijuana legalization in Colorado and Washington on attitudes toward marijuana and reported marijuana use among young people. They used surveys from Monitoring the Future between 2010 and 2015 to examine changes in the perceived risks of occasional marijuana use and self-reported marijuana use in the last 30 days among students in eighth, 10th, and 12th grades in Colorado and Washington before and after legalization. They compared these changes with changes among students in states in the contiguous United States that had not legalized marijuana (excluding Oregon, which legalized in 2014). The perceived risks of using marijuana declined in all states, but there was a larger decline in perceived risks and a larger increase in marijuana use in the past 30 days among eighth and 10th graders from Washington than among students from other states. They did not find any such differences between students in Colorado and students in other US states that had not legalized, nor did they find any of these changes in 12th graders in Colorado or Washington. If the changes observed in Washington are attributable to legalization, why were there no changes found in Colorado? The authors suggest that this may have been because Colorado’s medical marijuana laws were much more liberal before legalization than those in Washington. After 2009, Colorado permitted medical marijuana to be supplied through for-profit dispensaries and allowed advertising of medical marijuana products. This hypothesisissupportedbyotherevidencethattheperceivedrisks of marijuana use decreased and marijuana use increased among young people in Colorado after these changes in 2009.6",
"title": ""
},
{
"docid": "neg:1840377_17",
"text": "Supernumerary or accessory nostrils are a very rare type of congenital nasal anomaly, with only a few cases reported in the literature. They can be associated with such malformations as facial clefts and they can be unilateral or bilateral, with most cases reported being unilateral. The accessory nostril may or may not communicate with the ipsilateral nasal cavity, probably depending on the degree of embryological progression of the anomaly. A case of simple supernumerary left nostril with no nasal cavity communication and with a normally developed nose is presented. The surgical treatment is described and the different speculative theories related to the embryogenesis of supernumerary nostrils are also reviewed.",
"title": ""
},
{
"docid": "neg:1840377_18",
"text": "Documents come naturally with structure: a section contains paragraphs which itself contains sentences; a blog page contains a sequence of comments and links to related blogs. Structure, of course, implies something about shared topics. In this paper we take the simplest form of structure, a document consisting of multiple segments, as the basis for a new form of topic model. To make this computationally feasible, and to allow the form of collapsed Gibbs sampling that has worked well to date with topic models, we use the marginalized posterior of a two-parameter Poisson-Dirichlet process (or Pitman-Yor process) to handle the hierarchical modelling. Experiments using either paragraphs or sentences as segments show the method significantly outperforms standard topic models on either whole document or segment, and previous segmented models, based on the held-out perplexity measure.",
"title": ""
},
{
"docid": "neg:1840377_19",
"text": "Ranking and scoring are ubiquitous. We consider the setting in which an institution, called a ranker, evaluates a set of individuals based on demographic, behavioral or other characteristics. The final output is a ranking that represents the relative quality of the individuals. While automatic and therefore seemingly objective, rankers can, and often do, discriminate against individuals and systematically disadvantage members of protected groups. This warrants a careful study of the fairness of a ranking scheme, to enable data science for social good applications, among others.\n In this paper we propose fairness measures for ranked outputs. We develop a data generation procedure that allows us to systematically control the degree of unfairness in the output, and study the behavior of our measures on these datasets. We then apply our proposed measures to several real datasets, and detect cases of bias. Finally, we show preliminary results of incorporating our ranked fairness measures into an optimization framework, and show potential for improving fairness of ranked outputs while maintaining accuracy.\n The code implementing all parts of this work is publicly available at https://github.com/DataResponsibly/FairRank.",
"title": ""
}
] |
1840378 | Deep Q-learning From Demonstrations | [
{
"docid": "pos:1840378_0",
"text": "Model-free episodic reinforcement learning problems define the environment reward with functions that often provide only sparse information throughout the task. Consequently, agents are not given enough feedback about the fitness of their actions until the task ends with success or failure. Previous work addresses this problem with reward shaping. In this paper we introduce a novel approach to improve modelfree reinforcement learning agents’ performance with a three step approach. Specifically, we collect demonstration data, use the data to recover a linear function using inverse reinforcement learning and we use the recovered function for potential-based reward shaping. Our approach is model-free and scalable to high dimensional domains. To show the scalability of our approach we present two sets of experiments in a two dimensional Maze domain, and the 27 dimensional Mario AI domain. We compare the performance of our algorithm to previously introduced reinforcement learning from demonstration algorithms. Our experiments show that our approach outperforms the state-of-the-art in cumulative reward, learning rate and asymptotic performance.",
"title": ""
}
] | [
{
"docid": "neg:1840378_0",
"text": "Vincent Larivière École de bibliothéconomie et des sciences de l’information, Université de Montréal, C.P. 6128, Succ. CentreVille, Montréal, QC H3C 3J7, Canada, and Observatoire des Sciences et des Technologies (OST), Centre Interuniversitaire de Recherche sur la Science et la Technologie (CIRST), Université du Québec à Montréal, CP 8888, Succ. Centre-Ville, Montréal, QC H3C 3P8, Canada. E-mail: vincent.lariviere@umontreal.ca",
"title": ""
},
{
"docid": "neg:1840378_1",
"text": "English vocabulary learning and ubiquitous learning have separately received considerable attention in recent years. However, research on English vocabulary learning in ubiquitous learning contexts has been less studied. In this study, we develop a ubiquitous English vocabulary learning (UEVL) system to assist students in experiencing a systematic vocabulary learning process in which ubiquitous technology is used to develop the system, and video clips are used as the material. Afterward, the technology acceptance model and partial least squares approach are used to explore students’ perspectives on the UEVL system. The results indicate that (1) both the system characteristics and the material characteristics of the UEVL system positively and significantly influence the perspectives of all students on the system; (2) the active students are interested in perceived usefulness; (3) the passive students are interested in perceived ease of use. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840378_2",
"text": "Keshif is an open-source, web-based data exploration environment that enables data analytics novices to create effective visual and interactive dashboards and explore relations with minimal learning time, and data analytics experts to explore tabular data in multiple perspectives rapidly with minimal setup time. In this paper, we present a high-level overview of the exploratory features and design characteristics of Keshif, as well as its API and a selection of its implementation specifics. We conclude with a discussion of its use as an open-source project.",
"title": ""
},
{
"docid": "neg:1840378_3",
"text": "Non-frontal lip views contain useful information which can be used to enhance the performance of frontal view lipreading. However, the vast majority of recent lipreading works, including the deep learning approaches which significantly outperform traditional approaches, have focused on frontal mouth images. As a consequence, research on joint learning of visual features and speech classification from multiple views is limited. In this work, we present an end-to-end multi-view lipreading system based on Bidirectional Long-Short Memory (BLSTM) networks. To the best of our knowledge, this is the first model which simultaneously learns to extract features directly from the pixels and performs visual speech classification from multiple views and also achieves state-of-the-art performance. The model consists of multiple identical streams, one for each view, which extract features directly from different poses of mouth images. The temporal dynamics in each stream/view are modelled by a BLSTM and the fusion of multiple streams/views takes place via another BLSTM. An absolute average improvement of 3% and 3.8% over the frontal view performance is reported on the OuluVS2 database when the best two (frontal and profile) and three views (frontal, profile, 45◦) are combined, respectively. The best three-view model results in a 10.5% absolute improvement over the current multi-view state-of-the-art performance on OuluVS2, without using external databases for training, achieving a maximum classification accuracy of 96.9%.",
"title": ""
},
{
"docid": "neg:1840378_4",
"text": "OBJECTIVE\nTo estimate the current prevalence of limb loss in the United States and project the future prevalence to the year 2050.\n\n\nDESIGN\nEstimates were constructed using age-, sex-, and race-specific incidence rates for amputation combined with age-, sex-, and race-specific assumptions about mortality. Incidence rates were derived from the 1988 to 1999 Nationwide Inpatient Sample of the Healthcare Cost and Utilization Project, corrected for the likelihood of reamputation among those undergoing amputation for vascular disease. Incidence rates were assumed to remain constant over time and applied to historic mortality and population data along with the best available estimates of relative risk, future mortality, and future population projections. To investigate the sensitivity of our projections to increasing or decreasing incidence, we developed alternative sets of estimates of limb loss related to dysvascular conditions based on assumptions of a 10% or 25% increase or decrease in incidence of amputations for these conditions.\n\n\nSETTING\nCommunity, nonfederal, short-term hospitals in the United States.\n\n\nPARTICIPANTS\nPersons who were discharged from a hospital with a procedure code for upper-limb or lower-limb amputation or diagnosis code of traumatic amputation.\n\n\nINTERVENTIONS\nNot applicable.\n\n\nMAIN OUTCOME MEASURES\nPrevalence of limb loss by age, sex, race, etiology, and level in 2005 and projections to the year 2050.\n\n\nRESULTS\nIn the year 2005, 1.6 million persons were living with the loss of a limb. Of these subjects, 42% were nonwhite and 38% had an amputation secondary to dysvascular disease with a comorbid diagnosis of diabetes mellitus. It is projected that the number of people living with the loss of a limb will more than double by the year 2050 to 3.6 million. If incidence rates secondary to dysvascular disease can be reduced by 10%, this number would be lowered by 225,000.\n\n\nCONCLUSIONS\nOne in 190 Americans is currently living with the loss of a limb. Unchecked, this number may double by the year 2050.",
"title": ""
},
{
"docid": "neg:1840378_5",
"text": "The demand for video content is continuously increasing as video sharing on the Internet is becoming enormously popular recently. This demand, with its high bandwidth requirements, has a considerable impact on the load of the network infrastructure. As more users access videos from their mobile devices, the load on the current wireless infrastructure (which has limited capacity) will be even more significant. Based on observations from many local video sharing scenarios, in this paper, we study the tradeoffs of using Wi-Fi ad-hoc mode versus infrastructure mode for video streaming between adjacent devices. We thus show the potential of direct device-to-device communication as a way to reduce the load on the wireless infrastructure and to improve user experiences. Setting up experiments for WiFi devices connected in ad-hoc mode, we collect measurements for various video streaming scenarios and compare them to the case where the devices are connected through access points. The results show the improvements in latency, jitter and loss rate. More importantly, the results show that the performance in direct device-to-device streaming is much more stable in contrast to the access point case, where different factors affect the performance causing widely unpredictable qualities.",
"title": ""
},
{
"docid": "neg:1840378_6",
"text": "We address the problem of optimizing recommender systems for multiple relevance objectives that are not necessarily aligned. Specifically, given a recommender system that optimizes for one aspect of relevance, semantic matching (as defined by any notion of similarity between source and target of recommendation; usually trained on CTR), we want to enhance the system with additional relevance signals that will increase the utility of the recommender system, but that may simultaneously sacrifice the quality of the semantic match. The issue is that semantic matching is only one relevance aspect of the utility function that drives the recommender system, albeit a significant aspect. In talent recommendation systems, job posters want candidates who are a good match to the job posted, but also prefer those candidates to be open to new opportunities. Recommender systems that recommend discussion groups must ensure that the groups are relevant to the users' interests, but also need to favor active groups over inactive ones. We refer to these additional relevance signals (job-seeking intent and group activity) as extraneous features, and they account for aspects of the utility function that are not captured by the semantic match (i.e. post-CTR down-stream utilities that reflect engagement: time spent reading, sharing, commenting, etc). We want to include these extraneous features into the recommendations, but we want to do so while satisfying the following requirements: 1) we do not want to drastically sacrifice the quality of the semantic match, and 2) we want to quantify exactly how the semantic match would be affected as we control the different aspects of the utility function. In this paper, we present an approach that satisfies these requirements.\n We frame our approach as a general constrained optimization problem and suggest ways in which it can be solved efficiently by drawing from recent research on optimizing non-smooth rank metrics for information retrieval. Our approach features the following characteristics: 1) it is model and feature agnostic, 2) it does not require additional labeled training data to be collected, and 3) it can be easily incorporated into an existing model as an additional stage in the computation pipeline. We validate our approach in a revenue-generating recommender system that ranks billions of candidate recommendations on a daily basis and show that a significant improvement in the utility of the recommender system can be achieved with an acceptable and predictable degradation in the semantic match quality of the recommendations.",
"title": ""
},
{
"docid": "neg:1840378_7",
"text": "We present DeepPicar, a low-cost deep neural network based autonomous car platform. DeepPicar is a small scale replication of a real self-driving car called DAVE-2 by NVIDIA. DAVE-2 uses a deep convolutional neural network (CNN), which takes images from a front-facing camera as input and produces car steering angles as output. DeepPicar uses the same network architecture—9 layers, 27 million connections and 250K parameters—and can drive itself in real-time using a web camera and a Raspberry Pi 3 quad-core platform. Using DeepPicar, we analyze the Pi 3's computing capabilities to support end-to-end deep learning based real-time control of autonomous vehicles. We also systematically compare other contemporary embedded computing platforms using the DeepPicar's CNN-based real-time control workload. We find that all tested platforms, including the Pi 3, are capable of supporting the CNN-based real-time control, from 20 Hz up to 100 Hz, depending on hardware platform. However, we find that shared resource contention remains an important issue that must be considered in applying CNN models on shared memory based embedded computing platforms; we observe up to 11.6X execution time increase in the CNN based control loop due to shared resource contention. To protect the CNN workload, we also evaluate state-of-the-art cache partitioning and memory bandwidth throttling techniques on the Pi 3. We find that cache partitioning is ineffective, while memory bandwidth throttling is an effective solution.",
"title": ""
},
{
"docid": "neg:1840378_8",
"text": "Fact-related information contained in fictional narratives may induce substantial changes in readers’ real-world beliefs. Current models of persuasion through fiction assume that these effects occur because readers are psychologically transported into the fictional world of the narrative. Contrary to general dual-process models of persuasion, models of persuasion through fiction also imply that persuasive effects of fictional narratives are persistent and even increase over time (absolute sleeper effect). In an experiment designed to test this prediction, 81 participants read either a fictional story that contained true as well as false assertions about realworld topics or a control story. There were large short-term persuasive effects of false information, and these effects were even larger for a group with a two-week assessment delay. Belief certainty was weakened immediately after reading but returned to baseline level after two weeks, indicating that beliefs acquired by reading fictional narratives are integrated into realworld knowledge.",
"title": ""
},
{
"docid": "neg:1840378_9",
"text": "Familial hypercholesterolaemia (FH) leads to elevated plasma levels of LDL-cholesterol and increased risk of premature atherosclerosis. Dietary treatment is recommended to all patients with FH in combination with lipid-lowering drug therapy. Little is known about how children with FH and their parents respond to dietary advice. The aim of the present study was to characterise the dietary habits in children with FH. A total of 112 children and young adults with FH and a non-FH group of children (n 36) were included. The children with FH had previously received dietary counselling. The FH subjects were grouped as: 12-14 years (FH (12-14)) and 18-28 years (FH (18-28)). Dietary data were collected by SmartDiet, a short self-instructing questionnaire on diet and lifestyle where the total score forms the basis for an overall assessment of the diet. Clinical and biochemical data were retrieved from medical records. The SmartDiet scores were significantly improved in the FH (12-14) subjects compared with the non-FH subjects (SmartDiet score of 31 v. 28, respectively). More FH (12-14) subjects compared with non-FH children consumed low-fat milk (64 v. 18 %, respectively), low-fat cheese (29 v. 3%, respectively), used margarine with highly unsaturated fat (74 v. 14 %, respectively). In all, 68 % of the FH (12-14) subjects and 55 % of the non-FH children had fish for dinner twice or more per week. The FH (18-28) subjects showed the same pattern in dietary choices as the FH (12-14) children. In contrast to the choices of low-fat dietary items, 50 % of the FH (12-14) subjects consumed sweet spreads or sweet drinks twice or more per week compared with only 21 % in the non-FH group. In conclusion, ordinary out-patient dietary counselling of children with FH seems to have a long-lasting effect, as the diet of children and young adults with FH consisted of more products that are favourable with regard to the fatty acid composition of the diet.",
"title": ""
},
{
"docid": "neg:1840378_10",
"text": "In this paper, we consider the grayscale template-matching problem, invariant to rotation, scale, translation, brightness and contrast, without previous operations that discard grayscale information, like detection of edges, detection of interest points or segmentation/binarization of the images. The obvious “brute force” solution performs a series of conventional template matchings between the image to analyze and the template query shape rotated by every angle, translated to every position and scaled by every factor (within some specified range of scale factors). Clearly, this takes too long and thus is not practical. We propose a technique that substantially accelerates this searching, while obtaining the same result as the original brute force algorithm. In some experiments, our algorithm was 400 times faster than the brute force algorithm. Our algorithm consists of three cascaded filters. These filters successively exclude pixels that have no chance of matching the template from further processing.",
"title": ""
},
{
"docid": "neg:1840378_11",
"text": "Despite a strong nonlinear behavior and a complex design, the interior permanent-magnet (IPM) machine is proposed as a good candidate among the PM machines owing to its interesting peculiarities, i.e., higher torque in flux-weakening operation, higher fault tolerance, and ability to adopt low-cost PMs. A second trend in designing PM machines concerns the adoption of fractional-slot (FS) nonoverlapped coil windings, which reduce the end winding length and consequently the Joule losses and the cost. Therefore, the adoption of an IPM machine with an FS winding aims to combine both advantages: high torque and efficiency in a wide operating region. However, the combination of an anisotropic rotor and an FS winding stator causes some problems. The interaction between the magnetomotive force harmonics due to the stator current and the rotor anisotropy causes a very high torque ripple. This paper illustrates a procedure in designing an IPM motor with the FS winding exhibiting a low torque ripple. The design strategy is based on two consecutive steps: at first, the winding is optimized by taking a multilayer structure, and then, the rotor geometry is optimized by adopting a nonsymmetric structure. As an example, a 12-slot 10-pole IPM machine is considered, achieving a torque ripple lower than 1.5% at full load.",
"title": ""
},
{
"docid": "neg:1840378_12",
"text": "This chapter provides a self-contained first introduction to description logics (DLs). The main concepts and features are explained with examples before syntax and semantics of the DL SROIQ are defined in detail. Additional sections review light-weight DL languages, discuss the relationship to the Web Ontology Language OWL and give pointers to further reading.",
"title": ""
},
{
"docid": "neg:1840378_13",
"text": "We describe the CoNLL-2002 shared task: language-independent named entity recognition. We give background information on the data sets and the evaluation method, present a general overview of the systems that have taken part in the task and discuss their performance.",
"title": ""
},
{
"docid": "neg:1840378_14",
"text": "In medical research, continuous variables are often converted into categorical variables by grouping values into two or more categories. We consider in detail issues pertaining to creating just two groups, a common approach in clinical research. We argue that the simplicity achieved is gained at a cost; dichotomization may create rather than avoid problems, notably a considerable loss of power and residual confounding. In addition, the use of a data-derived 'optimal' cutpoint leads to serious bias. We illustrate the impact of dichotomization of continuous predictor variables using as a detailed case study a randomized trial in primary biliary cirrhosis. Dichotomization of continuous data is unnecessary for statistical analysis and in particular should not be applied to explanatory variables in regression models.",
"title": ""
},
{
"docid": "neg:1840378_15",
"text": "Time-series classification has attracted considerable research attention due to the various domains where time-series data are observed, ranging from medicine to econometrics. Traditionally, the focus of time-series classification has been on short time-series data composed of a few patterns exhibiting variabilities, while recently there have been attempts to focus on longer series composed of multiple local patrepeating with an arbitrary irregularity. The primary contribution of this paper relies on presenting a method which can detect local patterns in repetitive time-series via fitting local polynomial functions of a specified degree. We capture the repetitiveness degrees of time-series datasets via a new measure. Furthermore, our method approximates local polynomials in linear time and ensures an overall linear running time complexity. The coefficients of the polynomial functions are converted to symbolic words via equi-area discretizations of the coefficients' distributions. The symbolic polynomial words enable the detection of similar local patterns by assigning the same word to similar polynomials. Moreover, a histogram of the frequencies of the words is constructed from each time-series' bag of words. Each row of the histogram enables a new representation for the series and symbolizes the occurrence of local patterns and their frequencies. In an experimental comparison against state-of-the-art baselines on repetitive datasets, our method demonstrates significant improvements in terms of prediction accuracy.",
"title": ""
},
{
"docid": "neg:1840378_16",
"text": "This article provides an alternative perspective for measuring author impact by applying PageRank algorithm to a coauthorship network. A weighted PageRank algorithm considering citation and coauthorship network topology is proposed. We test this algorithm under different damping factors by evaluating author impact in the informetrics research community. In addition, we also compare this weighted PageRank with the h-index, citation, and program committee (PC) membership of the International Society for Scientometrics and Informetrics (ISSI) conferences. Findings show that this weighted PageRank algorithm provides reliable results in measuring author impact.",
"title": ""
},
{
"docid": "neg:1840378_17",
"text": "Virtualization is increasingly being used to address server management and administration issues like flexible resource allocation, service isolation and workload migration. In a virtualized environment, the virtual machine monitor (VMM) is the primary resource manager and is an attractive target for implementing system features like scheduling, caching, and monitoring. However, the lackof runtime information within the VMM about guest operating systems, sometimes called the semantic gap, is a significant obstacle to efficiently implementing some kinds of services.In this paper we explore techniques that can be used by a VMM to passively infer useful information about a guest operating system's unified buffer cache and virtual memory system. We have created a prototype implementation of these techniques inside the Xen VMM called Geiger and show that it can accurately infer when pages are inserted into and evicted from a system's buffer cache. We explore several nuances involved in passively implementing eviction detection that have not previously been addressed, such as the importance of tracking disk block liveness, the effect of file system journaling, and the importance of accounting for the unified caches found in modern operating systems.Using case studies we show that the information provided by Geiger enables a VMM to implement useful VMM-level services. We implement a novel working set size estimator which allows the VMM to make more informed memory allocation decisions. We also show that a VMM can be used to drastically improve the hit rate in remote storage caches by using eviction-based cache placement without modifying the application or operating system storage interface. Both case studies hint at a future where inference techniques enable a broad new class of VMM-level functionality.",
"title": ""
},
{
"docid": "neg:1840378_18",
"text": "Considerable data and analysis support the detection of one or more supernovae (SNe) at a distance of about 50 pc, ∼2.6 million years ago. This is possibly related to the extinction event around that time and is a member of a series of explosions that formed the Local Bubble in the interstellar medium. We build on previous work, and propagate the muon flux from SN-initiated cosmic rays from the surface to the depths of the ocean. We find that the radiation dose from the muons will exceed the total present surface dose from all sources at depths up to 1 km and will persist for at least the lifetime of marine megafauna. It is reasonable to hypothesize that this increase in radiation load may have contributed to a newly documented marine megafaunal extinction at that time.",
"title": ""
},
{
"docid": "neg:1840378_19",
"text": "BACKGROUND\nThere exists some ambiguity regarding the exact anatomical limits of the orbicularis retaining ligament, particularly its medial boundary in both the superior and inferior orbits. Precise understanding of this anatomy is necessary during periorbital rejuvenation.\n\n\nMETHODS\nSixteen fresh hemifacial cadaver dissections were performed in the anatomy laboratory to evaluate the anatomy of the orbicularis retaining ligament. Dissection was assisted by magnification with loupes and the operating microscope.\n\n\nRESULTS\nA ligamentous system was found that arises from the inferior and superior orbital rim that is truly periorbital. This ligament spans the entire circumference of the orbit from the medial to the lateral canthus. There exists a fusion line between the orbital septum and the orbicularis retaining ligament in the superior orbit, indistinguishable from the arcus marginalis of the inferior orbital rim. Laterally, the orbicularis retaining ligament contributes to the lateral canthal ligament, consistent with previous studies. No contribution to the medial canthus was identified in this study.\n\n\nCONCLUSIONS\nThe orbicularis retaining ligament is a true, circumferential \"periorbital\" structure. This ligament may serve two purposes: (1) to act as a fixation point for the orbicularis muscle of the upper and lower eyelids and (2) to protect the ocular globe. With techniques of periorbital injection with fillers and botulinum toxin becoming ever more popular, understanding the orbicularis retaining ligament's function as a partitioning membrane is mandatory for avoiding ocular complications. As a support structure, examples are shown of how manipulation of this ligament may benefit canthopexy, septal reset, and brow-lift procedures as described by Hoxworth.",
"title": ""
}
] |
1840379 | Forecasting time series with complex seasonal patterns using exponential smoothing | [
{
"docid": "pos:1840379_0",
"text": "This paper considers univariate online electricity demand forecasting for lead times from a half-hour-ahead to a day-ahead. A time series of demand recorded at half-hourly intervals contains more than one seasonal pattern. A within-day seasonal cycle is apparent from the similarity of the demand profile from one day to the next, and a within-week seasonal cycle is evident when one compares the demand on the corresponding day of adjacent weeks. There is strong appeal in using a forecasting method that is able to capture both seasonalities. The multiplicative seasonal ARIMA model has been adapted for this purpose. In this paper, we adapt the Holt-Winters exponential smoothing formulation so that it can accommodate two seasonalities. We correct for residual autocorrelation using a simple autoregressive model. The forecasts produced by the new double seasonal Holt-Winters method outperform those from traditional Holt-Winters and from a well-specified multiplicative double seasonal ARIMA model.",
"title": ""
}
] | [
{
"docid": "neg:1840379_0",
"text": "OSHA Region VIII office and the HBA of Metropolitan Denver who made this research possible and the Centers for Disease Control and Prevention, the National Institute for Occupational Safety and Health (NIOSH) for their support and funding via the awards 1 R03 OH04199-0: Occupational Low Back Pain in Residential Carpentry: Ergonomic Elements of Posture and Strain within the HomeSafe Pilot Program sponsored by OSHA and the HBA. Correspondence and requests for offprints should be sent to David P. Gilkey, Department of Environmental and Radiological Health Sciences, Colorado State University, Ft. Collins, CO 80523-1681, USA. E-mail: <dgilkey@colostate.edu>. Low Back Pain Among Residential Carpenters: Ergonomic Evaluation Using OWAS and 2D Compression Estimation",
"title": ""
},
{
"docid": "neg:1840379_1",
"text": "INTRODUCTION\nLiquid injectable silicone (LIS) has been used for soft tissue augmentation in excess of 50 years. Until recently, all literature on penile augmentation with LIS consisted of case reports or small cases series, most involving surgical intervention to correct the complications of LIS. New formulations of LIS and new methodologies for injection have renewed interest in this procedure.\n\n\nAIM\nWe reported a case of penile augmentation with LIS and reviewed the pertinent literature.\n\n\nMETHODS\nComprehensive literature review was performed using PubMed. We performed additional searches based on references from relevant review articles.\n\n\nRESULTS\nInjection of medical grade silicone for soft tissue augmentation has a role in carefully controlled study settings. Historically, the use of LIS for penile augmentation has had poor outcomes and required surgical intervention to correct complications resulting from LIS.\n\n\nCONCLUSIONS\nWe currently discourage the use of LIS for penile augmentation until carefully designed and evaluated trials have been completed.",
"title": ""
},
{
"docid": "neg:1840379_2",
"text": "Cloud service brokerage has been identified as a key concern for future cloud technology development and research. We compare service brokerage solutions. A range of specific concerns like architecture, programming and quality will be looked at. We apply a 2-pronged classification and comparison framework. We will identify challenges and wider research objectives based on an identification of cloud broker architecture concerns and technical requirements for service brokerage solutions. We will discuss complex cloud architecture concerns such as commoditisation and federation of integrated, vertical cloud stacks.",
"title": ""
},
{
"docid": "neg:1840379_3",
"text": "An integrated high performance, highly reliable, scalable, and secure communications network is critical for the successful deployment and operation of next-generation electricity generation, transmission, and distribution systems — known as “smart grids.” Much of the work done to date to define a smart grid communications architecture has focused on high-level service requirements with little attention to implementation challenges. This paper investigates in detail a smart grid communication network architecture that supports today's grid applications (such as supervisory control and data acquisition [SCADA], mobile workforce communication, and other voice and data communication) and new applications necessitated by the introduction of smart metering and home area networking, support of demand response applications, and incorporation of renewable energy sources in the grid. We present design principles for satisfying the diverse quality of service (QoS) and reliability requirements of smart grids.",
"title": ""
},
{
"docid": "neg:1840379_4",
"text": "Large datasets are increasingly common and are often difficult to interpret. Principal component analysis (PCA) is a technique for reducing the dimensionality of such datasets, increasing interpretability but at the same time minimizing information loss. It does so by creating new uncorrelated variables that successively maximize variance. Finding such new variables, the principal components, reduces to solving an eigenvalue/eigenvector problem, and the new variables are defined by the dataset at hand, not a priori, hence making PCA an adaptive data analysis technique. It is adaptive in another sense too, since variants of the technique have been developed that are tailored to various different data types and structures. This article will begin by introducing the basic ideas of PCA, discussing what it can and cannot do. It will then describe some variants of PCA and their application.",
"title": ""
},
{
"docid": "neg:1840379_5",
"text": "People increasingly use the Internet for obtaining information regarding diseases, diagnoses and available treatments. Currently, many online health portals already provide non-personalized health information in the form of articles. However, it can be challenging to find information relevant to one's condition, interpret this in context, and understand the medical terms and relationships. Recommender Systems (RS) already help these systems perform precise information filtering. In this short paper, we look one step ahead and show the progress made towards RS helping users find personalized, complex medical interventions or support them with preventive healthcare measures. We identify key challenges that need to be addressed for RS to offer the kind of decision support needed in high-risk domains like healthcare.",
"title": ""
},
{
"docid": "neg:1840379_6",
"text": "An increasing number of disasters (natural and man-made) with a large number of victims and significant social and economical losses are observed in the past few years. Although particular events can always be attributed to fate, it is improving the disaster management that have to contribute to decreasing damages and ensuring proper care for citizens in affected areas. Some of the lessons learned in the last several years give clear indications that the availability, management and presentation of geo-information play a critical role in disaster management. However, all the management techniques that are being developed are understood by, and confined to the intellectual community and hence lack mass participation. Awareness of the disasters is the only effective way in which one can bring about mass participation. Hence, any disaster management is successful only when the general public has some awareness about the disaster. In the design of such awareness program, intelligent mapping through analysis and data sharing also plays a very vital role. The analytical capabilities of GIS support all aspects of disaster management: planning, response and recovery, and records management. The proposed GIS based awareness program in this paper would improve the currently practiced disaster management programs and if implemented, would result in a proper dosage of awareness and caution to the general public, which in turn would help to cope with the dangerous activities of disasters in future.",
"title": ""
},
{
"docid": "neg:1840379_7",
"text": "In this paper, we consider the problem of single image super-resolution and propose a novel algorithm that outperforms state-of-the-art methods without the need of learning patches pairs from external data sets. We achieve this by modeling images and, more precisely, lines of images as piecewise smooth functions and propose a resolution enhancement method for this type of functions. The method makes use of the theory of sampling signals with finite rate of innovation (FRI) and combines it with traditional linear reconstruction methods. We combine the two reconstructions by leveraging from the multi-resolution analysis in wavelet theory and show how an FRI reconstruction and a linear reconstruction can be fused using filter banks. We then apply this method along vertical, horizontal, and diagonal directions in an image to obtain a single-image super-resolution algorithm. We also propose a further improvement of the method based on learning from the errors of our super-resolution result at lower resolution levels. Simulation results show that our method outperforms state-of-the-art algorithms under different blurring kernels.",
"title": ""
},
{
"docid": "neg:1840379_8",
"text": "In conventional supervised training, a model is trained to fit all the training examples. However, having a monolithic model may not always be the best strategy, as examples could vary widely. In this work, we explore a different learning protocol that treats each example as a unique pseudo-task, by reducing the original learning problem to a few-shot meta-learning scenario with the help of a domain-dependent relevance function.1 When evaluated on the WikiSQL dataset, our approach leads to faster convergence and achieves 1.1%–5.4% absolute accuracy gains over the non-meta-learning counterparts.",
"title": ""
},
{
"docid": "neg:1840379_9",
"text": "We present a logic for stating properties such as, “after a request for service there is at least a 98% probability that the service will be carried out within 2 seconds”. The logic extends the temporal logic CTL by Emerson, Clarke and Sistla with time and probabilities. Formulas are interpreted over discrete time Markov chains. We give algorithms for checking that a given Markov chain satisfies a formula in the logic. The algorithms require a polynomial number of arithmetic operations, in size of both the formula and the Markov chain. A simple example is included to illustrate the algorithms.",
"title": ""
},
{
"docid": "neg:1840379_10",
"text": "To solve the problem of the false matching and low robustness in detecting copy-move forgeries, a new method was proposed in this study. It involves the following steps: first, establish a Gaussian scale space; second, extract the orientated FAST key points and the ORB features in each scale space; thirdly, revert the coordinates of the orientated FAST key points to the original image and match the ORB features between every two different key points using the hamming distance; finally, remove the false matched key points using the RANSAC algorithm and then detect the resulting copy-move regions. The experimental results indicate that the new algorithm is effective for geometric transformation, such as scaling and rotation, and exhibits high robustness even when an image is distorted by Gaussian blur, Gaussian white noise and JPEG recompression; the new algorithm even has great detection on the type of hiding object forgery.",
"title": ""
},
{
"docid": "neg:1840379_11",
"text": "In traffic environment, conventional FMCW radar with triangular transmit waveform may bring out many false targets in multi-target situations and result in a high false alarm rate. An improved FMCW waveform and multi-target detection algorithm for vehicular applications is presented. The designed waveform in each small cycle is composed of two-segment: LFM section and constant frequency section. They have the same duration, yet in two adjacent small cycles the two LFM slopes are opposite sign and different size. Then the two adjacent LFM bandwidths are unequal. Within a determinate frequency range, the constant frequencies are modulated by a unique PN code sequence for different automotive radar in a big period. Corresponding to the improved waveform, which combines the advantages of both FSK and FMCW formats, a judgment algorithm is used in the continuous small cycle to further eliminate the false targets. The combination of unambiguous ranges and relative velocities can confirm and cancel most false targets in two adjacent small cycles.",
"title": ""
},
{
"docid": "neg:1840379_12",
"text": "BACKGROUND\nAlthough evidence-based and effective treatments are available for people with depression, a substantial number does not seek or receive help. Therefore, it is important to gain a better understanding of the reasons why people do or do not seek help. This study examined what predisposing and need factors are associated with help-seeking among people with major depression.\n\n\nMETHODS\nA cross-sectional study was conducted in 102 subjects with major depression. Respondents were recruited from the general population in collaboration with three Municipal Health Services (GGD) across different regions in the Netherlands. Inclusion criteria were: being aged 18 years or older, a high score on a screening instrument for depression (K10 > 20), and a diagnosis of major depression established through the Composite International Diagnostic Interview (CIDI 2.1).\n\n\nRESULTS\nOf the total sample, 65 % (n = 66) had received help in the past six months. Results showed that respondents with a longer duration of symptoms and those with lower personal stigma were more likely to seek help. Other determinants were not significantly related to help-seeking.\n\n\nCONCLUSIONS\nLonger duration of symptoms was found to be an important determinant of help-seeking among people with depression. It is concerning that stigma was related to less help-seeking. Knowledge and understanding of depression should be promoted in society, hopefully leading to reduced stigma and increased help-seeking.",
"title": ""
},
{
"docid": "neg:1840379_13",
"text": "While pulmonary embolism (PE) causes approximately 100 000-180 000 deaths per year in the United States, mortality is restricted to patients who have massive or submassive PEs. This state of the art review familiarizes the reader with these categories of PE. The review discusses the following topics: pathophysiology, clinical presentation, rationale for stratification, imaging, massive PE management and outcomes, submassive PE management and outcomes, and future directions. It summarizes the most up-to-date literature on imaging, systemic thrombolysis, surgical embolectomy, and catheter-directed therapy for submassive and massive PE and gives representative examples that reflect modern practice. © RSNA, 2017.",
"title": ""
},
{
"docid": "neg:1840379_14",
"text": "RPL, the routing protocol proposed by IETF for IPv6/6LoWPAN Low Power and Lossy Networks has significant complexity. Another protocol called LOADng, a lightweight variant of AODV, emerges as an alternative solution. In this paper, we compare the performance of the two protocols in a Home Automation scenario with heterogenous traffic patterns including a mix of multipoint-to-point and point-to-multipoint routes in realistic dense non-uniform network topologies. We use Contiki OS and Cooja simulator to evaluate the behavior of the ContikiRPL implementation and a basic non-optimized implementation of LOADng. Unlike previous studies, our results show that RPL provides shorter delays, less control overhead, and requires less memory than LOADng. Nevertheless, enhancing LOADng with more efficient flooding and a better route storage algorithm may improve its performance.",
"title": ""
},
{
"docid": "neg:1840379_15",
"text": "Recently, IT trends such as big data, cloud computing, internet of things (IoT), 3D visualization, network, and so on demand terabyte/s bandwidth computer performance in a graphics card. In order to meet these performance, terabyte/s bandwidth graphics module using 2.5D-IC with high bandwidth memory (HBM) technology has been emerged. Due to the difference in scale of interconnect pitch between GPU or HBM and package substrate, the HBM interposer is certainly required for terabyte/s bandwidth graphics module. In this paper, the electrical performance of the HBM interposer channel in consideration of the manufacturing capabilities is analyzed by simulation both the frequency- and time-domain. Furthermore, although the silicon substrate is most widely employed for the HBM interposer fabrication, the organic and glass substrate are also proposed to replace the high cost and high loss silicon substrate. Therefore, comparison and analysis of the electrical performance of the HBM interposer channel using silicon, organic, and glass substrate are conducted.",
"title": ""
},
{
"docid": "neg:1840379_16",
"text": "Spike timing-dependent plasticity (STDP) as a Hebbian synaptic learning rule has been demonstrated in various neural circuits over a wide spectrum of species, from insects to humans. The dependence of synaptic modification on the order of pre- and postsynaptic spiking within a critical window of tens of milliseconds has profound functional implications. Over the past decade, significant progress has been made in understanding the cellular mechanisms of STDP at both excitatory and inhibitory synapses and of the associated changes in neuronal excitability and synaptic integration. Beyond the basic asymmetric window, recent studies have also revealed several layers of complexity in STDP, including its dependence on dendritic location, the nonlinear integration of synaptic modification induced by complex spike trains, and the modulation of STDP by inhibitory and neuromodulatory inputs. Finally, the functional consequences of STDP have been examined directly in an increasing number of neural circuits in vivo.",
"title": ""
},
{
"docid": "neg:1840379_17",
"text": "Cracking-resistant password vaults have been recently proposed with the goal of thwarting offline attacks. This requires the generation of synthetic password vaults that are statistically indistinguishable from real ones. In this work, we establish a conceptual link between this problem and steganography, where the stego objects must be undetectable among cover objects. We compare the two frameworks and highlight parallels and differences. Moreover, we transfer results obtained in the steganography literature into the context of decoy generation. Our results include the infeasibility of perfectly secure decoy vaults and the conjecture that secure decoy vaults are at least as hard to construct as secure steganography.",
"title": ""
},
{
"docid": "neg:1840379_18",
"text": "While it has a long history, the last 30 years have brought considerable advances to the discipline of forensic anthropology worldwide. Every so often it is essential that these advances are noticed and trends assessed. It is also important to identify those research areas that are needed for the forthcoming years. The purpose of this special issue is to examine some of the examples of research that might identify the trends in the 21st century. Of the 14 papers 5 dealt with facial features and identification such as facial profile determination and skull-photo superimposition. Age (fetus and cranial thickness), sex (supranasal region, arm and leg bones) and stature (from the arm bones) estimation were represented by five articles. Others discussed the estimation of time since death, skull color and diabetes, and a case study dealing with a mummy and skeletal analysis in comparison with DNA identification. These papers show that age, sex, and stature are still important issues of the discipline. Research on the human face is moving from hit and miss case studies to a more scientifically sound direction. A lack of studies on trauma and taphonomy is very clear. Anthropologists with other scientists can develop research areas to make the identification process more reliable. Research should include the assessment of animal attacks on human remains, factors affecting decomposition rates, and aging of the human face. Lastly anthropologists should be involved in the education of forensic pathologists about osteological techniques and investigators regarding archaeology of crime scenes.",
"title": ""
}
] |
1840380 | Crowdsourcing and language studies: the new generation of linguistic data | [
{
"docid": "pos:1840380_0",
"text": "The neural mechanisms underlying the processing of conventional and novel conceptual metaphorical sentences were examined with event-related potentials (ERPs). Conventional metaphors were created based on the Contemporary Theory of Metaphor and were operationally defined as familiar and readily interpretable. Novel metaphors were unfamiliar and harder to interpret. Using a sensicality judgment task, we compared ERPs elicited by the same target word when it was used to end anomalous, novel metaphorical, conventional metaphorical and literal sentences. Amplitudes of the N400 ERP component (320-440 ms) were more negative for anomalous sentences, novel metaphors, and conventional metaphors compared with literal sentences. Within a later window (440-560 ms), ERPs associated with conventional metaphors converged to the same level as literal sentences while the novel metaphors stayed anomalous throughout. The reported results were compatible with models assuming an initial stage for metaphor mappings from one concept to another and that these mappings are cognitively taxing.",
"title": ""
},
{
"docid": "pos:1840380_1",
"text": "The authors present several versions of a general model, titled the E-Z Reader model, of eye movement control in reading. The major goal of the modeling is to relate cognitive processing (specifically aspects of lexical access) to eye movements in reading. The earliest and simplest versions of the model (E-Z Readers 1 and 2) merely attempt to explain the total time spent on a word before moving forward (the gaze duration) and the probability of fixating a word; later versions (E-Z Readers 3-5) also attempt to explain the durations of individual fixations on individual words and the number of fixations on individual words. The final version (E-Z Reader 5) appears to be psychologically plausible and gives a good account of many phenomena in reading. It is also a good tool for analyzing eye movement data in reading. Limitations of the model and directions for future research are also discussed.",
"title": ""
},
{
"docid": "pos:1840380_2",
"text": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.",
"title": ""
}
] | [
{
"docid": "neg:1840380_0",
"text": "Chatbots use a database of responses often culled from a corpus of text generated for a different purpose, for example film scripts or interviews. One consequence of this approach is a mismatch between the data and the inputs generated by participants. We describe an approach that while starting from an existing corpus (of interviews) makes use of crowdsourced data to augment the response database, focusing on responses that people judge as inappropriate. The long term goal is to create a data set of more appropriate chat responses; the short term consequence appears to be the identification and replacement of particularly inappropriate responses. We found the version with the expanded database was rated significantly better in terms of the response level appropriateness and the overall ability to engage users. We also describe strategies we developed that target certain breakdowns discovered during data collection. Both the source code of the chatbot, TickTock, and the data collected are publicly available.",
"title": ""
},
{
"docid": "neg:1840380_1",
"text": "Responses to domestic violence have focused, to date, primarily on intervention after the problem has already been identified and harm has occurred. There are, however, new domestic violence prevention strategies emerging, and prevention approaches from the public health field can serve as models for further development of these strategies. This article describes two such models. The first involves public health campaigns that identify and address the underlying causes of a problem. Although identifying the underlying causes of domestic violence is difficult--experts do not agree on causation, and several different theories exist--these theories share some common beliefs that can serve as a foundation for prevention strategies. The second public health model can be used to identify opportunities for domestic violence prevention along a continuum of possible harm: (1) primary prevention to reduce the incidence of the problem before it occurs; (2) secondary prevention to decrease the prevalence after early signs of the problem; and (3) tertiary prevention to intervene once the problem is already clearly evident and causing harm. Examples of primary prevention include school-based programs that teach students about domestic violence and alternative conflict-resolution skills, and public education campaigns to increase awareness of the harms of domestic violence and of services available to victims. Secondary prevention programs could include home visiting for high-risk families and community-based programs on dating violence for adolescents referred through child protective services (CPS). Tertiary prevention includes the many targeted intervention programs already in place (and described in other articles in this journal issue). Early evaluations of existing prevention programs show promise, but results are still preliminary and programs remain small, locally based, and scattered throughout the United States and Canada. What is needed is a broadly based, comprehensive prevention strategy that is supported by sound research and evaluation, receives adequate public backing, and is based on a policy of zero tolerance for domestic violence.",
"title": ""
},
{
"docid": "neg:1840380_2",
"text": "Spotting anomalies in large multi-dimensional databases is a crucial task with many applications in finance, health care, security, etc. We introduce COMPREX, a new approach for identifying anomalies using pattern-based compression. Informally, our method finds a collection of dictionaries that describe the norm of a database succinctly, and subsequently flags those points dissimilar to the norm---with high compression cost---as anomalies.\n Our approach exhibits four key features: 1) it is parameter-free; it builds dictionaries directly from data, and requires no user-specified parameters such as distance functions or density and similarity thresholds, 2) it is general; we show it works for a broad range of complex databases, including graph, image and relational databases that may contain both categorical and numerical features, 3) it is scalable; its running time grows linearly with respect to both database size as well as number of dimensions, and 4) it is effective; experiments on a broad range of datasets show large improvements in both compression, as well as precision in anomaly detection, outperforming its state-of-the-art competitors.",
"title": ""
},
{
"docid": "neg:1840380_3",
"text": "Data-flow testing (DFT) is a family of testing strategies designed to verify the interactions between each program variable’s definition and its uses. Such a test objective of interest is referred to as a def-use pair. DFT selects test data with respect to various test adequacy criteria (i.e., data-flow coverage criteria) to exercise each pair. The original conception of DFT was introduced by Herman in 1976. Since then, a number of studies have been conducted, both theoretically and empirically, to analyze DFT’s complexity and effectiveness. In the past four decades, DFT has been continuously concerned, and various approaches from different aspects are proposed to pursue automatic and efficient data-flow testing. This survey presents a detailed overview of data-flow testing, including challenges and approaches in enforcing and automating it: (1) it introduces the data-flow analysis techniques that are used to identify def-use pairs; (2) it classifies and discusses techniques for data-flow-based test data generation, such as search-based testing, random testing, collateral-coverage-based testing, symbolic-execution-based testing, and model-checking-based testing; (3) it discusses techniques for tracking data-flow coverage; (4) it presents several DFT applications, including software fault localization, web security testing, and specification consistency checking; and (5) it summarizes recent advances and discusses future research directions toward more practical data-flow testing.",
"title": ""
},
{
"docid": "neg:1840380_4",
"text": "We present a joint theoretical and experimental investigation of the absorption spectra of silver clusters Ag(n) (4<or=n<or=22). The experimental spectra of clusters isolated in an Ar matrix are compared with the calculated ones in the framework of the time-dependent density functional theory. The analysis of the molecular transitions indicates that the s-electrons are responsible for the optical response of small clusters (n<or=8) while the d-electrons play a crucial role in the optical excitations for larger n values.",
"title": ""
},
{
"docid": "neg:1840380_5",
"text": "We propose a general method called truncated gradient to induce sparsity in the weights of online learning algorithms with convex loss functions. This method has several essential properties: 1. The degree of sparsity is continuous a parameter controls the rate of sparsi cation from no sparsi cation to total sparsi cation. 2. The approach is theoretically motivated, and an instance of it can be regarded as an online counterpart of the popular L1-regularization method in the batch setting. We prove that small rates of sparsi cation result in only small additional regret with respect to typical online learning guarantees. 3. The approach works well empirically. We apply the approach to several datasets and nd that for datasets with large numbers of features, substantial sparsity is discoverable.",
"title": ""
},
{
"docid": "neg:1840380_6",
"text": "With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learningbased 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose occupancy networks, a new representation for learning-based 3D reconstruction methods. Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.",
"title": ""
},
{
"docid": "neg:1840380_7",
"text": "Lean and simulation analysis are driven by the same objective, how to better design and improve processes making the companies more competitive. The adoption of lean has been widely spread in companies from public to private sectors and simulation is nowadays becoming more and more popular. Several authors have pointed out the benefits of combining simulation and lean, however, they are still rarely used together in practice. Optimization as an additional technique to this combination is even a more powerful approach especially when designing and improving complex processes with multiple conflicting objectives. This paper presents the mutual benefits that are gained when combining lean, simulation and optimization and how they overcome each other's limitations. A framework including the three concepts, some of the barriers for its implementation and a real-world industrial example are also described.",
"title": ""
},
{
"docid": "neg:1840380_8",
"text": "Feature Oriented Programming (FOP) is an emerging paradigmfor application synthesis, analysis, and optimization. Atarget application is specified declaratively as a set of features,like many consumer products (e.g., personal computers,automobiles). FOP technology translates suchdeclarative specifications into efficient programs.",
"title": ""
},
{
"docid": "neg:1840380_9",
"text": "As a promising area in artificial intelligence, a new learning paradigm, called Small Sample Learning (SSL), has been attracting prominent research attention in the recent years. In this paper, we aim to present a survey to comprehensively introduce the current techniques proposed on this topic. Specifically, current SSL techniques can be mainly divided into two categories. The first category of SSL approaches can be called “concept learning”, which emphasizes learning new concepts from only few related observations. The purpose is mainly to simulate human learning behaviors like recognition, generation, imagination, synthesis and analysis. The second category is called “experience learning”, which usually co-exists with the large sample learning manner of conventional machine learning. This category mainly focuses on learning with insufficient samples, and can also be called small data learning in some literatures. More extensive surveys on both categories of SSL techniques are introduced and some neuroscience evidences are provided to clarify the rationality of the entire SSL regime, and the relationship with human learning process. Some discussions on the main challenges and possible future research directions along this line are also presented.",
"title": ""
},
{
"docid": "neg:1840380_10",
"text": "The grand challenge of neuromorphic computation is to develop a flexible brain-inspired architecture capable of a wide array of real-time applications, while striving towards the ultra-low power consumption and compact size of biological neural systems. Toward this end, we fabricated a building block of a modular neuromorphic architecture, a neurosynaptic core. Our implementation consists of 256 integrate-and-fire neurons and a 1,024×256 SRAM crossbar memory for synapses that fits in 4.2mm2 using a 45nm SOI process and consumes just 45pJ per spike. The core is fully configurable in terms of neuron parameters, axon types, and synapse states and its fully digital implementation achieves one-to-one correspondence with software simulation models. One-to-one correspondence allows us to introduce an abstract neural programming model for our chip, a contract guaranteeing that any application developed in software functions identically in hardware. This contract allows us to rapidly test and map applications from control, machine vision, and classification. To demonstrate, we present four test cases (i) a robot driving in a virtual environment, (ii) the classic game of pong, (iii) visual digit recognition and (iv) an autoassociative memory.",
"title": ""
},
{
"docid": "neg:1840380_11",
"text": "This content analytic study investigated the approaches of two mainstream newspapers—The New York Times and the Chicago Tribune—to cover the gay marriage issue. The study used the Massachusetts legitimization of gay marriage as a dividing point to look at what kinds of specific political or social topics related to gay marriage were highlighted in the news media. The study examined how news sources were framed in the coverage of gay marriage, based upon the newspapers’ perspectives and ideologies. The results indicated that The New York Times was inclined to emphasize the topic of human equality related to the legitimization of gay marriage. After the legitimization, The New York Times became an activist for gay marriage. Alternatively, the Chicago Tribune highlighted the importance of human morality associated with the gay marriage debate. The perspective of the Chicago Tribune was not dramatically influenced by the legitimization. It reported on gay marriage in terms of defending American traditions and family values both before and after the gay marriage legitimization. Published by Elsevier Inc on behalf of Western Social Science Association. Gay marriage has been a controversial issue in the United States, especially since the Massachusetts Supreme Judicial Court officially authorized it. Although the practice has been widely discussed for several years, the acceptance of gay marriage does not seem to be concordant with mainstream American values. This is in part because gay marriage challenges the traditional value of the family institution. In the United States, people’s perspectives of and attitudes toward gay marriage have been mostly polarized. Many people optimistically ∗ Corresponding author. E-mail addresses: ppan@astate.edu, polinpanpp@gmail.com (P.-L. Pan). 0362-3319/$ – see front matter. Published by Elsevier Inc on behalf of Western Social Science Association. doi:10.1016/j.soscij.2010.02.002 P.-L. Pan et al. / The Social Science Journal 47 (2010) 630–645 631 support gay legal rights and attempt to legalize it in as many states as possible, while others believe legalizing homosexuality may endanger American society and moral values. A number of forces and factors may expand this divergence between the two polarized perspectives, including family, religion and social influences. Mass media have a significant influence on socialization that cultivates individual’s belief about the world as well as affects individual’s values on social issues (Comstock & Paik, 1991). Moreover, news media outlets become a strong factor in influencing people’s perceptions of and attitudes toward gay men and lesbians because the news is one of the most powerful media to influence people’s attitudes toward gay marriage (Anderson, Fakhfakh, & Kondylis, 1999). Some mainstream newspapers are considered as media elites (Lichter, Rothman, & Lichter, 1986). Furthermore, numerous studies have demonstrated that mainstream newspapers would produce more powerful influences on people’s perceptions of public policies and political issues than television news (e.g., Brians & Wattenberg, 1996; Druckman, 2005; Eveland, Seo, & Marton, 2002) Gay marriage legitimization, a specific, divisive issue in the political and social dimensions, is concerned with several political and social issues that have raised fundamental questions about Constitutional amendments, equal rights, and American family values. 
The role of news media becomes relatively important while reporting these public debates over gay marriage, because not only do the news media affect people’s attitudes toward gays and lesbians by positively or negatively reporting the gay and lesbian issue, but also shape people’s perspectives of the same-sex marriage policy by framing the recognition of gay marriage in the news coverage. The purpose of this study is designed to examine how gay marriage news is described in the news coverage of The New York Times and the Chicago Tribune based upon their divisive ideological framings. 1. Literature review 1.1. Homosexual news coverage over time Until the 1940s, news media basically ignored the homosexual issue in the United States (Alwood, 1996; Bennett, 1998). According to Bennett (1998), of the 356 news stories about gays and lesbians that appeared in Time and Newsweek from 1947 to 1997, the Kinsey report on male sexuality published in 1948 was the first to draw reporters to the subject of homosexuality. From the 1940s to 1950s, the homosexual issue was reported as a social problem. Approximately 60% of the articles described homosexuals as a direct threat to the strength of the U.S. military, the security of the U.S. government, and the safety of ordinary Americans during this period. By the 1960s, the gay and lesbian issue began to be discussed openly in the news media. However, these portrayals were covered in the context of crime stories and brief items that ridiculed effeminate men or masculine women (Miller, 1991; Streitmatter, 1993). In 1963, a cover story, “Let’s Push Homophile Marriage,” was the first to treat gay marriage as a matter of winning legal recognition (Stewart-Winter, 2006). However, this cover story did not cause people to pay positive attention to gay marriage, but raised national debates between punishment and pity of homosexuals. Specifically speaking, although numerous arti632 P.-L. Pan et al. / The Social Science Journal 47 (2010) 630–645 cles reported before the 1960s provided growing visibility for homosexuals, they were still highly critical of them (Bennett, 1998). In September 1967, the first hard-hitting gay newspaper—the Los Angeles Advocate—began publication. Different from other earlier gay and lesbian publications, its editorial mix consisted entirely of non-fiction materials, including news stories, editorials, and columns (Cruikshank, 1992; Streitmatter, 1993). The Advocate was the first gay publication to operate as an independent business financed entirely by advertising and circulation, rather than by subsidies from a membership organization (Streitmatter, 1995a, 1995b). After the Stonewall Rebellion in June 1969 in New York City ignited the modern phase of the gay and lesbian liberation movement, the number and circulation of the gay and lesbian press exploded (Streitmatter, 1998). Therefore, gay rights were discussed in the news media during the early 1970s. Homosexuals began to organize a series of political actions associated with gay rights, which was widely covered by the news media, while a backlash also appeared against the gay-rights movements, particularly among fundamentalist Christians (Alwood, 1996; Bennett, 1998). Later in the 1970s, the genre entered a less political phrase by exploring the dimensions of the developing culture of gay and lesbian. 
The news media plumbed the breadth and depth of topics ranging from the gay and lesbian sensibility in art and literature to sex, spirituality, personal appearance, dyke separatism, lesbian mothers, drag queen, leather men, and gay bathhouses (Streitmatter, 1995b). In the 1980s, the gay and lesbian issue confronted a most formidable enemy when AIDS/HIV, one of the most devastating diseases in the history of medicine, began killing gay men at an alarming rate. Accordingly, AIDS/HIV became the biggest gay story reported by the news media. Numerous news media outlets linked the AIDS/HIV epidemic with homosexuals, which implied the notion of the promiscuous gay and lesbian lifestyle. The gays and lesbians, therefore, were described as a dangerous minority in the news media during the 1980s (Altman, 1986; Cassidy, 2000). In the 1990s, issues about the growing visibility of gays and lesbians and their campaign for equal rights were frequently covered in the news media, primarily because of AIDS and the debate over whether the ban on gays in the military should be lifted. The increasing visibility of gay people resulted in the emergence of lifestyle magazines (Bennett, 1998; Streitmatter, 1998). The Out, a lifestyle magazine based in New York City but circulated nationally, led the new phase, since its upscale design and fashion helped attract mainstream advertisers. This magazine, which devalued news in favor of stories on entertainment and fashions, became the first gay and lesbian publication sold in mainstream bookstores and featured on the front page of The New York Times (Streitmatter, 1998). From the late 1990s to the first few years of the 2000s, homosexuals were described as a threat to children’s development as well as a danger to family values in the news media. The legitimacy of same-sex marriage began to be discussed, because news coverage dominated the issue of same-sex marriage more frequently than before (Bennett, 1998). According to Gibson (2004), The New York Times first announced in August 2002 that its Sunday Styles section would begin publishing reports of same-sex commitment ceremonies along with the traditional heterosexual wedding announcements. Moreover, many newspapers joined this trend. Gibson (2004) found that not only the national newspapers, such as The New York Times, but also other regional newspapers, such as the Houston Chronicle and the Seattle Times, reported surprisingly large P.-L. Pan et al. / The Social Science Journal 47 (2010) 630–645 633 number of news stories about the everyday lives of gays and lesbians, especially since the Massachusetts Supreme Judicial Court ruled in November 2003 that same-sex couples had the same right to marry as heterosexuals. Previous studies investigated the increased amount of news coverage of gay and lesbian issues in the past six decades, but they did not analyze how homosexuals are framed in the news media in terms of public debates on the gay marriage issue. These studies failed to examine how newspapers report this national debate on gay marriage as well as what kinds of news frames are used in reporting this controversial issue. 1.2. Framing gay and lesbian partnersh",
"title": ""
},
{
"docid": "neg:1840380_12",
"text": "Person re-identification has been usually solved as either the matching of single-image representation (SIR) or the classification of cross-image representation (CIR). In this work, we exploit the connection between these two categories of methods, and propose a joint learning frame-work to unify SIR and CIR using convolutional neural network (CNN). Specifically, our deep architecture contains one shared sub-network together with two sub-networks that extract the SIRs of given images and the CIRs of given image pairs, respectively. The SIR sub-network is required to be computed once for each image (in both the probe and gallery sets), and the depth of the CIR sub-network is required to be minimal to reduce computational burden. Therefore, the two types of representation can be jointly optimized for pursuing better matching accuracy with moderate computational cost. Furthermore, the representations learned with pairwise comparison and triplet comparison objectives can be combined to improve matching performance. Experiments on the CUHK03, CUHK01 and VIPeR datasets show that the proposed method can achieve favorable accuracy while compared with state-of-the-arts.",
"title": ""
},
{
"docid": "neg:1840380_13",
"text": "Use of reporter genes provides a convenient way to study the activity and regulation of promoters and examine the rate and control of gene transcription. Many reporter genes and transfection methods can be efficiently used for this purpose. To investigate gene regulation and signaling pathway interactions during ovarian follicle development, we have examined promoter activities of several key follicle-regulating genes in the mouse ovary. In this chapter, we describe use of luciferase and beta-galactosidase genes as reporters and a cationic liposome mediated cell transfection method for studying regulation of activin subunit- and estrogen receptor alpha (ERalpha)-promoter activities. We have demonstrated that estrogen suppresses activin subunit gene promoter activity while activin increases ERalpha promoter activity and increases functional ER activity, suggesting a reciprocal regulation between activin and estrogen signaling in the ovary. We also discuss more broadly some key considerations in the use of reporter genes and cell-based transfection assays in endocrine research.",
"title": ""
},
{
"docid": "neg:1840380_14",
"text": "We searched for quantitative trait loci (QTL) associated with the palm oil fatty acid composition of mature fruits of the oil palm E. guineensis Jacq. in comparison with its wild relative E. oleifera (H.B.K) Cortés. The oil palm cross LM2T x DA10D between two heterozygous parents was considered in our experiment as an intraspecific representative of E. guineensis. Its QTLs were compared to QTLs published for the same traits in an interspecific Elaeis pseudo-backcross used as an indirect representative of E. oleifera. Few correlations were found in E. guineensis between pulp fatty acid proportions and yield traits, allowing for the rather independent selection of both types of traits. Sixteen QTLs affecting palm oil fatty acid proportions and iodine value were identified in oil palm. The phenotypic variation explained by the detected QTLs was low to medium in E. guineensis, ranging between 10% and 36%. The explained cumulative variation was 29% for palmitic acid C16:0 (one QTL), 68% for stearic acid C18:0 (two QTLs), 50% for oleic acid C18:1 (three QTLs), 25% for linoleic acid C18:2 (one QTL), and 40% (two QTLs) for the iodine value. Good marker co-linearity was observed between the intraspecific and interspecific Simple Sequence Repeat (SSR) linkage maps. Specific QTL regions for several traits were found in each mapping population. Our comparative QTL results in both E. guineensis and interspecific materials strongly suggest that, apart from two common QTL zones, there are two specific QTL regions with major effects, which might be one in E. guineensis, the other in E. oleifera, which are independent of each other and harbor QTLs for several traits, indicating either pleiotropic effects or linkage. Using QTL maps connected by highly transferable SSR markers, our study established a good basis to decipher in the future such hypothesis at the Elaeis genus level.",
"title": ""
},
{
"docid": "neg:1840380_15",
"text": "We propose a new approach to the task of fine grained entity type classifications based on label embeddings that allows for information sharing among related labels. Specifically, we learn an embedding for each label and each feature such that labels which frequently co-occur are close in the embedded space. We show that it outperforms state-of-the-art methods on two fine grained entity-classification benchmarks and that the model can exploit the finer-grained labels to improve classification of standard coarse types.",
"title": ""
},
{
"docid": "neg:1840380_16",
"text": "Purpose. The aim of the present prospective study was to investigate correlations between 3D facial soft tissue scan and lateral cephalometric radiography measurements. Materials and Methods. The study sample comprised 312 subjects of Caucasian ethnic origin. Exclusion criteria were all the craniofacial anomalies, noticeable asymmetries, and previous or current orthodontic treatment. A cephalometric analysis was developed employing 11 soft tissue landmarks and 14 sagittal and 14 vertical angular measurements corresponding to skeletal cephalometric variables. Cephalometric analyses on lateral cephalometric radiographies were performed for all subjects. The measurements were analysed in terms of their reliability and gender-age specific differences. Then, the soft tissue values were analysed for any correlations with lateral cephalometric radiography variables using Pearson correlation coefficient analysis. Results. Low, medium, and high correlations were found for sagittal and vertical measurements. Sagittal measurements seemed to be more reliable in providing a soft tissue diagnosis than vertical measurements. Conclusions. Sagittal parameters seemed to be more reliable in providing a soft tissue diagnosis similar to lateral cephalometric radiography. Vertical soft tissue measurements meanwhile showed a little less correlation with the corresponding cephalometric values perhaps due to the low reproducibility of cranial base and mandibular landmarks.",
"title": ""
},
{
"docid": "neg:1840380_17",
"text": "An efficient vehicle tracking system is designed and implemented for tracking the movement of any equipped vehicle from any location at any time. The proposed system made good use of a popular technology that combines a Smartphone application with a microcontroller. This will be easy to make and inexpensive compared to others. The designed in-vehicle device works using Global Positioning System (GPS) and Global system for mobile communication / General Packet Radio Service (GSM/GPRS) technology that is one of the most common ways for vehicle tracking. The device is embedded inside a vehicle whose position is to be determined and tracked in real-time. A microcontroller is used to control the GPS and GSM/GPRS modules. The vehicle tracking system uses the GPS module to get geographic coordinates at regular time intervals. The GSM/GPRS module is used to transmit and update the vehicle location to a database. A Smartphone application is also developed for continuously monitoring the vehicle location. The Google Maps API is used to display the vehicle on the map in the Smartphone application. Thus, users will be able to continuously monitor a moving vehicle on demand using the Smartphone application and determine the estimated distance and time for the vehicle to arrive at a given destination. In order to show the feasibility and effectiveness of the system, this paper presents experimental results of the vehicle tracking system and some experiences on practical implementations.",
"title": ""
},
{
"docid": "neg:1840380_18",
"text": "360° videos give viewers a spherical view and immersive experience of surroundings. However, one challenge of watching 360° videos is continuously focusing and re-focusing intended targets. To address this challenge, we developed two Focus Assistance techniques: Auto Pilot (directly bringing viewers to the target), and Visual Guidance (indicating the direction of the target). We conducted an experiment to measure viewers' video-watching experience and discomfort using these techniques and obtained their qualitative feedback. We showed that: 1) Focus Assistance improved ease of focus. 2) Focus Assistance techniques have specificity to video content. 3) Participants' preference of and experience with Focus Assistance depended not only on individual difference but also on their goal of watching the video. 4) Factors such as view-moving-distance, salience of the intended target and guidance, and language comprehension affected participants' video-watching experience. Based on these findings, we provide design implications for better 360° video focus assistance.",
"title": ""
}
] |
1840381 | Predicting Bike Usage for New York City's Bike Sharing System | [
{
"docid": "pos:1840381_0",
"text": "AN INDIVIDUAL CORRELATION is a correlation in which the statistical object or thing described is indivisible. The correlation between color and illiteracy for persons in the United States, shown later in Table I, is an individual correlation, because the kind of thing described is an indivisible unit, a person. In an individual correlation the variables are descriptive properties of individuals, such as height, income, eye color, or race, and not descriptive statistical constants such as rates or means. In an ecological correlation the statistical object is a group of persons. The correlation between the percentage of the population which is Negro and the percentage of the population which is illiterate for the 48 states, shown later as Figure 2, is an ecological correlation. The thing described is the population of a state, and not a single individual. The variables are percentages, descriptive properties of groups, and not descriptive properties of individuals. Ecological correlations are used in an impressive number of quantitative sociological studies, some of which by now have attained the status of classics: Cowles’ ‘‘Statistical Study of Climate in Relation to Pulmonary Tuberculosis’’; Gosnell’s ‘‘Analysis of the 1932 Presidential Vote in Chicago,’’ Factorial and Correlational Analysis of the 1934 Vote in Chicago,’’ and the more elaborate factor analysis in Machine Politics; Ogburn’s ‘‘How women vote,’’ ‘‘Measurement of the Factors in the Presidential Election of 1928,’’ ‘‘Factors in the Variation of Crime Among Cities,’’ and Groves and Ogburn’s correlation analyses in American Marriage and Family Relationships; Ross’ study of school attendance in Texas; Shaw’s Delinquency Areas study of the correlates of delinquency, as well as The more recent analyses in Juvenile Delinquency in Urban Areas; Thompson’s ‘‘Some Factors Influencing the Ratios of Children to Women in American Cities, 1930’’; Whelpton’s study of the correlates of birth rates, in ‘‘Geographic and Economic Differentials in Fertility;’’ and White’s ‘‘The Relation of Felonies to Environmental Factors in Indianapolis.’’ Although these studies and scores like them depend upon ecological correlations, it is not because their authors are interested in correlations between the properties of areas as such. Even out-and-out ecologists, in studying delinquency, for example, rely primarily upon data describing individuals, not areas. In each study which uses ecological correlations, the obvious purpose is to discover something about the behavior of individuals. Ecological correlations are used simply because correlations between the properties of individuals are not available. In each instance, however, the substitution is made tacitly rather than explicitly. The purpose of this paper is to clarify the ecological correlation problem by stating, mathematically, the exact relation between ecological and individual correlations, and by showing the bearing of that relation upon the practice of using ecological correlations as substitutes for individual correlations.",
"title": ""
}
] | [
{
"docid": "neg:1840381_0",
"text": "In this paper, we investigate how the Gauss–Newton Hessian matrix affects the basin of convergence in Newton-type methods. Although the Newton algorithm is theoretically superior to the Gauss–Newton algorithm and the Levenberg–Marquardt (LM) method as far as their asymptotic convergence rate is concerned, the LM method is often preferred in nonlinear least squares problems in practice. This paper presents a theoretical analysis of the advantage of the Gauss–Newton Hessian matrix. It is proved that the Gauss–Newton approximation function is the only nonnegative convex quadratic approximation that retains a critical property of the original objective function: taking the minimal value of zero on an (n − 1)-dimensional manifold (or affine subspace). Due to this property, the Gauss–Newton approximation does not change the zero-on-(n − 1)-D “structure” of the original problem, explaining the reason why the Gauss–Newton Hessian matrix is preferred for nonlinear least squares problems, especially when the initial point is far from the solution.",
"title": ""
},
{
"docid": "neg:1840381_1",
"text": "In an increasing number of scientific disciplines, large data collections are emerging as important community resources. In domains as diverse as global climate change, high energy physics, and computational genomics, the volume of interesting data is already measured in terabytes and will soon total petabytes. The communities of researchers that need to access and analyze this data (often using sophisticated and computationally expensive techniques) are often large and are almost always geographically distributed, as are the computing and storage resources that these communities rely upon to store and analyze their data [17]. This combination of large dataset size, geographic distribution of users and resources, and computationally intensive analysis results in complex and stringent performance demands that are not satisfied by any existing data management infrastructure. A large scientific collaboration may generate many queries, each involving access to—or supercomputer-class computations on—gigabytes or terabytes of data. Efficient and reliable execution of these queries may require careful management of terabyte caches, gigabit/s data transfer over wide area networks, coscheduling of data transfers and supercomputer computation, accurate performance estimations to guide the selection of dataset replicas, and other advanced techniques that collectively maximize use of scarce storage, networking, and computing resources. The literature offers numerous point solutions that address these issues (e.g., see [17, 14, 19, 3]). But no integrating architecture exists that allows us to identify requirements and components common to different systems and hence apply different technologies in a coordinated fashion to a range of dataintensive petabyte-scale application domains. Motivated by these considerations, we have launched a collaborative effort to design and produce such an integrating architecture. We call this architecture the data grid, to emphasize its role as a specialization and extension of the “Grid” that has emerged recently as an integrating infrastructure for distributed computation [10, 20, 15]. Our goal in this effort is to define the requirements that a data grid must satisfy and the components and APIs that will be required in its implementation. We hope that the definition of such an architecture will accelerate progress on petascale data-intensive computing by enabling the integration of currently disjoint approaches, encouraging the deployment of basic enabling technologies, and revealing technology gaps that require further research and development. In addition, we plan to construct a reference implementation for this architecture so as to enable large-scale experimentation.",
"title": ""
},
{
"docid": "neg:1840381_2",
"text": "Radiation therapy as a mode of cancer treatment is well-established. Telecobalt and telecaesium units were used extensively during the early days. Now, medical linacs offer more options for treatment delivery. However, such systems are prohibitively expensive and beyond the reach of majority of the worlds population living in developing and under-developed countries. In India, there is shortage of cancer treatment facilities, mainly due to the high cost of imported machines. Realizing the need of technology for affordable radiation therapy machines, Bhabha Atomic Research Centre (BARC), the premier nuclear research institute of Government of India, started working towards a sophisticated telecobalt machine. The Bhabhatron is the outcome of the concerted efforts of BARC and Panacea Medical Technologies Pvt. Ltd., India. It is not only less expensive, but also has a number of advanced features. It incorporates many safety and automation features hitherto unavailable in the most advanced telecobalt machine presently available. This paper describes various features available in Bhabhatron-II. The authors hope that this machine has the potential to make safe and affordable radiation therapy accessible to the common people in India as well as many other countries.",
"title": ""
},
{
"docid": "neg:1840381_3",
"text": "It is a well-known issue that attack primitives which exploit memory corruption vulnerabilities can abuse the ability of processes to automatically restart upon termination. For example, network services like FTP and HTTP servers are typically restarted in case a crash happens and this can be used to defeat Address Space Layout Randomization (ASLR). Furthermore, recently several techniques evolved that enable complete process memory scanning or code-reuse attacks against diversified and unknown binaries based on automated restarts of server applications. Until now, it is believed that client applications are immune against exploit primitives utilizing crashes. Due to their hard crash policy, such applications do not restart after memory corruption faults, making it impossible to touch memory more than once with wrong permissions. In this paper, we show that certain client application can actually survive crashes and are able to tolerate faults, which are normally critical and force program termination. To this end, we introduce a crash-resistance primitive and develop a novel memory scanning method with memory oracles without the need for control-flow hijacking. We show the practicability of our methods for 32-bit Internet Explorer 11 on Windows 8.1, and Mozilla Firefox 64-bit (Windows 8.1 and Linux 3.17.1). Furthermore, we demonstrate the advantages an attacker gains to overcome recent code-reuse defenses. Latest advances propose fine-grained re-randomization of the address space and code layout, or hide sensitive information such as code pointers to thwart tampering or misuse. We show that these defenses need improvements since crash-resistance weakens their security assumptions. To this end, we introduce the concept of CrashResistant Oriented Programming (CROP). We believe that our results and the implications of memory oracles will contribute to future research on defensive schemes against code-reuse attacks.",
"title": ""
},
{
"docid": "neg:1840381_4",
"text": " In many applications within the engineering world, an isolated generator is needed (e.g. in ships). Diesel units (diesel engine and synchronous generator) are the most common solution. However, the diesel engine can be eliminated if the energy from another source (e.g. the prime mover in a ship) is used to move the generator. This is the case for the Shaft Coupled Generator, where the coupling between the mover and the generator is made via a hydrostatic transmission. So that the mover can have different speeds and the generator is able to keep a constant frequency. The main problem of this system is the design of a speed governor that make possible the desired behaviour. In this paper a simulation model is presented in order to analyse the behaviour of this kind of systems and to help in the speed governor design. The model is achieved with an parameter identification process also depicted in the paper. A comparison between simulation results and measurements is made to shown the model validity. KeywordsModelling, Identification, Hydrostatic Transmission.",
"title": ""
},
{
"docid": "neg:1840381_5",
"text": "Orthogonal frequency division multiplexing (OFDM) has been widely adopted in modern wireless communication systems due to its robustness against the frequency selectivity of wireless channels. For coherent detection, channel estimation is essential for receiver design. Channel estimation is also necessary for diversity combining or interference suppression where there are multiple receive antennas. In this paper, we will present a survey on channel estimation for OFDM. This survey will first review traditional channel estimation approaches based on channel frequency response (CFR). Parametric model (PM)-based channel estimation, which is particularly suitable for sparse channels, will be also investigated in this survey. Following the success of turbo codes and low-density parity check (LDPC) codes, iterative processing has been widely adopted in the design of receivers, and iterative channel estimation has received a lot of attention since that time. Iterative channel estimation will be emphasized in this survey as the emerging iterative receiver improves system performance significantly. The combination of multiple-input multiple-output (MIMO) and OFDM has been widely accepted in modern communication systems, and channel estimation in MIMO-OFDM systems will also be addressed in this survey. Open issues and future work are discussed at the end of this paper.",
"title": ""
},
{
"docid": "neg:1840381_6",
"text": "Deep residual networks (ResNets) have significantly pushed forward the state-ofthe-art on image classification, increasing in performance as networks grow both deeper and wider. However, memory consumption becomes a bottleneck, as one needs to store the activations in order to calculate gradients using backpropagation. We present the Reversible Residual Network (RevNet), a variant of ResNets where each layer’s activations can be reconstructed exactly from the next layer’s. Therefore, the activations for most layers need not be stored in memory during backpropagation. We demonstrate the effectiveness of RevNets on CIFAR-10, CIFAR-100, and ImageNet, establishing nearly identical classification accuracy to equally-sized ResNets, even though the activation storage requirements are independent of depth.",
"title": ""
},
{
"docid": "neg:1840381_7",
"text": "Patients with Parkinson's disease may have difficulties in speaking because of the reduced coordination of the muscles that control breathing, phonation, articulation and prosody. Symptoms that may occur because of changes are weakening of the volume of the voice, voice monotony, changes in the quality of the voice, speed of speech, uncontrolled repetition of words. The evaluation of some of the disorders mentioned can be achieved through measuring the variation of parameters in an objective manner. It may be done to evaluate the response to the treatments with intra-daily frequency pre / post-treatment, as well as in the long term. Software systems allow these measurements also by recording the patient's voice. This allows to carry out a large number of tests by means of a larger number of patients and a higher frequency of the measurements. The main goal of our work was to design and realize Voxtester, an effective and simple to use software system useful to measure whether changes in voice emission are sensitive to pharmacologic treatments. Doctors and speech therapists can easily use it without going into the technical details, and we think that this goal is reached only by Voxtester, up to date.",
"title": ""
},
{
"docid": "neg:1840381_8",
"text": "Artificial neural networks have been recognized as a powerful tool for pattern classification problems, but a number of researchers have also suggested that straightforward neural-network approaches to pattern recognition are largely inadequate for difficult problems such as handwritten numeral recognition. In this paper, we present three sophisticated neural-network classifiers to solve complex pattern recognition problems: multiple multilayer perceptron (MLP) classifier, hidden Markov model (HMM)/MLP hybrid classifier, and structure-adaptive self-organizing map (SOM) classifier. In order to verify the superiority of the proposed classifiers, experiments were performed with the unconstrained handwritten numeral database of Concordia University, Montreal, Canada. The three methods have produced 97.35%, 96.55%, and 96.05% of the recognition rates, respectively, which are better than those of several previous methods reported in the literature on the same database.",
"title": ""
},
{
"docid": "neg:1840381_9",
"text": "We present Wave menus, a variant of multi-stroke marking menus designed for improving the novice mode of marking while preserving their efficiency in the expert mode of marking. Focusing on the novice mode, a criteria-based analysis of existing marking menus motivates the design of Wave menus. Moreover a user experiment is presented that compares four hierarchical marking menus in novice mode. Results show that Wave and compound-stroke menus are significantly faster and more accurate than multi-stroke menus in novice mode, while it has been shown that in expert mode the multi-stroke menus and therefore the Wave menus outperform the compound-stroke menus. Wave menus also require significantly less screen space than compound-stroke menus. As a conclusion, Wave menus offer the best performance for both novice and expert modes in comparison with existing multi-level marking menus, while requiring less screen space than compound-stroke menus.",
"title": ""
},
{
"docid": "neg:1840381_10",
"text": "Social media platforms facilitate the emergence of citizen communities that discuss real-world events. Their content reflects a variety of intent ranging from social good (e.g., volunteering to help) to commercial interest (e.g., criticizing product features). Hence, mining intent from social data can aid in filtering social media to support organizations, such as an emergency management unit for resource planning. However, effective intent mining is inherently challenging due to ambiguity in interpretation, and sparsity of relevant behaviors in social data. In this paper, we address the problem of multiclass classification of intent with a use-case of social data generated during crisis events. Our novel method exploits a hybrid feature representation created by combining top-down processing using knowledge-guided patterns with bottom-up processing using a bag-of-tokens model. We employ pattern-set creation from a variety of knowledge sources including psycholinguistics to tackle the ambiguity challenge, social behavior about conversations to enrich context, and contrast patterns to tackle the sparsity challenge. Our results show a significant absolute gain up to 7% in the F1 score relative to a baseline using bottom-up processing alone, within the popular multiclass frameworks of One-vs-One and One-vs-All. Intent mining can help design efficient cooperative information systems between citizens and organizations for serving organizational information needs.",
"title": ""
},
{
"docid": "neg:1840381_11",
"text": "Self-Organizing Map is an unsupervised neural network which combines vector quantization and vector projection. This makes it a powerful visualization tool. SOM Toolbox implements the SOM in the Matlab 5 computing environment. In this paper, computational complexity of SOM and the applicability of the Toolbox are investigated. It is seen that the Toolbox is easily applicable to small data sets (under 10000 records) but can also be applied in case of medium sized data sets. The prime limiting factor is map size: the Toolbox is mainly suitable for training maps with 1000 map units or less.",
"title": ""
},
{
"docid": "neg:1840381_12",
"text": "Understanding language requires both linguistic knowledge and knowledge about how the world works, also known as common-sense knowledge. We attempt to characterize the kinds of common-sense knowledge most often involved in recognizing textual entailments. We identify 20 categories of common-sense knowledge that are prevalent in textual entailment, many of which have received scarce attention from researchers building collections of knowledge.",
"title": ""
},
{
"docid": "neg:1840381_13",
"text": "Software testing is the process to uncover requirement, design and coding errors in the program. It is used to identify the correctness, completeness, security and quality of software products against a specification. Software testing is the process used to measure the quality of developed computer software. It exhibits all mistakes, errors and flaws in the developed software. There are many approaches to software testing, but effective testing of complex product is essentially a process of investigation, not merely a matter of creating and following route procedure. It is not possible to find out all the errors in the program. This fundamental problem in testing thus throws an open question, as to what would be the strategy we should adopt for testing. In our paper, we have described and compared the three most prevalent and commonly used software testing techniques for detecting errors, they are: white box testing, black box testing and grey box testing. KeywordsBlack Box; Grey Box; White Box.",
"title": ""
},
{
"docid": "neg:1840381_14",
"text": "PURPOSE OF REVIEW\nMany patients requiring cardiac arrhythmia device surgery are on chronic oral anticoagulation therapy. The periprocedural management of their anticoagulation presents a dilemma to physicians, particularly in the subset of patients with moderate-to-high risk of arterial thromboembolic events. Physicians have responded by treating patients with bridging anticoagulation while oral anticoagulation is temporarily discontinued. However, there are a number of downsides to bridging anticoagulation around device surgery; there is a substantial risk of significant device pocket hematoma with important clinical sequelae; bridging anticoagulation may lead to more arterial thromboembolic events and bridging anticoagulation is expensive.\n\n\nRECENT FINDINGS\nIn response to these issues, a number of centers have explored the option of performing device surgery without cessation of oral anticoagulation. The observational data suggest a greatly reduced hematoma rate with this strategy. Despite these encouraging results, most physicians are reluctant to move to operating on continued Coumadin in the absence of confirmatory data from a randomized trial.\n\n\nSUMMARY\nWe have designed a prospective, single-blind, randomized, controlled trial to address this clinical question. In the conventional arm, patients will be bridged. In the experimental arm, patients will continue on oral anticoagulation and the primary outcome is clinically significant hematoma. Our study has clinical relevance to at least 70 000 patients per year in North America.",
"title": ""
},
{
"docid": "neg:1840381_15",
"text": "We present data-dependent learning bounds for the general scenario of non-stationary nonmixing stochastic processes. Our learning guarantees are expressed in terms of a datadependent measure of sequential complexity and a discrepancy measure that can be estimated from data under some mild assumptions. We also also provide novel analysis of stable time series forecasting algorithm using this new notion of discrepancy that we introduce. We use our learning bounds to devise new algorithms for non-stationary time series forecasting for which we report some preliminary experimental results. An extended abstract has appeared in (Kuznetsov and Mohri, 2015).",
"title": ""
},
{
"docid": "neg:1840381_16",
"text": "The goals of our work are twofold: gain insight into how humans interact with complex data and visualizations thereof in order to make discoveries; and use our findings to develop a dialogue system for exploring data visualizations. Crucial to both goals is understanding and modeling of multimodal referential expressions, in particular those that include deictic gestures. In this paper, we discuss how context information affects the interpretation of requests and their attendant referring expressions in our data. To this end, we have annotated our multimodal dialogue corpus for context and both utterance and gesture information; we have analyzed whether a gesture co-occurs with a specific request or with the context surrounding the request; we have started addressing multimodal co-reference resolution by using Kinect to detect deictic gestures; and we have started identifying themes found in the annotated context, especially in what follows the request.",
"title": ""
},
{
"docid": "neg:1840381_17",
"text": "Smart grid initiatives will produce a grid that is increasingly dependent on its cyber infrastructure in order to support the numerous power applications necessary to provide improved grid monitoring and control capabilities. However, recent findings documented in government reports and other literature, indicate the growing threat of cyber-based attacks in numbers and sophistication targeting the nation's electric grid and other critical infrastructures. Specifically, this paper discusses cyber-physical security of Wide-Area Monitoring, Protection and Control (WAMPAC) from a coordinated cyber attack perspective and introduces a game-theoretic approach to address the issue. Finally, the paper briefly describes how cyber-physical testbeds can be used to evaluate the security research and perform realistic attack-defense studies for smart grid type environments.",
"title": ""
},
{
"docid": "neg:1840381_18",
"text": "This paper introduces a method to detect a fault associated with critical components/subsystems of an engineered system. It is required, in this case, to detect the fault condition as early as possible, with specified degree of confidence and a prescribed false alarm rate. Innovative features of the enabling technologies include a Bayesian estimation algorithm called particle filtering, which employs features or condition indicators derived from sensor data in combination with simple models of the system's degrading state to detect a deviation or discrepancy between a baseline (no-fault) distribution and its current counterpart. The scheme requires a fault progression model describing the degrading state of the system in the operation. A generic model based on fatigue analysis is provided and its parameters adaptation is discussed in detail. The scheme provides the probability of abnormal condition and the presence of a fault is confirmed for a given confidence level. The efficacy of the proposed approach is illustrated with data acquired from bearings typically found on aircraft and monitored via a properly instrumented test rig.",
"title": ""
},
{
"docid": "neg:1840381_19",
"text": "© 2010 ETRI Journal, Volume 32, Number 4, August 2010 In this paper, we present a low-voltage low-dropout voltage regulator (LDO) for a system-on-chip (SoC) application which, exploiting the multiplication of the Miller effect through the use of a current amplifier, is frequency compensated up to 1-nF capacitive load. The topology and the strategy adopted to design the LDO and the related compensation frequency network are described in detail. The LDO works with a supply voltage as low as 1.2 V and provides a maximum load current of 50 mA with a drop-out voltage of 200 mV: the total integrated compensation capacitance is about 40 pF. Measurement results as well as comparison with other SoC LDOs demonstrate the advantage of the proposed topology.",
"title": ""
}
] |
1840382 | Automatic Sentiment Analysis for Unstructured Data | [
{
"docid": "pos:1840382_0",
"text": "The main applications and challenges of one of the hottest research areas in computer science.",
"title": ""
}
] | [
{
"docid": "neg:1840382_0",
"text": "Massively parallel architectures such as the GPU are becoming increasingly important due to the recent proliferation of data. In this paper, we propose a key class of hybrid parallel graphlet algorithms that leverages multiple CPUs and GPUs simultaneously for computing k-vertex induced subgraph statistics (called graphlets). In addition to the hybrid multi-core CPU-GPU framework, we also investigate single GPU methods (using multiple cores) and multi-GPU methods that leverage all available GPUs simultaneously for computing induced subgraph statistics. Both methods leverage GPU devices only, whereas the hybrid multicore CPU-GPU framework leverages all available multi-core CPUs and multiple GPUs for computing graphlets in large networks. Compared to recent approaches, our methods are orders of magnitude faster, while also more cost effective enjoying superior performance per capita and per watt. In particular, the methods are up to 300 times faster than a recent state-of-the-art method. To the best of our knowledge, this is the first work to leverage multiple CPUs and GPUs simultaneously for computing induced subgraph statistics.",
"title": ""
},
{
"docid": "neg:1840382_1",
"text": "Recommender Systems are software tools and techniques for suggesting items to users by considering their preferences in an automated fashion. The suggestions provided are aimed at support users in various decisionmaking processes. Technically, recommender system has their origins in different fields such as Information Retrieval (IR), text classification, machine learning and Decision Support Systems (DSS). Recommender systems are used to address the Information Overload (IO) problem by recommending potentially interesting or useful items to users. They have proven to be worthy tools for online users to deal with the IO and have become one of the most popular and powerful tools in E-commerce. Many existing recommender systems rely on the Collaborative Filtering (CF) and have been extensively used in E-commerce .They have proven to be very effective with powerful techniques in many famous E-commerce companies. This study presents an overview of the field of recommender systems with current generation of recommendation methods and examines comprehensively CF systems with its algorithms.",
"title": ""
},
{
"docid": "neg:1840382_2",
"text": "2.1 Summary ............................................... 5 2.2 Definition .............................................. 6 2.3 History ................................................... 6 2.4 Overview of Currently Used Classification Systems and Terminology 7 2.5 Currently Used Terms in Classification of Osteomyelitis of the Jaws .................. 11 2.5.1 Acute/Subacute Osteomyelitis .............. 11 2.5.2 Chronic Osteomyelitis ........................... 11 2.5.3 Chronic Suppurative Osteomyelitis: Secondary Chronic Osteomyelitis .......... 11 2.5.4 Chronic Non-suppurative Osteomyelitis 11 2.5.5 Diffuse Sclerosing Osteomyelitis, Primary Chronic Osteomyelitis, Florid Osseous Dysplasia, Juvenile Chronic Osteomyelitis ............. 11 2.5.6 SAPHO Syndrome, Chronic Recurrent Multifocal Osteomyelitis (CRMO) ........... 13 2.5.7 Periostitis Ossificans, Garrès Osteomyelitis ............................. 13 2.5.8 Other Commonly Used Terms ................ 13 2.6 Osteomyelitis of the Jaws: The Zurich Classification System ........... 16 2.6.1 General Aspects of the Zurich Classification System ............................. 16 2.6.2 Acute Osteomyelitis and Secondary Chronic Osteomyelitis ........................... 17 2.6.3 Clinical Presentation ............................. 26 2.6.4 Primary Chronic Osteomyelitis .............. 34 2.7 Differential Diagnosis ............................ 48 2.7.1 General Considerations ......................... 48 2.7.2 Differential Diagnosis of Acute and Secondary Chronic Osteomyelitis ... 50 2.7.3 Differential Diagnosis of Primary Chronic Osteomyelitis ........................... 50 2.1 Summary",
"title": ""
},
{
"docid": "neg:1840382_3",
"text": "Autonomous driving with high velocity is a research hotspot which challenges the scientists and engineers all over the world. This paper proposes a scheme of indoor autonomous car based on ROS which combines the method of Deep Learning using Convolutional Neural Network (CNN) with statistical approach using liDAR images and achieves a robust obstacle avoidance rate in cruise mode. In addition, the design and implementation of autonomous car are also presented in detail which involves the design of Software Framework, Hector Simultaneously Localization and Mapping (Hector SLAM) by Teleoperation, Autonomous Exploration, Path Plan, Pose Estimation, Command Processing, and Data Recording (Co- collection). what’s more, the schemes of outdoor autonomous car, communication, and security are also discussed. Finally, all functional modules are integrated in nVidia Jetson TX1.",
"title": ""
},
{
"docid": "neg:1840382_4",
"text": "Crimes will somehow influence organizations and institutions when occurred frequently in a society. Thus, it seems necessary to study reasons, factors and relations between occurrence of different crimes and finding the most appropriate ways to control and avoid more crimes. The main objective of this paper is to classify clustered crimes based on occurrence frequency during different years. Data mining is used extensively in terms of analysis, investigation and discovery of patterns for occurrence of different crimes. We applied a theoretical model based on data mining techniques such as clustering and classification to real crime dataset recorded by police in England and Wales within 1990 to 2011. We assigned weights to the features in order to improve the quality of the model and remove low value of them. The Genetic Algorithm (GA) is used for optimizing of Outlier Detection operator parameters using RapidMiner tool. Keywords—crime; clustering; classification; genetic algorithm; weighting; rapidminer",
"title": ""
},
{
"docid": "neg:1840382_5",
"text": "Convolutional networks for image classification progressively reduce resolution until the image is represented by tiny feature maps in which the spatial structure of the scene is no longer discernible. Such loss of spatial acuity can limit image classification accuracy and complicate the transfer of the model to downstream applications that require detailed scene understanding. These problems can be alleviated by dilation, which increases the resolution of output feature maps without reducing the receptive field of individual neurons. We show that dilated residual networks (DRNs) outperform their non-dilated counterparts in image classification without increasing the models depth or complexity. We then study gridding artifacts introduced by dilation, develop an approach to removing these artifacts (degridding), and show that this further increases the performance of DRNs. In addition, we show that the accuracy advantage of DRNs is further magnified in downstream applications such as object localization and semantic segmentation.",
"title": ""
},
{
"docid": "neg:1840382_6",
"text": "This paper presents a robust stereo-vision-based drivable road detection and tracking system that was designed to navigate an intelligent vehicle through challenging traffic scenarios and increment road safety in such scenarios with advanced driver-assistance systems (ADAS). This system is based on a formulation of stereo with homography as a maximum a posteriori (MAP) problem in a Markov random held (MRF). Under this formulation, we develop an alternating optimization algorithm that alternates between computing the binary labeling for road/nonroad classification and learning the optimal parameters from the current input stereo pair itself. Furthermore, online extrinsic camera parameter reestimation and automatic MRF parameter tuning are performed to enhance the robustness and accuracy of the proposed system. In the experiments, the system was tested on our experimental intelligent vehicles under various real challenging scenarios. The results have substantiated the effectiveness and the robustness of the proposed system with respect to various challenging road scenarios such as heterogeneous road materials/textures, heavy shadows, changing illumination and weather conditions, and dynamic vehicle movements.",
"title": ""
},
{
"docid": "neg:1840382_7",
"text": "Natural Language Inference (NLI) task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis. We introduce Interactive Inference Network (IIN), a novel class of neural network architectures that is able to achieve high-level understanding of the sentence pair by hierarchically extracting semantic features from interaction space. We show that an interaction tensor (attention weight) contains semantic information to solve natural language inference, and a denser interaction tensor contains richer semantic information. One instance of such architecture, Densely Interactive Inference Network (DIIN), demonstrates the state-of-the-art performance on large scale NLI copora and large-scale NLI alike corpus. It’s noteworthy that DIIN achieve a greater than 20% error reduction on the challenging Multi-Genre NLI (MultiNLI; Williams et al. 2017) dataset with respect to the strongest published system.",
"title": ""
},
{
"docid": "neg:1840382_8",
"text": "3D mesh segmentation has become a crucial part of many applications in 3D shape analysis. In this paper, a comprehensive survey on 3D mesh segmentation methods is presented. Analysis of the existing methodologies is addressed taking into account a new categorization along with the performance evaluation frameworks which aim to support meaningful benchmarks not only qualitatively but also in a quantitative manner. This survey aims to capture the essence of current trends in 3D mesh segmentation.",
"title": ""
},
{
"docid": "neg:1840382_9",
"text": "To date, the growth of electronic personal data leads to a trend that data owners prefer to remotely outsource their data to clouds for the enjoyment of the high-quality retrieval and storage service without worrying the burden of local data management and maintenance. However, secure share and search for the outsourced data is a formidable task, which may easily incur the leakage of sensitive personal information. Efficient data sharing and searching with security is of critical importance. This paper, for the first time, proposes a searchable attribute-based proxy reencryption system. When compared with the existing systems only supporting either searchable attribute-based functionality or attribute-based proxy reencryption, our new primitive supports both abilities and provides flexible keyword update service. In particular, the system enables a data owner to efficiently share his data to a specified group of users matching a sharing policy and meanwhile, the data will maintain its searchable property but also the corresponding search keyword(s) can be updated after the data sharing. The new mechanism is applicable to many real-world applications, such as electronic health record systems. It is also proved chosen ciphertext secure in the random oracle model.",
"title": ""
},
{
"docid": "neg:1840382_10",
"text": "The Industry 4.0 is a vision that includes connecting more intensively physical systems with their virtual counterparts in computers. This computerization of manufacturing will bring many advantages, including allowing data gathering, integration and analysis in the scale not seen earlier. In this paper we describe our Semantic Big Data Historian that is intended to handle large volumes of heterogeneous data gathered from distributed data sources. We describe the approach and implementation with a special focus on using Semantic Web technologies for integrating the data.",
"title": ""
},
{
"docid": "neg:1840382_11",
"text": "Stakeholder marketing has established foundational support for redefining and broadening the marketing discipline. An extensive literature review of 58 marketing articles that address six primary stakeholder groups (i.e., customers, suppliers, employees, shareholders, regulators, and the local community) provides evidence of the important role the groups play in stakeholder marketing. Based on this review and in conjunction with established marketing theory, we define stakeholder marketing as “activities and processes within a system of social institutions that facilitate and maintain value through exchange relationships with multiple stakeholders.” In an effort to focus on the stakeholder marketing field of study, we offer both a conceptual framework for understanding the pivotal role of stakeholder marketing and research questions for examining the linkages among stakeholder exchanges, value creation, and marketing outcomes.",
"title": ""
},
{
"docid": "neg:1840382_12",
"text": "The advent of social media and microblogging platforms has radically changed the way we consume information and form opinions. In this paper, we explore the anatomy of the information space on Facebook by characterizing on a global scale the news consumption patterns of 376 million users over a time span of 6 y (January 2010 to December 2015). We find that users tend to focus on a limited set of pages, producing a sharp community structure among news outlets. We also find that the preferences of users and news providers differ. By tracking how Facebook pages \"like\" each other and examining their geolocation, we find that news providers are more geographically confined than users. We devise a simple model of selective exposure that reproduces the observed connectivity patterns.",
"title": ""
},
{
"docid": "neg:1840382_13",
"text": "In this paper we present a neural network based system for automated e-mail filing into folders and antispam filtering. The experiments show that it is more accurate than several other techniques. We also investigate the effects of various feature selection, weighting and normalization methods, and also the portability of the anti-spam filter across different users.",
"title": ""
},
{
"docid": "neg:1840382_14",
"text": "The automatic patch-based exploit generation problem is: given a program P and a patched version of the program P', automatically generate an exploit for the potentially unknown vulnerability present in P but fixed in P'. In this paper, we propose techniques for automatic patch-based exploit generation, and show that our techniques can automatically generate exploits for 5 Microsoft programs based upon patches provided via Windows Update. Although our techniques may not work in all cases, a fundamental tenant of security is to conservatively estimate the capabilities of attackers. Thus, our results indicate that automatic patch-based exploit generation should be considered practical. One important security implication of our results is that current patch distribution schemes which stagger patch distribution over long time periods, such as Windows Update, may allow attackers who receive the patch first to compromise the significant fraction of vulnerable hosts who have not yet received the patch.",
"title": ""
},
{
"docid": "neg:1840382_15",
"text": "In Theory III we characterize with a mix of theory and experiments the generalization properties of Stochastic Gradient Descent in overparametrized deep convolutional networks. We show that Stochastic Gradient Descent (SGD) selects with high probability solutions that 1) have zero (or small) empirical error, 2) are degenerate as shown in Theory II and 3) have maximum generalization. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF 123 1216. H.M. is supported in part by ARO Grant W911NF-15-10385.",
"title": ""
},
{
"docid": "neg:1840382_16",
"text": "Additive manufacturing technology using inkjet offers several improvements to electronics manufacturing compared to current nonadditive masking technologies. Manufacturing processes can be made more efficient, straightforward and flexible compared to subtractive masking processes, several time-consuming and expensive steps can be omitted. Due to the additive process, material loss is minimal, because material is never removed as with etching processes. The amounts of used material and waste are smaller, which is advantageous in both productivity and environmental means. Furthermore, the additive inkjet manufacturing process is flexible allowing fast prototyping, easy design changes and personalization of products. Additive inkjet processing offers new possibilities to electronics integration, by enabling direct writing on various surfaces, and component interconnection without a specific substrate. The design and manufacturing of inkjet printed modules differs notably from the traditional way to manufacture electronics. In this study a multilayer inkjet interconnection process to integrate functional systems was demonstrated, and the issues regarding the design and manufacturing were considered. r 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840382_17",
"text": "In areas approaching malaria elimination, human mobility patterns are important in determining the proportion of malaria cases that are imported or the result of low-level, endemic transmission. A convenience sample of participants enrolled in a longitudinal cohort study in the catchment area of Macha Hospital in Choma District, Southern Province, Zambia, was selected to carry a GPS data logger for one month from October 2013 to August 2014. Density maps and activity space plots were created to evaluate seasonal movement patterns. Time spent outside the household compound during anopheline biting times, and time spent in malaria high- and low-risk areas, were calculated. There was evidence of seasonal movement patterns, with increased long-distance movement during the dry season. A median of 10.6% (interquartile range (IQR): 5.8-23.8) of time was spent away from the household, which decreased during anopheline biting times to 5.6% (IQR: 1.7-14.9). The per cent of time spent in malaria high-risk areas for participants residing in high-risk areas ranged from 83.2% to 100%, but ranged from only 0.0% to 36.7% for participants residing in low-risk areas. Interventions targeted at the household may be more effective because of restricted movement during the rainy season, with limited movement between high- and low-risk areas.",
"title": ""
},
{
"docid": "neg:1840382_18",
"text": "The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "neg:1840382_19",
"text": "Currently, most top-performing text detection networks tend to employ fixed-size anchor boxes to guide the search for text instances. ey usually rely on a large amount of anchors with different scales to discover texts in scene images, thus leading to high computational cost. In this paper, we propose an end-to-end boxbased text detector with scale-adaptive anchors, which can dynamically adjust the scales of anchors according to the sizes of underlying texts by introducing an additional scale regression layer. e proposed scale-adaptive anchors allow us to use a few number of anchors to handle multi-scale texts and therefore significantly improve the computational efficiency. Moreover, compared to discrete scales used in previous methods, the learned continuous scales are more reliable, especially for small texts detection. Additionally, we propose Anchor convolution to beer exploit necessary feature information by dynamically adjusting the sizes of receptive fields according to the learned scales. Extensive experiments demonstrate that the proposed detector is fast, taking only 0.28 second per image, while outperforming most state-of-the-art methods in accuracy.",
"title": ""
}
] |
1840383 | Knowledge sharing and social media: Altruism, perceived online attachment motivation, and perceived online relationship commitment | [
{
"docid": "pos:1840383_0",
"text": "a r t i c l e i n f o a b s t r a c t The success of knowledge management initiatives depends on knowledge sharing. This paper reviews qualitative and quantitative studies of individual-level knowledge sharing. Based on the literature review we developed a framework for understanding knowledge sharing research. The framework identifies five areas of emphasis of knowledge sharing research: organizational context, interpersonal and team characteristics, cultural characteristics, individual characteristics, and motivational factors. For each emphasis area the paper discusses the theoretical frameworks used and summarizes the empirical research results. The paper concludes with a discussion of emerging issues, new research directions, and practical implications of knowledge sharing research. Knowledge is a critical organizational resource that provides a sustainable competitive advantage in a competitive and dynamic economy (e. To gain a competitive advantage it is necessary but insufficient for organizations to rely on staffing and training systems that focus on selecting employees who have specific knowledge, skills, abilities, or competencies or helping employees acquire them (e.g., Brown & Duguid, 1991). Organizations must also consider how to transfer expertise and knowledge from experts who have it to novices who need to know (Hinds, Patterson, & Pfeffer, 2001). That is, organizations need to emphasize and more effectively exploit knowledge-based resources that already exist within the organization As one knowledge-centered activity, knowledge sharing is the fundamental means through which employees can contribute to knowledge application, innovation, and ultimately the competitive advantage of the organization (Jackson, Chuang, Harden, Jiang, & Joseph, 2006). Knowledge sharing between employees and within and across teams allows organizations to exploit and capitalize on knowledge-based resources Research has shown that knowledge sharing and combination is positively related to reductions in production costs, faster completion of new product development projects, team performance, firm innovation capabilities, and firm performance including sales growth and revenue from new products and services (e. Because of the potential benefits that can be realized from knowledge sharing, many organizations have invested considerable time and money into knowledge management (KM) initiatives including the development of knowledge management systems (KMS) which use state-of-the-art technology to facilitate the collection, storage, and distribution of knowledge. However, despite these investments it has been estimated that at least $31.5 billion are lost per year by Fortune 500",
"title": ""
},
{
"docid": "pos:1840383_1",
"text": "This study reports on an exploratory survey conducted to investigate the use of social media technologies for sharing information. This paper explores the issue of credibility of the information shared in the context of computer-mediated communication. Four categories of information were explored: sensitive, sensational, political and casual information, across five popular social media technologies: social networking sites, micro-blogging sites, wikis, online forums, and online blogs. One hundred and fourteen active users of social media technologies participated in the study. The exploratory analysis conducted in this study revealed that information producers use different cues to indicate credibility of the information they share on different social media sites. Organizations can leverage findings from this study to improve targeted engagement with their customers. The operationalization of how information credibility is codified by information producers contributes to knowledge in social media research. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "neg:1840383_0",
"text": "Basic concepts of ANNs together with three most widely used ANN learning strategies (error back-propagation, Kohonen, and counterpropagation) are explained and discussed. In order to show how the explained methods can be applied to chemical problems, one simple example, the classification and the prediction of the origin of different olive oil samples, each represented by eigtht fatty acid concentrations, is worked out in detail.",
"title": ""
},
{
"docid": "neg:1840383_1",
"text": "In this paper, we propose a method for training neural networks when we have a large set of data with weak labels and a small amount of data with true labels. In our proposed model, we train two neural networks: a target network, the learner and a confidence network, the meta-learner. The target network is optimized to perform a given task and is trained using a large set of unlabeled data that are weakly annotated. We propose to control the magnitude of the gradient updates to the target network using the scores provided by the second confidence network, which is trained on a small amount of supervised data. Thus we avoid that the weight updates computed from noisy labels harm the quality of the target networkmodel.",
"title": ""
},
{
"docid": "neg:1840383_2",
"text": "This paper presents an overview of the inaugural Amazon Picking Challenge along with a summary of a survey conducted among the 26 participating teams. The challenge goal was to design an autonomous robot to pick items from a warehouse shelf. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning, and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned based on survey results and the authors’ personal experiences during the challenge.Note to Practitioners—Perception, motion planning, grasping, and robotic system engineering have reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semistructured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.",
"title": ""
},
{
"docid": "neg:1840383_3",
"text": "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.",
"title": ""
},
{
"docid": "neg:1840383_4",
"text": "[This corrects the article on p. 662 in vol. 60, PMID: 27729694.].",
"title": ""
},
{
"docid": "neg:1840383_5",
"text": "ibrant public securities markets rely on complex systems of supporting institutions that promote the governance of publicly traded companies. Corporate governance structures serve: 1) to ensure that minority shareholders receive reliable information about the value of firms and that a company’s managers and large shareholders do not cheat them out of the value of their investments, and 2) to motivate managers to maximize firm value instead of pursuing personal objectives.1 Institutions promoting the governance of firms include reputational intermediaries such as investment banks and audit firms, securities laws and regulators such as the Securities and Exchange Commission (SEC) in the United States, and disclosure regimes that produce credible firm-specific information about publicly traded firms. In this paper, we discuss economics-based research focused primarily on the governance role of publicly reported financial accounting information. Financial accounting information is the product of corporate accounting and external reporting systems that measure and routinely disclose audited, quantitative data concerning the financial position and performance of publicly held firms. Audited balance sheets, income statements, and cash-flow statements, along with supporting disclosures, form the foundation of the firm-specific information set available to investors and regulators. Developing and maintaining a sophisticated financial disclosure regime is not cheap. Countries with highly developed securities markets devote substantial resources to producing and regulating the use of extensive accounting and disclosure rules that publicly traded firms must follow. Resources expended are not only financial, but also include opportunity costs associated with deployment of highly educated human capital, including accountants, lawyers, academicians, and politicians. In the United States, the SEC, under the oversight of the U.S. Congress, is responsible for maintaining and regulating the required accounting and disclosure rules that firms must follow. These rules are produced both by the SEC itself and through SEC oversight of private standards-setting bodies such as the Financial Accounting Standards Board and the Emerging Issues Task Force, which in turn solicit input from business leaders, academic researchers, and regulators around the world. In addition to the accounting standards-setting investments undertaken by many individual countries and securities exchanges, there is currently a major, well-funded effort in progress, under the auspices of the International Accounting Standards Board (IASB), to produce a single set of accounting standards that will ultimately be acceptable to all countries as the basis for cross-border financing transactions.2 The premise behind governance research in accounting is that a significant portion of the return on investment in accounting regimes derives from enhanced governance of firms, which in turn facilitates the operation of securities Robert M. Bushman and Abbie J. Smith",
"title": ""
},
{
"docid": "neg:1840383_6",
"text": "This paper describes an efficient method to make individual faces for animation from several possible inputs. We present a method to reconstruct 3D facial model for animation from two orthogonal pictures taken from front and side views or from range data obtained from any available resources. It is based on extracting features on a face in a semiautomatic way and modifying a generic model with detected feature points. Then the fine modifications follow if range data is available. Automatic texture mapping is employed using a composed image from the two images. The reconstructed 3Dface can be animated immediately with given expression parameters. Several faces by one methodology applied to different input data to get a final animatable face are illustrated.",
"title": ""
},
{
"docid": "neg:1840383_7",
"text": "This paper identifies the possibility of using electronic compasses and accelerometers in mobile phones, as a simple and scalable method of localization without war-driving. The idea is not fundamentally different from ship or air navigation systems, known for centuries. Nonetheless, directly applying the idea to human-scale environments is non-trivial. Noisy phone sensors and complicated human movements present practical research challenges. We cope with these challenges by recording a person's walking patterns, and matching it against possible path signatures generated from a local electronic map. Electronic maps enable greater coverage, while eliminating the reliance on WiFi infrastructure and expensive war-driving. Measurements on Nokia phones and evaluation with real users confirm the anticipated benefits. Results show a location accuracy of less than 11m in regions where today's localization services are unsatisfactory or unavailable.",
"title": ""
},
{
"docid": "neg:1840383_8",
"text": "The emerging three-dimensional (3D) chip architectures, with their intrinsic capability of reducing the wire length, is one of the promising solutions to mitigate the interconnect problem in modern microprocessor designs. 3D memory stacking also enables much higher memory bandwidth for future chip-multiprocessor design, mitigating the ``memory wall\" problem. In addition, heterogenous integration enabled by 3D technology can also result in innovation designs for future microprocessors. This paper serves as a survey of various approaches to design future 3D microprocessors, leveraging the benefits of fast latency, higher bandwidth, and heterogeneous integration capability that are offered by 3D technology.",
"title": ""
},
{
"docid": "neg:1840383_9",
"text": "BACKGROUND\nSkin atrophy is a common manifestation of aging and is frequently accompanied by ulceration and delayed wound healing. With an increasingly aging patient population, management of skin atrophy is becoming a major challenge in the clinic, particularly in light of the fact that there are no effective therapeutic options at present.\n\n\nMETHODS AND FINDINGS\nAtrophic skin displays a decreased hyaluronate (HA) content and expression of the major cell-surface hyaluronate receptor, CD44. In an effort to develop a therapeutic strategy for skin atrophy, we addressed the effect of topical administration of defined-size HA fragments (HAF) on skin trophicity. Treatment of primary keratinocyte cultures with intermediate-size HAF (HAFi; 50,000-400,000 Da) but not with small-size HAF (HAFs; <50,000 Da) or large-size HAF (HAFl; >400,000 Da) induced wild-type (wt) but not CD44-deficient (CD44-/-) keratinocyte proliferation. Topical application of HAFi caused marked epidermal hyperplasia in wt but not in CD44-/- mice, and significant skin thickening in patients with age- or corticosteroid-related skin atrophy. The effect of HAFi on keratinocyte proliferation was abrogated by antibodies against heparin-binding epidermal growth factor (HB-EGF) and its receptor, erbB1, which form a complex with a particular isoform of CD44 (CD44v3), and by tissue inhibitor of metalloproteinase-3 (TIMP-3).\n\n\nCONCLUSIONS\nOur observations provide a novel CD44-dependent mechanism for HA oligosaccharide-induced keratinocyte proliferation and suggest that topical HAFi application may provide an attractive therapeutic option in human skin atrophy.",
"title": ""
},
{
"docid": "neg:1840383_10",
"text": "We have developed a method for rigidly aligning images of tubes. This paper presents an evaluation of the consistency of that method for three-dimensional images of human vasculature. Vascular images may contain alignment ambiguities, poorly corresponding vascular networks, and non-rigid deformations, yet the Monte Carlo experiments presented in this paper show that our method registers vascular images with sub-voxel consistency in a matter of seconds. Furthermore, we show that the method's insensitivity to non-rigid deformations enables the localization, quantification, and visualization of those deformations. Our method aligns a source image with a target image by registering a model of the tubes in the source image directly with the target image. Time can be spent to extract an accurate model of the tubes in the source image. Multiple target images can then be registered with that model without additional extractions. Our registration method builds upon the principles of our tubular object segmentation work that combines dynamic-scale central ridge traversal with radius estimation. In particular, our registration method's consistency stems from incorporating multi-scale ridge and radius measures into the model-image match metric. Additionally, the method's speed is due in part to the use of coarse-to-fine optimization strategies that are enabled by measures made during model extraction and by the parameters inherent to the model-image match metric.",
"title": ""
},
{
"docid": "neg:1840383_11",
"text": "Decentralized partially observable Markov decision processes (Dec-POMDPs) are a powerful tool for modeling multi-agent planning and decision-making under uncertainty. Prevalent Dec-POMDP solution techniques require centralized computation given full knowledge of the underlying model. Multi-agent reinforcement learning (MARL) based approaches have been recently proposed for distributed solution of during learning and policy execution are identical. In some practical scenarios this may not be the case. We propose a novel MARL approach in which agents are allowed to rehearse with information that will not be available during policy execution. The key is for the agents to learn policies that do not explicitly rely on these rehearsal features. We also establish a weak convergence result for our algorithm, RLaR, demonstrating that RLaR converges in probability when certain conditions are met. We show experimentally that incorporating rehearsal features can enhance the learning rate compared to non-rehearsalbased learners, and demonstrate fast, (near) optimal performance on many existing benchmark DecPOMDP problems. We also compare RLaR against an existing approximate Dec-POMDP solver which, like RLaR, does not assume a priori knowledge of the model. While RLaR's policy representation is not as scalable, we show that RLaR produces higher quality policies for most problems and horizons studied. & 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840383_12",
"text": "With the advancement of radio access networks, more and more mobile data content needs to be transported by optical networks. Mobile fronthaul is an important network segment that connects centralized baseband units (BBUs) with remote radio units in cloud radio access networks (C-RANs). It enables advanced wireless technologies such as coordinated multipoint and massive multiple-input multiple-output. Mobile backhaul, on the other hand, connects BBUs with core networks to transport the baseband data streams to their respective destinations. Optical access networks are well positioned to meet the first optical communication demands of C-RANs. To better address the stringent requirements of future generations of wireless networks, such as the fifth-generation (5G) wireless, optical access networks need to be improved and enhanced. In this paper, we review emerging optical access network technologies that aim to support 5G wireless with high capacity, low latency, and low cost and power per bit. Advances in high-capacity passive optical networks (PONs), such as 100 Gbit/s PON, will be reviewed. Among the topics discussed are advanced modulation and detection techniques, digital signal processing tailored for optical access networks, and efficient mobile fronthaul techniques. We also discuss the need for coordination between RAN and PON to simplify the overall network, reduce the network latency, and improve the network cost efficiency and power efficiency.",
"title": ""
},
{
"docid": "neg:1840383_13",
"text": "Traditional companies are increasingly turning towards platform strategies to gain speed in the development of digital value propositions and prepare for the challenges arising from digitalization. This paper reports on the digitalization journey of the LEGO Group to elaborate how brick-and-mortar companies can break away from a drifting information infrastructure and trigger its transformation into a digital platform. Conceptualizing information infrastructure evolution as path-dependent process, the case study explores how mindful deviations by Enterprise Architects guide installed base cultivation through collective action and trigger the creation of a new ‘platformization’ path. Additionally, the findings portrait Enterprise Architecture management as a process of socio-technical path constitution that is equally shaped by deliberate human interventions and emergent forces through path dependencies.",
"title": ""
},
{
"docid": "neg:1840383_14",
"text": "Although within SDN community, the notion of logically centralized network control is well understood and agreed upon, many different approaches exist on how one should deliver such a logically centralized view to multiple distributed controller instances. In this paper, we survey and investigate those approaches. We discover that we can classify the methods into several design choices that are trending among SDN adopters. Each design choice may influence several SDN issues such as scalability, robustness, consistency, and privacy. Thus, we further analyze the pros and cons of each model regarding these matters. We conclude that each design begets some characteristics. One may excel in resolving one issue but perform poor in another. We also present which design combinations one should pick to build distributed controller that is scalable, robust, consistent",
"title": ""
},
{
"docid": "neg:1840383_15",
"text": "This study was undertaken to report clinical outcomes after high tibial osteotomy (HTO) in patients with a discoid lateral meniscus and to determine (1) whether discoid lateral meniscus degeneration by magnetic resonance imaging (MRI) progresses after HTO and (2) whether this progression adversely affects clinical results. The records of 292 patients (292 knees) who underwent medial opening HTO were retrospectively reviewed, and discoid types and grades of lateral meniscus degeneration as determined by MRI were recorded preoperatively. Of the 292 patients, 17 (5.8 %) had a discoid lateral meniscus, and postoperative MR images were obtained at least 2 years after HTO for 15 of these 17 patients. American Knee Society (AKS) pain, knee and function scores significantly improved in the 15 patients after surgery (p < 0.001). Eight (53 %) had an incomplete and 7 (47 %) had a complete discoid lateral meniscus. By preoperative MRI, the distribution of meniscal degeneration was as follows: grade 1, 4 patients; grade 2, 7 patients; and grade 3, 4 patients. At the final follow-up, the distribution of degeneration was as follows: grade 1, 2 patients; grade 2, 5 patients; and grade 3, 8 patients. Two patients with grade 3 degeneration who did not undergo partial meniscectomy showed tear progression. Thus, 8 of the 15 patients (53 %) experienced progressive discoid meniscal degeneration after HTO. Median AKS pain score was significantly lower in the progression group than in the non-progression group (40 vs 45, respectively). The results of this study suggest that increased load on the lateral compartment after HTO can accelerate discoid lateral meniscus degeneration by MRI and caution that when a discoid lateral meniscus is found by preoperative MRI, progressive degeneration may occur after HTO and clinical outcome may be adversely affected. Therapeutic study, Level IV.",
"title": ""
},
{
"docid": "neg:1840383_16",
"text": "The relationship between games and story remains a divisive question among game fans, designers, and scholars alike. At a recent academic Games Studies conference, for example, a blood feud threatened to erupt between the self-proclaimed Ludologists, who wanted to see the focus shift onto the mechanics of game play, and the Narratologists, who were interested in studying games alongside other storytelling media.(1) Consider some recent statements made on this issue:",
"title": ""
},
{
"docid": "neg:1840383_17",
"text": "This paper describes work in progress. Our research is focused on efficient construction of effective models for spam detection. Clustering messages allows for efficient labeling of a representative sample of messages for learning a spam detection model using a Random Forest for classification and active learning for refining the classification model. Results are illustrated for the 2007 TREC Public Spam Corpus. The area under the Receiver Operating Characteristic (ROC) curve is competitive with other solutions while requiring much fewer labeled training examples.",
"title": ""
},
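The pipeline this abstract describes (cluster the messages, label a representative sample, train a Random Forest, then refine it with active learning) can be sketched as follows. This is only an illustrative reconstruction: the TF-IDF features, the cluster count, the uncertainty criterion, and all hyperparameters are assumptions rather than details taken from the paper, and the oracle() callback stands in for a human labeler.

```python
# Illustrative sketch (not the authors' code): cluster-then-label spam detection
# with a Random Forest refined by uncertainty-based active learning.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_label_train(messages, oracle, n_clusters=50, rounds=5, batch=20):
    """messages: list of raw emails; oracle(text) -> 0 (ham) or 1 (spam)."""
    X = TfidfVectorizer(max_features=5000).fit_transform(messages)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    dist = km.transform(X)                     # distance of every message to every centroid
    labeled = {}
    for c in range(n_clusters):                # label only one representative per cluster
        rep = int(np.argmin(dist[:, c]))
        labeled[rep] = oracle(messages[rep])
    # Assumes both classes appear among the cluster representatives.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    for _ in range(rounds):                    # active-learning refinement
        idx = sorted(labeled)
        clf.fit(X[idx], [labeled[i] for i in idx])
        proba = clf.predict_proba(X)[:, 1]
        uncertainty = np.abs(proba - 0.5)      # 0 means the model is maximally unsure
        queries = [int(i) for i in np.argsort(uncertainty) if int(i) not in labeled][:batch]
        for i in queries:
            labeled[i] = oracle(messages[i])
    return clf
```

On a corpus such as the 2007 TREC Public Spam Corpus, one would then compare the ROC curve of this classifier against a model trained on the fully labeled collection to quantify how much labeling effort the clustering step saves.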
{
"docid": "neg:1840383_18",
"text": "In this article, we have reviewed the state of the art of IPT systems and have explored the suitability of the technology to wirelessly charge battery powered vehicles. the review shows that the IPT technology has merits for stationary charging (when the vehicle is parked), opportunity charging (when the vehicle is stopped for a short period of time, for example, at a bus stop), and dynamic charging (when the vehicle is moving along a dedicated lane equipped with an IPT system). Dynamic wireless charging holds promise to partially or completely eliminate the overnight charging through a compact network of dynamic chargers installed on the roads that would keep the vehicle batteries charged at all times, consequently reducing the range anxiety and increasing the reliability of EVs. Dynamic charging can help lower the price of EVs by reducing the size of the battery pack. Indeed, if the recharging energy is readily available, the batteries do not have to support the whole driving range but only supply power when the IPT system is not available. Depending on the power capability, the use of dynamic charging may increase driving range and reduce the size of the battery pack.",
"title": ""
},
{
"docid": "neg:1840383_19",
"text": "F anthropology plays a vital role in medicolegal investigations of death. Today, forensic anthropologists are intimately involved in many aspects of these investigations; they may participate in search and recovery efforts, develop a biological profile, identify and document trauma, determine postmortem interval, and offer expert witness courtroom testimony. However, few forensic anthropology textbooks include substantial discussions of our medicolegal and judicial systems. Forensic Anthropology: Contemporary Theory and Practice, by Debra A. Komar and Jane E. Buikstra, not only examines current forensic anthropology from a theoretical perspective, but it also includes an introduction to elements of our legal system. Further, the text integrates these important concepts with bioanthropological theories and methods. Komar and Buikstra begin with an introductory chapter that traces the history of forensic anthropology in the United States. The careers of several founding members of the American Board of Forensic Anthropology are recognized for their contribution to advancing the profession. We are reminded that the field has evolved through the years from biological anthropologists doing forensic anthropology to modern students, who need training in both the medical and physical sciences, as well as traditional foundations in biological anthropology. In Chapters Two and Three, the authors introduce the reader to the medicolegal and judicial systems respectively. They present the medicolegal system with interesting discussions of important topics such as jurisdiction, death investigations, cause and manner of death, elements of a crime (actus reus and mens rea), and postmortem examinations. The chapter on the judicial system begins with the different classifications and interpretations of evidence, followed by an overview. Key components of this chapter include the rules governing expert witness testimony and scientific evidence in the courtroom. The authors also review the United States Supreme Court landmark decision, Daubert v. Merrell Dow Pharmaceuticals 1993, which established more stringent criteria that federal judges must follow regarding the admissibility of scientific evidence in federal courtrooms. The authors note that in the Daubert decision, the Supreme Court modified the “Frye test”, removing the general acceptability criterion formerly required. In light of the Daubert ruling, the authors demonstrate the need for anthropologists to refine techniques and continue to develop biological profiling methods that will meet the rigorous Daubert standards. Anthropology is not alone among the forensic sciences that seek to refine methods and techniques. For example, forensic odontology has recently come under scrutiny in cases where defendants have been wrongfully convicted based on bite mark evidence (Saks and Koehler 2005). Additionally, Saks and Koehler also remark upon 86 DNA exoneration cases and note that 63% of these wrongful convictions are attributed to forensic science testing errors. Chapter Four takes a comprehensive look at the role of forensic anthropologists during death investigations. The authors note that “the participation of forensic anthropologists can be invaluable to the proper handling of the death scene” (p. 65). To this end, the chapter includes discussions of identifying remains of medicolegal and nonmedicolegal significance, jurisdiction issues, search strategies, and proper handling of evidence. 
Readers may find the detailed treatment of differentiating human from nonhuman material particularly useful. The following two chapters deal with developing a biological profile, and pathology and trauma. A detailed review of sex and age estimation for both juvenile and adult skeletal remains is provided, as well as an assessment of the estimation of ancestry and stature. A welcome discussion on scientific testing and the error rates of different methods is highlighted throughout their ‘reference’ packed discussion. In their critical review of biological profile development, Komar and Buikstra discuss the various estimation methods; they note that more recent techniques may need testing on additional skeletal samples to survive potential challenges under the Daubert ruling. We also are reminded that in forensic science, flawed methods may result in the false imprisonment of innocent persons, therefore an emphasis is placed on developing and refining techniques that improve both the accuracy and reliability of biological profile estimates. Students will find that the descriptions and discussions of the different categories of both pathology and trauma assessments are beneficial for understanding postmortem examinations. One also may find that the reviews of blunt and sharp force trauma, gunshot wounds, and fracture terminology are particularly useful. Komar and Buikstra continue their remarkable book with a chapter focusing on forensic taphonomy. They begin with an introduction and an outline of the goals of forensic taphonomy which includes time since death estimation, mechanisms of bone modification, and reconstructing perimortem events. The reader is drawn to the case studies that",
"title": ""
}
] |
1840384 | Revisiting the role of language in spatial cognition: Categorical perception of spatial relations in English and Korean speakers. | [
{
"docid": "pos:1840384_0",
"text": "In this paper we examine how English and Mandarin speakers think about time, and we test how the patterns of thinking in the two groups relate to patterns in linguistic and cultural experience. In Mandarin, vertical spatial metaphors are used more frequently to talk about time than they are in English; English relies primarily on horizontal terms. We present results from two tasks comparing English and Mandarin speakers' temporal reasoning. The tasks measure how people spatialize time in three-dimensional space, including the sagittal (front/back), transverse (left/right), and vertical (up/down) axes. Results of Experiment 1 show that people automatically create spatial representations in the course of temporal reasoning, and these implicit spatializations differ in accordance with patterns in language, even in a non-linguistic task. Both groups showed evidence of a left-to-right representation of time, in accordance with writing direction, but only Mandarin speakers showed a vertical top-to-bottom pattern for time (congruent with vertical spatiotemporal metaphors in Mandarin). Results of Experiment 2 confirm and extend these findings, showing that bilinguals' representations of time depend on both long-term and proximal aspects of language experience. Participants who were more proficient in Mandarin were more likely to arrange time vertically (an effect of previous language experience). Further, bilinguals were more likely to arrange time vertically when they were tested in Mandarin than when they were tested in English (an effect of immediate linguistic context).",
"title": ""
}
] | [
{
"docid": "neg:1840384_0",
"text": "According to psychological scientists, humans understand models that most match their own internal models, which they characterize as lists of \"heuristic\"s (i.e. lists of very succinct rules). One such heuristic rule generator is the Fast-and-Frugal Trees (FFT) preferred by psychological scientists. Despite their successful use in many applied domains, FFTs have not been applied in software analytics. Accordingly, this paper assesses FFTs for software analytics. \n We find that FFTs are remarkably effective in that their models are very succinct (5 lines or less describing a binary decision tree) while also outperforming result from very recent, top-level, conference papers. Also, when we restrict training data to operational attributes (i.e., those attributes that are frequently changed by developers), the performance of FFTs are not effected (while the performance of other learners can vary wildly). \n Our conclusions are two-fold. Firstly, there is much that software analytics community could learn from psychological science. Secondly, proponents of complex methods should always baseline those methods against simpler alternatives. For example, FFTs could be used as a standard baseline learner against which other software analytics tools are compared.",
"title": ""
},
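For readers unfamiliar with fast-and-frugal trees, the sketch below shows one way such a depth-bounded binary decision list can be induced from data. The cue-selection heuristic (median split, exit on the purer side) is a simplifying assumption made for illustration and is not necessarily the exact FFT variant evaluated in the paper.

```python
import numpy as np

def build_fft(X, y, depth=4):
    """Greedy fast-and-frugal tree for binary labels y in {0, 1}.
    Each level tests one cue against a threshold; one side exits with a
    prediction, the other side falls through to the next level."""
    tree, rows = [], np.arange(len(y))
    for _ in range(depth):
        best = None
        for cue in range(X.shape[1]):
            thr = np.median(X[rows, cue])
            for side in ("hi", "lo"):
                mask = X[rows, cue] > thr if side == "hi" else X[rows, cue] <= thr
                grp = y[rows][mask]
                if len(grp) == 0:
                    continue
                purity = max(grp.mean(), 1 - grp.mean())
                if best is None or purity > best[0]:
                    best = (purity, cue, thr, side, int(round(grp.mean())))
        _, cue, thr, side, label = best
        tree.append((cue, thr, side, label))
        keep = X[rows, cue] <= thr if side == "hi" else X[rows, cue] > thr
        rows = rows[keep]                      # only the non-exiting side continues
        if len(rows) == 0:
            break
    return tree

def predict_fft(tree, x, default=0):
    for cue, thr, side, label in tree:
        if (side == "hi" and x[cue] > thr) or (side == "lo" and x[cue] <= thr):
            return label
    return default
```

The learned model is literally a handful of (cue, threshold, exit) tuples, which is what makes FFT output as succinct as the abstract claims.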
{
"docid": "neg:1840384_1",
"text": "Advanced Persistent Threats (APTs) are a new breed of internet based smart threats, which can go undetected with the existing state of-the-art internet traffic monitoring and protection systems. With the evolution of internet and cloud computing, a new generation of smart APT attacks has also evolved and signature based threat detection systems are proving to be futile and insufficient. One of the essential strategies in detecting APTs is to continuously monitor and analyze various features of a TCP/IP connection, such as the number of transferred packets, the total count of the bytes exchanged, the duration of the TCP/IP connections, and details of the number of packet flows. The current threat detection approaches make extensive use of machine learning algorithms that utilize statistical and behavioral knowledge of the traffic. However, the performance of these algorithms is far from satisfactory in terms of reducing false negatives and false positives simultaneously. Mostly, current algorithms focus on reducing false positives, only. This paper presents a fractal based anomaly classification mechanism, with the goal of reducing both false positives and false negatives, simultaneously. A comparison of the proposed fractal based method with a traditional Euclidean based machine learning algorithm (k-NN) shows that the proposed method significantly outperforms the traditional approach by reducing false positive and false negative rates, simultaneously, while improving the overall classification rates.",
"title": ""
},
{
"docid": "neg:1840384_2",
"text": "Deep generative adversarial networks (GANs) are the emerging technology in drug discovery and biomarker development. In our recent work, we demonstrated a proof-of-concept of implementing deep generative adversarial autoencoder (AAE) to identify new molecular fingerprints with predefined anticancer properties. Another popular generative model is the variational autoencoder (VAE), which is based on deep neural architectures. In this work, we developed an advanced AAE model for molecular feature extraction problems, and demonstrated its advantages compared to VAE in terms of (a) adjustability in generating molecular fingerprints; (b) capacity of processing very large molecular data sets; and (c) efficiency in unsupervised pretraining for regression model. Our results suggest that the proposed AAE model significantly enhances the capacity and efficiency of development of the new molecules with specific anticancer properties using the deep generative models.",
"title": ""
},
{
"docid": "neg:1840384_3",
"text": "A number of machine learning (ML) techniques have recently been proposed to solve color constancy problem in computer vision. Neural networks (NNs) and support vector regression (SVR) in particular, have been shown to outperform many traditional color constancy algorithms. However, neither neural networks nor SVR were compared to simpler regression tools in those studies. In this article, we present results obtained with a linear technique known as ridge regression (RR) and show that it performs better than NNs, SVR, and gray world (GW) algorithm on the same dataset. We also perform uncertainty analysis for NNs, SVR, and RR using bootstrapping and show that ridge regression and SVR are more consistent than neural networks. The shorter training time and single parameter optimization of the proposed approach provides a potential scope for real time video tracking application.",
"title": ""
},
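As a rough illustration of the approach, the sketch below maps per-image statistics to illuminant chromaticities with closed-form ridge regression and uses bootstrap resampling to estimate consistency. The feature construction, the regularization strength, the 2-D chromaticity target, and the angular-error evaluation are assumptions chosen for the example, not the paper's exact setup.

```python
import numpy as np

def fit_ridge(F, Y, lam=1e-2):
    """Closed-form ridge regression W = (F^T F + lam*I)^{-1} F^T Y.
    F: n_images x n_features image statistics; Y: n_images x 2 illuminant chromaticities."""
    d = F.shape[1]
    return np.linalg.solve(F.T @ F + lam * np.eye(d), F.T @ Y)

def bootstrap_angular_error(F, Y, n_boot=200, lam=1e-2, seed=0):
    """Refit on bootstrap resamples to estimate the spread of the mean angular error."""
    rng = np.random.default_rng(seed)
    n, errs = len(F), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        W = fit_ridge(F[idx], Y[idx], lam)
        pred = F @ W
        cos = np.sum(pred * Y, axis=1) / (
            np.linalg.norm(pred, axis=1) * np.linalg.norm(Y, axis=1) + 1e-12)
        errs.append(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean())
    return float(np.mean(errs)), float(np.std(errs))
```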
{
"docid": "neg:1840384_4",
"text": "Monads are a de facto standard for the type-based analysis of impure aspects of programs, such as runtime cost [9, 5]. Recently, the logical dual of a monad, the comonad, has also been used for the cost analysis of programs, in conjunction with a linear type system [6, 8]. The logical duality of monads and comonads extends to cost analysis: In monadic type systems, costs are (side) effects, whereas in comonadic type systems, costs are coeffects. However, it is not clear whether these two methods of cost analysis are related and, if so, how. Are they equally expressive? Are they equally well-suited for cost analysis with all reduction strategies? Are there translations from type systems with effects to type systems with coeffects and viceversa? The goal of this work-in-progress paper is to explore some of these questions in a simple context — the simply typed lambda-calculus (STLC). As we show, even this simple context is already quite interesting technically and it suffices to bring out several key points.",
"title": ""
},
{
"docid": "neg:1840384_5",
"text": "Hypothesis generation, a crucial initial step for making scientific discoveries, relies on prior knowledge, experience and intuition. Chance connections made between seemingly distinct subareas sometimes turn out to be fruitful. The goal in text mining is to assist in this process by automatically discovering a small set of interesting hypotheses from a suitable text collection. In this paper we present open and closed text mining algorithms that are built within the discovery framework established by Swanson and Smalheiser. Our algorithms represent topics using metadata profiles. When applied to MEDLINE these are MeSH based profiles. We present experiments that demonstrate the effectiveness of our algorithms. Specifically, our algorithms generate ranked term lists where the key terms representing novel relationships between topics are ranked high.",
"title": ""
},
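A bare-bones co-occurrence version of Swanson-style open discovery conveys what the profile-based algorithms above aim to do. The sketch uses plain term co-occurrence instead of ranked MeSH metadata profiles, so it should be read as a simplification of the abstract's method, with the top_b and top_c cutoffs chosen arbitrarily.

```python
from collections import Counter

def open_discovery(a_term, doc_terms, top_b=20, top_c=50):
    """Swanson-style open discovery, simplified: documents about topic A yield
    linking B terms; documents about each B (but not A) yield candidate C terms,
    i.e. hypotheses for an indirect A-B-C connection.
    doc_terms: list of sets of terms, one set per document."""
    a_docs = [d for d in doc_terms if a_term in d]
    b_counts = Counter(t for d in a_docs for t in d if t != a_term)
    b_terms = [t for t, _ in b_counts.most_common(top_b)]
    c_counts = Counter()
    for b in b_terms:
        for d in doc_terms:
            if b in d and a_term not in d:
                c_counts.update(t for t in d if t != a_term and t != b)
    return c_counts.most_common(top_c)
```

Closed discovery is the converse: both the A and C topics are fixed in advance, and the algorithm ranks the B terms their profiles share.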
{
"docid": "neg:1840384_6",
"text": "The most common specimens from immunocompromised patients that are analyzed for detection of herpes simplex virus (HSV) or varicella-zoster virus (VZV) are from skin lesions. Many types of assays are applicable to these samples, but some, such as virus isolation and direct fluorescent antibody testing, are useful only in the early phases of the lesions. In contrast, nucleic acid (NA) detection methods, which generally have superior sensitivity and specificity, can be applied to skin lesions at any stage of progression. NA methods are also the best choice, and sometimes the only choice, for detecting HSV or VZV in blood, cerebrospinal fluid, aqueous or vitreous humor, and from mucosal surfaces. NA methods provide the best performance when reliability and speed (within 24 hours) are considered together. They readily distinguish the type of HSV detected or the source of VZV detected (wild type or vaccine strain). Nucleic acid detection methods are constantly being improved with respect to speed and ease of performance. Broader applications are under study, such as the use of quantitative results of viral load for prognosis and to assess the efficacy of antiviral therapy.",
"title": ""
},
{
"docid": "neg:1840384_7",
"text": "In this paper, we present a novel SpaTial Attention Residue Network (STAR-Net) for recognising scene texts. The overall architecture of our STAR-Net is illustrated in fig. 1. Our STARNet emphasises the importance of representative image-based feature extraction from text regions by the spatial attention mechanism and the residue learning strategy. It is by far the deepest neural network proposed for scene text recognition.",
"title": ""
},
{
"docid": "neg:1840384_8",
"text": "This paper proposes an Interactive Chinese Character Learning System (ICCLS) based on pictorial evolution as an edutainment concept in computer-based learning of language. The advantage of the language origination itself is taken as a learning platform due to the complexity in Chinese language as compared to other types of languages. Users especially children enjoy more by utilize this learning system because they are able to memories the Chinese Character easily and understand more of the origin of the Chinese character under pleasurable learning environment, compares to traditional approach which children need to rote learning Chinese Character under un-pleasurable environment. Skeletonization is used as the representation of Chinese character and object with an animated pictograph evolution to facilitate the learning of the language. Shortest skeleton path matching technique is employed for fast and accurate matching in our implementation. User is required to either write a word or draw a simple 2D object in the input panel and the matched word and object will be displayed as well as the pictograph evolution to instill learning. The target of computer-based learning system is for pre-school children between 4 to 6 years old to learn Chinese characters in a flexible and entertaining manner besides utilizing visual and mind mapping strategy as learning methodology.",
"title": ""
},
{
"docid": "neg:1840384_9",
"text": "Myoelectric or electromyogram (EMG) signals can be useful in intelligently recognizing intended limb motion of a person. This paper presents an attempt to develop a four-channel EMG signal acquisition system as part of an ongoing research in the development of an active prosthetic device. The acquired signals are used for identification and classification of six unique movements of hand and wrist, viz. hand open, hand close, wrist flexion, wrist extension, ulnar deviation and radial deviation. This information is used for actuation of prosthetic drive. The time domain features are extracted, and their dimension is reduced using principal component analysis. The reduced features are classified using two different techniques: k nearest neighbor and artificial neural networks, and the results are compared.",
"title": ""
},
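A conventional baseline for the pipeline described above (time-domain features, dimensionality reduction with PCA, then a k-nearest-neighbor classifier) might look like the sketch below. The specific feature set, the noise threshold, the number of principal components, and k are illustrative assumptions; only the four-channel setup and the six target motions come from the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def time_domain_features(x, eps=0.01):
    """Typical EMG time-domain features for one analysis window of one channel."""
    dx = np.diff(x)
    mav = np.mean(np.abs(x))                                  # mean absolute value
    wl = np.sum(np.abs(dx))                                   # waveform length
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(dx) > eps))    # zero crossings (noise-gated)
    ssc = np.sum((dx[:-1] * dx[1:] < 0) &                     # slope sign changes (noise-gated)
                 (np.abs(np.diff(dx)) > eps))
    return np.array([mav, wl, zc, ssc])

def build_classifier(windows, labels, n_components=10, k=5):
    """windows: list of (n_samples x n_channels) arrays, one per analysis window;
    labels: one of the six hand/wrist motions per window."""
    X = np.array([np.concatenate([time_domain_features(w[:, c])
                                  for c in range(w.shape[1])]) for w in windows])
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=n_components),
                          KNeighborsClassifier(n_neighbors=k))
    return model.fit(X, labels)
```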
{
"docid": "neg:1840384_10",
"text": "3D shape is a crucial but heavily underutilized cue in today’s computer vision system, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape model in the loop. Apart from object recognition on 2.5D depth maps, recovering these incomplete 3D shapes to full 3D is critical for analyzing shape variations. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses. It naturally supports joint object recognition and shape reconstruction from 2.5D depth maps, and further, as an additional application it allows active object recognition through view planning. We construct a largescale 3D CAD model dataset to train our model, and conduct extensive experiments to study our new representation.",
"title": ""
},
{
"docid": "neg:1840384_11",
"text": "This paper presents a new geometry-based method to determine if a cable-driven robot operating in a d-degree-of-freedom workspace (2 ≤ d ≤ 6) with n ≥ d cables can generate a given set of wrenches in a given pose, considering acceptable minimum and maximum tensions in the cables. To this end, the fundamental nature of the Available Wrench Set is studied. The latter concept, defined here, is closely related to similar sets introduced in [23, 4]. It is shown that the Available Wrench Set can be represented mathematically by a zonotope, a special class of convex polytopes. Using the properties of zonotopes, two methods to construct the Available Wrench Set are discussed. From the representation of the Available Wrench Set, computationallyefficient and non-iterative tests are presented to verify if this set includes the Task Wrench Set, the set of wrenches needed for a given task. INTRODUCTION AND PROBLEM DEFINITION A cable-driven robot, or simply cable robot, is a parallel robot whose actuated limbs are cables. The length of the cables can be adjusted in a coordinated manner to control the pose (position and orientation) and/or wrench (force and torque) at the moving platform. Pioneer applications of such mechanisms are the NIST Robocrane [1], the Falcon high-speed manipulator [15] and the Skycam [7]. The fact that cables can only exert efforts in one direction impacts the capability of the mechanism to generate wrenches at the platform. Previous work already presented methods to test if a set of wrenches – ranging from one to all possible wrenches – could be generated by a cable robot in a given pose, considering that cables work only in tension. Some of the proposed methods focus on fully constrained cable robots while others apply to unconstrained robots. In all cases, minimum and/or maximum cable tensions is considered. A complete section of this paper is dedicated to the comparison of the proposed approach with previous methods. A general geometric approach that addresses all possible cases without using an iterative algorithm is presented here. It will be shown that the results obtained with this approach are consistent with the ones previously presented in the literature [4, 5, 14, 17, 18, 22, 23, 24, 26]. This paper does not address the workspace of cable robots. The latter challenging problem was addressed in several papers over the recent years [10, 11, 12, 19, 25]. Before looking globally at the workspace, all proposed methods must go through the intermediate step of assessing the capability of a mechanism to generate a given set of wrenches. The approach proposed here is also compared with the intermediate steps of the papers on the workspace determination of cable robots. The task that a robot has to achieve implies that it will have to be able to generate a given set of wrenches in a given pose x. This Task Wrench Set, T , depends on the various applications of the considered robot, which can be for example to move a camera or other sensors [7, 6, 9, 3], manipulate payloads [15, 1] or simulate walking sensations to a user immersed in virtual reality [21], just to name a few. The Available Wrench Set, A, is the set of wrenches that the mechanism can generate. This set depends on the architecture of the robot, i.e., where the cables are attached on the platform and where the fixed winches are located. It also depends on the configuration pose as well as on the minimum and maximum acceptable tension in the cables. 
All the wrenches that are possibly needed to accomplish a task can",
"title": ""
},
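The membership question at the heart of the abstract, whether a required wrench lies inside the Available Wrench Set {Wt : t_min <= t <= t_max} (a zonotope, since it is the affine image of a box), can also be posed as a small feasibility problem. The sketch below uses a linear program rather than the paper's non-iterative geometric construction, and the 3-DOF, 4-cable wrench matrix is a made-up example.

```python
import numpy as np
from scipy.optimize import linprog

def wrench_available(W, f_task, t_min, t_max):
    """Return True if cable tensions t with t_min <= t <= t_max exist
    such that W @ t = f_task (W: d x n wrench matrix for the current pose)."""
    n = W.shape[1]
    res = linprog(c=np.zeros(n),                 # pure feasibility problem, no objective
                  A_eq=W, b_eq=f_task,
                  bounds=[(t_min, t_max)] * n,
                  method="highs")
    return res.success

# Toy planar example: 3-DOF platform (fx, fy, torque) driven by 4 cables.
W = np.array([[ 1.0, -1.0,  0.6, -0.6],
              [ 0.5,  0.5, -0.8, -0.8],
              [ 0.2, -0.2, -0.2,  0.2]])
print(wrench_available(W, np.array([0.0, -9.81, 0.0]), t_min=1.0, t_max=50.0))
```

The zonotope view is what enables the paper's faster, purely geometric tests; the LP above is just a convenient reference check.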
{
"docid": "neg:1840384_12",
"text": "OBJECTIVES\nThe purposes of this study were to identify age-related changes in objectively recorded sleep patterns across the human life span in healthy individuals and to clarify whether sleep latency and percentages of stage 1, stage 2, and rapid eye movement (REM) sleep significantly change with age.\n\n\nDESIGN\nReview of literature of articles published between 1960 and 2003 in peer-reviewed journals and meta-analysis.\n\n\nPARTICIPANTS\n65 studies representing 3,577 subjects aged 5 years to 102 years.\n\n\nMEASUREMENT\nThe research reports included in this meta-analysis met the following criteria: (1) included nonclinical participants aged 5 years or older; (2) included measures of sleep characteristics by \"all night\" polysomnography or actigraphy on sleep latency, sleep efficiency, total sleep time, stage 1 sleep, stage 2 sleep, slow-wave sleep, REM sleep, REM latency, or minutes awake after sleep onset; (3) included numeric presentation of the data; and (4) were published between 1960 and 2003 in peer-reviewed journals.\n\n\nRESULTS\nIn children and adolescents, total sleep time decreased with age only in studies performed on school days. Percentage of slow-wave sleep was significantly negatively correlated with age. Percentages of stage 2 and REM sleep significantly changed with age. In adults, total sleep time, sleep efficiency, percentage of slow-wave sleep, percentage of REM sleep, and REM latency all significantly decreased with age, while sleep latency, percentage of stage 1 sleep, percentage of stage 2 sleep, and wake after sleep onset significantly increased with age. However, only sleep efficiency continued to significantly decrease after 60 years of age. The magnitudes of the effect sizes noted changed depending on whether or not studied participants were screened for mental disorders, organic diseases, use of drug or alcohol, obstructive sleep apnea syndrome, or other sleep disorders.\n\n\nCONCLUSIONS\nIn adults, it appeared that sleep latency, percentages of stage 1 and stage 2 significantly increased with age while percentage of REM sleep decreased. However, effect sizes for the different sleep parameters were greatly modified by the quality of subject screening, diminishing or even masking age associations with different sleep parameters. The number of studies that examined the evolution of sleep parameters with age are scant among school-aged children, adolescents, and middle-aged adults. There are also very few studies that examined the effect of race on polysomnographic sleep parameters.",
"title": ""
},
{
"docid": "neg:1840384_13",
"text": "It has been believed that stochastic feedforward neural networks (SFNN) have several advantages beyond deterministic deep neural networks (DNN): they have more expressive power allowing multi-modal mappings and regularize better due to their stochastic nature. However, training SFNN is notoriously harder. In this paper, we aim at developing efficient training methods for large-scale SFNN, in particular using known architectures and pre-trained parameters of DNN. To this end, we propose a new intermediate stochastic model, called Simplified-SFNN, which can be built upon any baseline DNN and approximates certain SFNN by simplifying its upper latent units above stochastic ones. The main novelty of our approach is in establishing the connection between three models, i.e., DNN → Simplified-SFNN → SFNN, which naturally leads to an efficient training procedure of the stochastic models utilizing pre-trained parameters of DNN. Using several popular DNNs, we show how they can be effectively transferred to the corresponding stochastic models for both multi-modal and classification tasks on MNIST, TFD, CIFAR-10, CIFAR-100 and SVHN datasets. In particular, our stochastic model built from the wide residual network has 28 layers and 36 million parameters, where the former consistently outperforms the latter for the classification tasks on CIFAR-10 and CIFAR-100 due to its stochastic regularizing effect.",
"title": ""
},
{
"docid": "neg:1840384_14",
"text": "The prevalence of cardiovascular risk factors, insulin resistance/diabetes and/or uterine pathology appears to be increased in women with polycystic ovarian syndrome (PCOS), although more outcome studies are necessary to determine incidence. Data pertaining to some of the potential long-term health consequences associated with PCOS are summarized. Medline, Current Contents and PubMed were searched for studies from the time of our original interest in this issue in 1980 to the present. The review is limited to published human data. The current literature indicate that women with this syndrome cluster risk factors for premature morbidity and mortality. Large multi-site co-operative studies are necessary to evaluate the long-term health outcomes.",
"title": ""
},
{
"docid": "neg:1840384_15",
"text": "In this paper is presented the implementation of a compact FPGA-based single-phase cascaded H-bridge multilevel inverter suitable for teaching and research activities. The softwares Matlab/Simulink and Quartus II were used to describe and simulate the PWM algorithm in hardware description language (VHDL), before experimental implementation. A Terasic DE0-Nano board with an Intel Cyclone IV EP4CE22F17C6N FPGA was used to generate the 2.4 kHz PWM switching control signals, which are fed to isolated gate drivers with the HCPL-3180 optocoupler, before being applied to the eight IRF640N MOSFETs in a developed low power prototype inverter board. To validate the proposed inverter, two amplitude modulation indexes were evaluated (0.4 and 0.99) using the phase opposition carriers disposition (POD) technique. Simulation and experimental results to synthesize a three- and a five-level PWM voltage waveform across a resistive and a resistive-inductive-capacitive load show that both are in close agreement, validating the proposed low-cost inverter.",
"title": ""
},
{
"docid": "neg:1840384_16",
"text": "Modeling the structure of coherent texts is a key NLP problem. The task of coherently organizing a given set of sentences has been commonly used to build and evaluate models that understand such structure. We propose an end-to-end unsupervised deep learning approach based on the set-to-sequence framework to address this problem. Our model strongly outperforms prior methods in the order discrimination task and a novel task of ordering abstracts from scientific articles. Furthermore, our work shows that useful text representations can be obtained by learning to order sentences. Visualizing the learned sentence representations shows that the model captures high-level logical structure in paragraphs. Our representations perform comparably to state-of-the-art pre-training methods on sentence similarity and paraphrase detection tasks.",
"title": ""
},
{
"docid": "neg:1840384_17",
"text": "In 1984, a prospective cohort study, Coronary Artery Risk Development in Young Adults (CARDIA) was initiated to investigate life-style and other factors that influence, favorably and unfavorably, the evolution of coronary heart disease risk factors during young adulthood. After a year of planning and protocol development, 5,116 black and white women and men, age 18-30 years, were recruited and examined in four urban areas: Birmingham, Alabama; Chicago, Illinois; Minneapolis, Minnesota, and Oakland, California. The initial examination included carefully standardized measurements of major risk factors as well as assessments of psychosocial, dietary, and exercise-related characteristics that might influence them, or that might be independent risk factors. This report presents the recruitment and examination methods as well as the mean levels of blood pressure, total plasma cholesterol, height, weight and body mass index, and the prevalence of cigarette smoking by age, sex, race and educational level. Compared to recent national samples, smoking is less prevalent in CARDIA participants, and weight tends to be greater. Cholesterol levels are representative and somewhat lower blood pressures in CARDIA are probably, at least in part, due to differences in measurement methods. Especially noteworthy among several differences in risk factor levels by demographic subgroup, were a higher body mass index among black than white women and much higher prevalence of cigarette smoking among persons with no more than a high school education than among those with more education.",
"title": ""
},
{
"docid": "neg:1840384_18",
"text": "With the availability of vast collection of research articles on internet, textual analysis is an increasingly important technique in scientometric analysis. While the context in which it is used and the specific algorithms implemented may vary, typically any textual analysis exercise involves intensive pre-processing of input text which includes removing topically uninteresting terms (stop words). In this paper we argue that corpus specific stop words, which take into account the specificities of a collection of texts, improve textual analysis in scientometrics. We describe two relatively simple techniques to generate corpus-specific stop words; stop words lists following a Poisson distribution and keyword adjacency stop words lists. In a case study to extract keywords from scientific abstracts of research project funded by the European Research Council in the domain of Life sciences, we show that a combination of those techniques gives better recall values than standard stop words or any of the two techniques alone. The method we propose can be implemented to obtain stop words lists in an automatic way by using author provided keywords for a set of abstracts. The stop words lists generated can be updated easily by adding new texts to the training corpus. Conference Topic Methods and techniques",
"title": ""
},
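Of the two techniques mentioned, the Poisson-based one is straightforward to sketch: a term whose observed document frequency is close to what a Poisson (random-scattering) model predicts carries little topical signal and is a stop-word candidate. The statistic and the cutoff below are illustrative guesses rather than the authors' exact formulation, and the keyword-adjacency technique is omitted.

```python
import math
from collections import Counter

def poisson_stopwords(docs, ratio_cutoff=0.9, min_cf=10):
    """Flag terms whose observed document frequency is close to the Poisson
    (random scattering) prediction, i.e. terms that occur indifferently across
    documents. docs: list of token lists. Returns a set of candidate stop words."""
    N = len(docs)
    cf, df = Counter(), Counter()
    for tokens in docs:
        cf.update(tokens)             # collection frequency
        df.update(set(tokens))        # document frequency
    stops = set()
    for term, c in cf.items():
        if c < min_cf:
            continue
        expected_df = N * (1.0 - math.exp(-c / N))   # Poisson prediction for df
        if df[term] / expected_df >= ratio_cutoff:   # close to random scattering
            stops.add(term)
    return stops
```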
{
"docid": "neg:1840384_19",
"text": "Impact of occupational stress on employee performance has been recognized as an important area of concern for organizations. Negative stress affects the physical and mental health of the employees that in turn affects their performance on job. Research into the relationship between stress and job performance has been neglected in the occupational stress literature (Jex, 1998). It is therefore significant to understand different Occupational Stress Inducers (OSI) on one hand and their impact on different aspects of job performance on the other. This article reviews the available literature to understand the phenomenon so as to develop appropriate stress management strategies to not only save the employees from variety of health problems but to improve their performance and the performance of the organization. 35 Occupational Stress Inducers (OSI) were identified through a comprehensive review of articles and reports published in the literature of management and allied disciplines between 1990 and 2014. A conceptual model is proposed towards the end to study the impact of stress on employee job performance. The possible data analysis techniques are also suggested providing direction for future research.",
"title": ""
}
] |
1840385 | Fine-Grained Control-Flow Integrity Through Binary Hardening | [
{
"docid": "pos:1840385_0",
"text": "Code diversification has been proposed as a technique to mitigate code reuse attacks, which have recently become the predominant way for attackers to exploit memory corruption vulnerabilities. As code reuse attacks require detailed knowledge of where code is in memory, diversification techniques attempt to mitigate these attacks by randomizing what instructions are executed and where code is located in memory. As an attacker cannot read the diversified code, it is assumed he cannot reliably exploit the code.\n In this paper, we show that the fundamental assumption behind code diversity can be broken, as executing the code reveals information about the code. Thus, we can leak information without needing to read the code. We demonstrate how an attacker can utilize a memory corruption vulnerability to create side channels that leak information in novel ways, removing the need for a memory disclosure vulnerability. We introduce seven new classes of attacks that involve fault analysis and timing side channels, where each allows a remote attacker to learn how code has been diversified.",
"title": ""
},
{
"docid": "pos:1840385_1",
"text": "A new binary software randomization and ControlFlow Integrity (CFI) enforcement system is presented, which is the first to efficiently resist code-reuse attacks launched by informed adversaries who possess full knowledge of the inmemory code layout of victim programs. The defense mitigates a recent wave of implementation disclosure attacks, by which adversaries can exfiltrate in-memory code details in order to prepare code-reuse attacks (e.g., Return-Oriented Programming (ROP) attacks) that bypass fine-grained randomization defenses. Such implementation-aware attacks defeat traditional fine-grained randomization by undermining its assumption that the randomized locations of abusable code gadgets remain secret. Opaque CFI (O-CFI) overcomes this weakness through a novel combination of fine-grained code-randomization and coarsegrained control-flow integrity checking. It conceals the graph of hijackable control-flow edges even from attackers who can view the complete stack, heap, and binary code of the victim process. For maximal efficiency, the integrity checks are implemented using instructions that will soon be hardware-accelerated on commodity x86-x64 processors. The approach is highly practical since it does not require a modified compiler and can protect legacy binaries without access to source code. Experiments using our fully functional prototype implementation show that O-CFI provides significant probabilistic protection against ROP attacks launched by adversaries with complete code layout knowledge, and exhibits only 4.7% mean performance overhead on current hardware (with further overhead reductions to follow on forthcoming Intel processors). I. MOTIVATION Code-reuse attacks (cf., [5]) have become a mainstay of software exploitation over the past several years, due to the rise of data execution protections that nullify traditional codeinjection attacks. Rather than injecting malicious payload code directly onto the stack or heap, where modern data execution protections block it from being executed, attackers now ingeniously inject addresses of existing in-memory code fragments (gadgets) onto victim stacks, causing the victim process to execute its own binary code in an unanticipated order [38]. With a sufficiently large victim code section, the pool of exploitable gadgets becomes arbitrarily expressive (e.g., Turing-complete) [20], facilitating the construction of arbitrary attack payloads without the need for code-injection. Such payload construction has even been automated [34]. As a result, code-reuse has largely replaced code-injection as one of the top software security threats. Permission to freely reproduce all or part of this paper for noncommercial purposes is granted provided that copies bear this notice and the full citation on the first page. Reproduction for commercial purposes is strictly prohibited without the prior written consent of the Internet Society, the first-named author (for reproduction of an entire paper only), and the author’s employer if the paper was prepared within the scope of employment. NDSS ’15, 8–11 February 2015, San Diego, CA, USA Copyright 2015 Internet Society, ISBN 1-891562-38-X http://dx.doi.org/10.14722/ndss.2015.23271 This has motivated copious work on defenses against codereuse threats. Prior defenses can generally be categorized into: CFI [1] and artificial software diversity [8]. CFI restricts all of a program’s runtime control-flows to a graph of whitelisted control-flow edges. 
Usually the graph is derived from the semantics of the program source code or a conservative disassembly of its binary code. As a result, CFI-protected programs reject control-flow hijacks that attempt to traverse edges not supported by the original program’s semantics. Fine-grained CFI monitors indirect control-flows precisely; for example, function callees must return to their exact callers. Although such precision provides the highest security, it also tends to incur high performance overheads (e.g., 21% for precise caller-callee return-matching [1]). Because this overhead is often too high for industry adoption, researchers have proposed many optimized, coarser-grained variants of CFI. Coarse-grained CFI trades some security for better performance by reducing the precision of the checks. For example, functions must return to valid call sites (but not necessarily to the particular site that invoked the callee). Unfortunately, such relaxations have proved dangerous—a number of recent proof-of-concept exploits have shown how even minor relaxations of the control-flow policy can be exploited to effect attacks [6, 11, 18, 19]. Table I summarizes the impact of several of these recent exploits. [Table I. Overview of control-flow integrity bypasses: the defenses CFI [1], bin-CFI [50], CCFIR [49], kBouncer [33], ROPecker [7], ROPGuard [16], and EMET [30] versus the bypasses by DeMott [12] (Feb 2014), Göktaş et al. [18] (May 2014), Davi et al. [11] (Aug 2014), Göktaş et al. [19] (Aug 2014), and Carlini and Wagner [6] (Aug 2014).] Artificial software diversity offers a different but complementary approach that randomizes programs in such a way that attacks succeeding against one program instance have a very low probability of success against other (independently randomized) instances of the same program. Probabilistic defenses rely on memory secrecy—i.e., the effects of randomization must remain hidden from attackers. One of the simplest and most widely adopted forms of artificial diversity is Address Space Layout Randomization (ASLR), which randomizes the base addresses of program segments at load time. Unfortunately, merely randomizing the base addresses does not yield sufficient entropy to preserve memory secrecy in many cases; there are numerous successful derandomization attacks against ASLR [13, 26, 36, 37, 39, 42]. Finer-grained diversity techniques obtain exponentially higher entropy by randomizing the relative distances between all code points. For example, binary-level Self-Transforming Instruction Relocation (STIR) [45] and compilers with randomized code-generation (e.g., [22]) have both realized fine-grained artificial diversity for production-level software at very low overheads. Recently, a new wave of implementation disclosure attacks [4, 10, 35, 40] has threatened to undermine fine-grained artificial diversity defenses. Implementation disclosure attacks exploit information leak vulnerabilities to read memory pages of victim processes at the discretion of the attacker. By reading the in-memory code sections, attackers violate the memory secrecy assumptions of artificial diversity, rendering their defenses ineffective. Since finding and closing all information leaks is well known to be prohibitively difficult and often intractable for many large software products, these attacks constitute a very dangerous development in the cyber-threat landscape; there is currently no well-established, practical defense.
This paper presents Opaque CFI (O-CFI): a new approach to coarse-grained CFI that strengthens fine-grained artificial diversity to withstand implementation disclosure attacks. The heart of O-CFI is a new form of control-flow check that conceals the graph of abusable control-flow edges even from attackers who have complete read-access to the randomized binary code, the stack, and the heap of victim processes. Such access only affords attackers knowledge of the intended (and therefore nonabusable) edges of the control-flow graph, not the edges left unprotected by the coarse-grained CFI implementation. Artificial diversification is employed to vary the set of unprotected edges between program instances, maintaining the probabilistic guarantees of fine-grained diversity. Experiments show that O-CFI enjoys performance overheads comparable to standard fine-grained diversity and non-opaque, coarse-grained CFI. Moreover, O-CFI’s control-flow checking logic is implemented using Intel x86/x64 memory-protection extensions (MPX) that are expected to be hardware-accelerated in commodity CPUs from 2015 onwards. We therefore expect even better performance for O-CFI in the near future. Our contributions are as follows: • We introduce O-CFI, the first low-overhead code-reuse defense that tolerates implementation disclosures. • We describe our implementation of a fully functional prototype that protects stripped, x86 legacy binaries without source code. • Analysis shows that O-CFI provides quantifiable security against state-of-the-art exploits—including JITROP [40] and Blind-ROP [4]. • Performance evaluation yields competitive overheads of just 4.7% for computation-intensive programs. II. THREAT MODEL Our work is motivated by the emergence of attacks against fine-grained diversity and coarse-grained control-flow integrity. We therefore introduce these attacks and distill them into a single, unified threat model. A. Bypassing Coarse-Grained CFI Ideally, CFI permits only programmer-intended control-flow transfers during a program’s execution. The typical approach is to assign a unique ID to each permissible indirect controlflow target, and check the IDs at runtime. Unfortunately, this introduces performance overhead proportional to the degree of the graph—the more overlaps between valid target sets of indirect branch instructions, the more IDs must be stored and checked at each branch. Moreover, perfect CFI cannot be realized with a purely static control-flow graph; for example, the permissible destinations of function returns depend on the calling context, which is only known at runtime. Fine-grained CFI therefore implements a dynamically computed shadow stack, incurring high overheads [1]. To avoid this, coarse-grained CFI implementations resort to a reduced-degree, static approximation of the control-flow graph, and merge identifiers at the cost of reduced security. For example, bin-CFI [49] and CCFIR [50] use at most three IDs per branch, and omit shadow stacks. Recent work has demonstrated that these optimizations open exploitable",
"title": ""
}
] | [
{
"docid": "neg:1840385_0",
"text": "Detecting depression is a key public health challenge, as almost 12% of all disabilities can be attributed to depression. Computational models for depression detection must prove not only that can they detect depression, but that they can do it early enough for an intervention to be plausible. However, current evaluations of depression detection are poor at measuring model latency. We identify several issues with the currently popular ERDE metric, and propose a latency-weighted F1 metric that addresses these concerns. We then apply this evaluation to several models from the recent eRisk 2017 shared task on depression detection, and show how our proposed measure can better capture system differences.",
"title": ""
},
{
"docid": "neg:1840385_1",
"text": "Deep Neural Networks (DNNs) denote multilayer artificial neural networks with more than one hidden layer and millions of free parameters. We propose a Generalized Discriminant Analysis (GerDA) based on DNNs to learn discriminative features of low dimension optimized with respect to a fast classification from a large set of acoustic features for emotion recognition. On nine frequently used emotional speech corpora, we compare the performance of GerDA features and their subsequent linear classification with previously reported benchmarks obtained using the same set of acoustic features classified by Support Vector Machines (SVMs). Our results impressively show that low-dimensional GerDA features capture hidden information from the acoustic features leading to a significantly raised unweighted average recall and considerably raised weighted average recall.",
"title": ""
},
{
"docid": "neg:1840385_2",
"text": "Voice conversion methods based on frequency warping followed by amplitude scaling have been recently proposed. These methods modify the frequency axis of the source spectrum in such manner that some significant parts of it, usually the formants, are moved towards their image in the target speaker's spectrum. Amplitude scaling is then applied to compensate for the differences between warped source spectra and target spectra. This article presents a fully parametric formulation of a frequency warping plus amplitude scaling method in which bilinear frequency warping functions are used. Introducing this constraint allows for the conversion error to be described in the cepstral domain and to minimize it with respect to the parameters of the transformation through an iterative algorithm, even when multiple overlapping conversion classes are considered. The paper explores the advantages and limitations of this approach when applied to a cepstral representation of speech. We show that it achieves significant improvements in quality with respect to traditional methods based on Gaussian mixture models, with no loss in average conversion accuracy. Despite its relative simplicity, it achieves similar performance scores to state-of-the-art statistical methods involving dynamic features and global variance.",
"title": ""
},
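To make the 'frequency warping plus amplitude scaling' idea concrete, the sketch below applies a bilinear (first-order all-pass) warp to a magnitude spectrum and then a residual amplitude correction. In the paper the warp is applied to cepstral representations and its parameters are trained per conversion class, so this is only a hand-rolled illustration with an arbitrary warp factor alpha.

```python
import numpy as np

def bilinear_warp(omega, alpha):
    """Bilinear (first-order all-pass) frequency warping, alpha in (-1, 1).
    Positive alpha moves spectral detail (e.g. formants) towards higher frequencies."""
    return omega + 2.0 * np.arctan(alpha * np.sin(omega) / (1.0 - alpha * np.cos(omega)))

def warp_spectrum(mag, alpha):
    """Resample a magnitude spectrum sampled on [0, pi] along the warped axis.
    The inverse of the bilinear map with factor alpha is the same map with -alpha."""
    omega = np.linspace(0.0, np.pi, len(mag))
    return np.interp(bilinear_warp(omega, -alpha), omega, mag)

# Toy usage: warp a stand-in source envelope, then compensate residual level
# differences with an amplitude-scaling term (in the paper this correction is
# statistical, not a per-bin oracle ratio as here).
rng = np.random.default_rng(0)
source = np.abs(rng.standard_normal(257)) + 1.0      # stand-in source envelope
target = warp_spectrum(source, alpha=0.3) * 1.2      # stand-in target envelope
warped = warp_spectrum(source, alpha=0.2)
converted = warped * (target / (warped + 1e-12))     # the 'amplitude scaling' step
```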
{
"docid": "neg:1840385_3",
"text": "This paper reported the construction of partial discharge measurement system under influence of cylindrical metal particle in transformer oil. The partial discharge of free cylindrical metal particle in the uniform electric field under AC applied voltage was studied in this paper. The partial discharge inception voltage (PDIV) for the single particle was measure to be 11kV. The typical waveform of positive PD and negative PD was also obtained. The result shows that the magnitude of negative PD is higher compared to positive PD. The observation on cylindrical metal particle movement revealed that there were a few stages of motion process involved.",
"title": ""
},
{
"docid": "neg:1840385_4",
"text": "In this paper we present a decomposition strategy for solving large scheduling problems using mathematical programming methods. Instead of formulating one huge and unsolvable MILP problem, we propose a decomposition scheme that generates smaller programs that can often be solved to global optimality. The original problem is split into subproblems in a natural way using the special features of steel making and avoiding the need for expressing the highly complex rules as explicit constraints. We present a small illustrative example problem, and several real-world problems to demonstrate the capabilities of the proposed strategy, and the fact that the solutions typically lie within 1-3% of the global optimum.",
"title": ""
},
{
"docid": "neg:1840385_5",
"text": "This paper describes our proposed solution for SemEval 2017 Task 1: Semantic Textual Similarity (Daniel Cer and Specia, 2017). The task aims at measuring the degree of equivalence between sentences given in English. Performance is evaluated by computing Pearson Correlation scores between the predicted scores and human judgements. Our proposed system consists of two subsystems and one regression model for predicting STS scores. The two subsystems are designed to learn Paraphrase and Event Embeddings that can take the consideration of paraphrasing characteristics and sentence structures into our system. The regression model associates these embeddings to make the final predictions. The experimental result shows that our system acquires 0.8 of Pearson Correlation Scores in this task.",
"title": ""
},
{
"docid": "neg:1840385_6",
"text": "A touch-less interaction technology on vision based wearable device is designed and evaluated. Users interact with the application with dynamic hands/feet gestures in front of the camera. Several proof-of-concept prototypes with eleven dynamic gestures are developed based on the touch-less interaction. At last, a comparing user study evaluation is proposed to demonstrate the usability of the touch-less approach, as well as the impact on user's emotion, running on a wearable framework or Google Glass.",
"title": ""
},
{
"docid": "neg:1840385_7",
"text": "Human parechovirus type 3 (HPeV3) can cause serious conditions in neonates, such as sepsis and encephalitis, but data for adults are lacking. The case of a pregnant woman with HPeV3 infection is reported herein. A 28-year-old woman at 36 weeks of pregnancy was admitted because of myalgia and muscle weakness. Her grip strength was 6.0kg for her right hand and 2.5kg for her left hand. The patient's symptoms, probably due to fasciitis and not myositis, improved gradually with conservative treatment, however labor pains with genital bleeding developed unexpectedly 3 days after admission. An obstetric consultation was obtained and a cesarean section was performed, with no complications. A real-time PCR assay for the detection of viral genomic ribonucleic acid against HPeV showed positive results for pharyngeal swabs, feces, and blood, and negative results for the placenta, umbilical cord, umbilical cord blood, amniotic fluid, and breast milk. The HPeV3 was genotyped by sequencing of the VP1 region. The woman made a full recovery and was discharged with her infant in a stable condition.",
"title": ""
},
{
"docid": "neg:1840385_8",
"text": "Muscle samples were obtained from the gastrocnemius of 17 female and 23 male track athletes, 10 untrained women, and 11 untrained men. Portions of the specimen were analyzed for total phosphorylase, lactic dehydrogenase (LDH), and succinate dehydrogenase (SDH) activities. Sections of the muscle were stained for myosin adenosine triphosphatase, NADH2 tetrazolium reductase, and alpha-glycerophosphate dehydrogenase. Maximal oxygen uptake (VO2max) was measured on a treadmill for 23 of the volunteers (6 female athletes, 11 male athletes, 10 untrained women, and 6 untrained men). These measurements confirm earlier reports which suggest that the athlete's preference for strength, speed, and/or endurance events is in part a matter of genetic endowment. Aside from differences in fiber composition and enzymes among middle-distance runners, the only distinction between the sexes was the larger fiber areas of the male athletes. SDH activity was found to correlate 0.79 with VO2max, while muscle LDH appeared to be a function of muscle fiber composition. While sprint- and endurance-trained athletes are characterized by distinct fiber compositions and enzyme activities, participants in strength events (e.g., shot-put) have relatively low muscle enzyme activities and a variety of fiber compositions.",
"title": ""
},
{
"docid": "neg:1840385_9",
"text": "High service quality is imperative and important for competitiveness of service industry. In order to provide much quality service, a deeper research on service quality models is necessary. There are plenty of service quality models which enable managers and practitioners to identify quality problems and improve the efficiency and profitability of overall performance. One of the most influential models in the service quality literature is the model of service quality gaps. In this paper, the model of service quality gaps has been critically reviewed and developed in order to make it more comprehensive. The developed model has been verified based using a survey on 16 experts. Compared to the traditional models, the proposed model involves five additional components and eight additional gaps.",
"title": ""
},
{
"docid": "neg:1840385_10",
"text": "Medication administration is an increasingly complex process, influenced by the number of medications on the market, the number of medications prescribed for each patient, new medical technology and numerous administration policies and procedures. Adverse events initiated by medication error are a crucial area to improve patient safety. This project looked at the complexity of the medication administration process at a regional hospital and the effect of two medication distribution systems. A reduction in work complexity and time spent gathering medication and supplies, was a goal of this work; but more importantly was determining what barriers to safety and efficiency exist in the medication administration process and the impact of barcode scanning and other technologies. The concept of mobile medication units is attractive to both managers and clinicians; however it is only one solution to the problems with medication administration. Introduction and Background Medication administration is an increasingly complex process, influenced by the number of medications on the market, the number of medications prescribed for each patient, and the numerous policies and procedures created for their administration. Mayo and Duncan (2004) found that a “single [hospital] patient can receive up to 18 medications per day, and a nurse can administer as many as 50 medications per shift” (p. 209). While some researchers indicated that the solution is more nurse education or training (e.g. see Mayo & Duncan, 2004; and Tang, Sheu, Yu, Wei, & Chen, 2007), it does not appear that they have determined the feasibility of this solution and the increased time necessary to look up every unfamiliar medication. Most of the research which focuses on the causes of medication errors does not examine the processes involved in the administration of the medication. And yet, understanding the complexity in the nurses’ processes and workflow is necessary to develop safeguards and create more robust systems that reduce the probability of errors and adverse events. Current medication administration processes include many \\ tasks, including but not limited to, assessing the patient to obtain pertinent data, gathering medications, confirming the five rights (right dose, patient, route, medication, and time), administering the medications, documenting administration, and observing for therapeutic and untoward effects. In studies of the delivery of nursing care in acute care settings, Potter et al. (2005) found that nurses spent 16% their time preparing or administering medication. In addition to the amount of time that the nurses spent in preparing and administering medication, Potter et al found that a significant number of interruptions occurred during this critical process. Interruptions impact the cognitive workload of the nurse, and create an environment where medication errors are more likely to occur. A second environmental factor that affects the nurses’ workflow, is the distance traveled to administer care during a shift. Welker, Decker, Adam, & Zone-Smith (2006) found that on average, ward nurses who were assigned three patients walked just over 4.1 miles per shift while a nurse assigned to six patients walked over 4.8 miles. 
As a large number of interruptions (22%) occurred within the medication rooms, which were highly visible and in high traffic locations (Potter et al., 2005), and while collecting supplies or traveling to and from patient rooms (Ebright, Patterson, Chalko, & Render, 2003), reducing the distances and frequency of repeated travel could have the ability to decrease the number of interruptions and possibly errors in medication administration. Adding new technology, revising policies and procedures, and providing more education have often been the approaches taken to reduce medication errors. Unfortunately these new technologies, such as computerized order entry and electronic medical records / charting, and new procedures, for instance bar code scanning both the medicine and the patient, can add complexity to the nurse’s taskload. The added complexity in correspondence with the additional time necessary to complete the additional steps can lead to workarounds and variations in care. Given the problems in the current medication administration processes, this work focused on facilitating the nurse’s role in the medication administration process. This study expands on the Braswell and Duggar (2006) investigation and compares processes at baseline and postintroduction of a new mobile medication system. To do this, the current medication administration and distribution process was fully documented to determine a baseline in workload complexity. Then a new mobile medication center was installed to allow nurses easier access to patient medications while traveling on the floor, and the medication administration and distribution process was remapped to demonstrate where process complexities were reduced and nurse workflow is more efficient. A similar study showed that the time nurses spend gathering medications and supplies can be dramatically reduced through this type of system (see Braswell & Duggar, 2006); however, they did not directly investigate the impact on the nursing process. Thus, this research is presented to document the impact of this technology on the nursing workflow at a regional hospital, and as an expansion on the work begun by Braswell and Duggar.",
"title": ""
},
{
"docid": "neg:1840385_11",
"text": "Recently proposed universal filtered multicarrier (UFMC) system is not an orthogonal system in multipath channel environments and might cause significant performance loss. In this paper, the authors propose a cyclic prefix (CP) based UFMC system and first analyze the conditions for interference-free one-tap equalization in the absence of transceiver imperfections. Then the corresponding signal model and output signal-to-noise ratio expression are derived. In the presence of carrier frequency offset, timing offset, and insufficient CP length, the authors establish an analytical system model as a summation of desired signal, intersymbol interference, intercarrier interference, and noise. New channel equalization algorithms are proposed based on the derived analytical signal model. Numerical results show that the derived model matches the simulation results precisely, and the proposed equalization algorithms improve the UFMC system performance in terms of bit error rate.",
"title": ""
},
{
"docid": "neg:1840385_12",
"text": "Insulin resistance plays a major role in the pathogenesis of the metabolic syndrome and type 2 diabetes, and yet the mechanisms responsible for it remain poorly understood. Magnetic resonance spectroscopy studies in humans suggest that a defect in insulin-stimulated glucose transport in skeletal muscle is the primary metabolic abnormality in insulin-resistant patients with type 2 diabetes. Fatty acids appear to cause this defect in glucose transport by inhibiting insulin-stimulated tyrosine phosphorylation of insulin receptor substrate-1 (IRS-1) and IRS-1-associated phosphatidylinositol 3-kinase activity. A number of different metabolic abnormalities may increase intramyocellular and intrahepatic fatty acid metabolites; these include increased fat delivery to muscle and liver as a consequence of either excess energy intake or defects in adipocyte fat metabolism, and acquired or inherited defects in mitochondrial fatty acid oxidation. Understanding the molecular and biochemical defects responsible for insulin resistance is beginning to unveil novel therapeutic targets for the treatment of the metabolic syndrome and type 2 diabetes.",
"title": ""
},
{
"docid": "neg:1840385_13",
"text": "Personalized curriculum sequencing is an important research issue for web-based learning systems because no fixed learning paths will be appropriate for all learners. Therefore, many researchers focused on developing e-learning systems with personalized learning mechanisms to assist on-line web-based learning and adaptively provide learning paths in order to promote the learning performance of individual learners. However, most personalized e-learning systems usually neglect to consider if learner ability and the difficulty level of the recommended courseware are matched to each other while performing personalized learning services. Moreover, the problem of concept continuity of learning paths also needs to be considered while implementing personalized curriculum sequencing because smooth learning paths enhance the linked strength between learning concepts. Generally, inappropriate courseware leads to learner cognitive overload or disorientation during learning processes, thus reducing learning performance. Therefore, compared to the freely browsing learning mode without any personalized learning path guidance used in most web-based learning systems, this paper assesses whether the proposed genetic-based personalized e-learning system, which can generate appropriate learning paths according to the incorrect testing responses of an individual learner in a pre-test, provides benefits in terms of learning performance promotion while learning. Based on the results of pre-test, the proposed genetic-based personalized e-learning system can conduct personalized curriculum sequencing through simultaneously considering courseware difficulty level and the concept continuity of learning paths to support web-based learning. Experimental results indicated that applying the proposed genetic-based personalized e-learning system for web-based learning is superior to the freely browsing learning mode because of high quality and concise learning path for individual learners. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840385_14",
"text": "Microblogs have recently received widespread interest from NLP researchers. However, current tools for Japanese word segmentation and POS tagging still perform poorly on microblog texts. We developed an annotated corpus and proposed a joint model for overcoming this situation. Our annotated corpus of microblog texts enables not only training of accurate statistical models but also quantitative evaluation of their performance. Our joint model with lexical normalization handles the orthographic diversity of microblog texts. We conducted an experiment to demonstrate that the corpus and model substantially contribute to boosting accuracy.",
"title": ""
},
{
"docid": "neg:1840385_15",
"text": "Ehlers-Danlos syndrome is an inherited heterogeneous group of connective tissue disorders, characterized by abnormal collagen synthesis, affecting skin, ligaments, joints, blood vessels and other organs. It is one of the oldest known causes of bruising and bleeding and was first described by Hipprocrates in 400 BC. Edvard Ehlers, in 1901, recognized the condition as a distinct entity. In 1908, Henri-Alexandre Danlos suggested that skin extensibility and fragility were the cardinal features of the syndrome. In 1998, Beighton published the classification of Ehlers-Danlos syndrome according to the Villefranche nosology. From the 1960s the genetic make up was identified. Management of bleeding problems associated with Ehlers-Danlos has been slow to progress.",
"title": ""
},
{
"docid": "neg:1840385_16",
"text": "Graph-based methods have gained attention in many areas of Natural Language Processing (NLP) including Word Sense Disambiguation (WSD), text summarization, keyword extraction and others. Most of the work in these areas formulate their problem in a graph-based setting and apply unsupervised graph clustering to obtain a set of clusters. Recent studies suggest that graphs often exhibit a hierarchical structure that goes beyond simple flat clustering. This paper presents an unsupervised method for inferring the hierarchical grouping of the senses of a polysemous word. The inferred hierarchical structures are applied to the problem of word sense disambiguation, where we show that our method performs significantly better than traditional graph-based methods and agglomerative clustering yielding improvements over state-of-the-art WSD systems based on sense induction.",
"title": ""
},
{
"docid": "neg:1840385_17",
"text": "PURPOSE\nTitanium based implant systems, though considered as the gold standard for rehabilitation of edentulous spaces, have been criticized for many inherent flaws. The onset of hypersensitivity reactions, biocompatibility issues, and an unaesthetic gray hue have raised demands for more aesthetic and tissue compatible material for implant fabrication. Zirconia is emerging as a promising alternative to conventional Titanium based implant systems for oral rehabilitation with superior biological, aesthetics, mechanical and optical properties. This review aims to critically analyze and review the credibility of Zirconia implants as an alternative to Titanium for prosthetic rehabilitation.\n\n\nSTUDY SELECTION\nThe literature search for articles written in the English language in PubMed and Cochrane Library database from 1990 till December 2016. The following search terms were utilized for data search: \"zirconia implants\" NOT \"abutment\", \"zirconia implants\" AND \"titanium implants\" AND \"osseointegration\", \"zirconia implants\" AND compatibility.\n\n\nRESULTS\nThe number of potential relevant articles selected were 47. All the human in vivo clinical, in vitro, animals' studies were included and discussed under the following subheadings: Chemical composition, structure and phases; Physical and mechanical properties; Aesthetic and optical properties; Osseointegration and biocompatibility; Surface modifications; Peri-implant tissue compatibility, inflammation and soft tissue healing, and long-term prognosis.\n\n\nCONCLUSIONS\nZirconia implants are a promising alternative to titanium with a superior soft-tissue response, biocompatibility, and aesthetics with comparable osseointegration. However, further long-term longitudinal and comparative clinical trials are required to validate zirconia as a viable alternative to the titanium implant.",
"title": ""
},
{
"docid": "neg:1840385_18",
"text": "Dataflow programming models are suitable to express multi-core streaming applications. The design of high-quality embedded systems in that context requires static analysis to ensure the liveness and bounded memory of the application. However, many streaming applications have a dynamic behavior. The previously proposed dataflow models for dynamic applications do not provide any static guarantees or only in exchange of significant restrictions in expressive power or automation. To overcome these restrictions, we propose the schedulable parametric dataflow (SPDF) model. We present static analyses and a quasi-static scheduling algorithm. We demonstrate our approach using a video decoder case study.",
"title": ""
},
{
"docid": "neg:1840385_19",
"text": "Homomorphic cryptography has been one of the most interesting topics of mathematics and computer security since Gentry presented the first construction of a fully homomorphic encryption (FHE) scheme in 2009. Since then, a number of different schemes have been found, that follow the approach of bootstrapping a fully homomorphic scheme from a somewhat homomorphic foundation. All existing implementations of these systems clearly proved, that fully homomorphic encryption is not yet practical, due to significant performance limitations. However, there are many applications in the area of secure methods for cloud computing, distributed computing and delegation of computation in general, that can be implemented with homomorphic encryption schemes of limited depth. We discuss a simple algebraically homomorphic scheme over the integers that is based on the factorization of an approximate semiprime integer. We analyze the properties of the scheme and provide a couple of known protocols that can be implemented with it. We also provide a detailed discussion on searching with encrypted search terms and present implementations and performance figures for the solutions discussed in this paper.",
"title": ""
}
] |
1840386 | Information Filtering and Information Retrieval: Two Sides of the Same Coin? | [
{
"docid": "pos:1840386_0",
"text": "Gerard Salton has worked on a number of non-numeric computer applications including automatic information retrieval. He has published a large number of articles and several books on information retrieval, most recently, \"Introduction to Modern Information Retrieval,\" 1983. Edward Fox is currently assistant professor of computer science at VPI. Harry Wu is currently involved in the comparative study of relational database systems. His main interest is in information storage and retrieval. Authors' Present Addresses: G. Salton, Dept. of Computer Science, Cornell Univ., Ithaca, NY 14853; E. A. Fox, Virginia Polytechnic Institute and State Univ., Blacksburg, VA 24061; H. Wu, ITT--Programming Technology Center, lO00 Oronogue Lane, Stratford, CT 06497. This study was supported in part by the National Science Foundation under Grant IST-8108696. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. © 1983 ACM 0001-0782/83/1100-1022 75¢ 1. CONVENTIONAL RETRIEVAL STRATEGIES In conventional information retrieval, the stored records are normally identified by sets of key words or",
"title": ""
},
{
"docid": "pos:1840386_1",
"text": "This article reviews recent research into the use of hierarchic agglomerative clustering methods for document retrieval. After an introduction to the calculation of interdocument similarities and to clustering methods that are appropriate for document clustering, the article discusses algorithms that can be used to allow the implementation of these methods on databases of nontrivial size. The validation of document hierarchies is described using tests based on the theory of random graphs and on empirical characteristics of document collections that are to be clustered. A range of search strategies is available for retrieval from document hierarchies and the results are presented of a series of research projects that have used these strategies to search the clusters resulting from several different types of hierarchic agglomerative clustering method. It is suggested that the complete linkage method is probably the most effective method in terms of retrieval performance; however, it is also difficult to implement in an efficient manner. Other applications of document clustering techniques are discussed briefly; experimental evidence suggests that nearest neighbor clusters, possibly represented as a network model, provide a reasonably efficient and effective means of including interdocument similarity information in document retrieval systems.",
"title": ""
}
] | [
{
"docid": "neg:1840386_0",
"text": "This paper concerns the problem of pose estimation for an inertial-visual sensor. It is well known that IMU bias, and calibration errors between camera and IMU frames can impair the achievement of high-quality estimates through the fusion of visual and inertial data. The main contribution of this work is the design of new observers to estimate pose, IMU bias and camera-to-IMU rotation. The observers design relies on an extension of the so-called passive complementary filter on SO(3). Stability of the observers is established using Lyapunov functions under adequate observability conditions. Experimental results are presented to assess this approach.",
"title": ""
},
{
"docid": "neg:1840386_1",
"text": "There is an ongoing debate over the capabilities of hierarchical neural feedforward architectures for performing real-world invariant object recognition. Although a variety of hierarchical models exists, appropriate supervised and unsupervised learning methods are still an issue of intense research. We propose a feedforward model for recognition that shares components like weight sharing, pooling stages, and competitive nonlinearities with earlier approaches but focuses on new methods for learning optimal feature-detecting cells in intermediate stages of the hierarchical network. We show that principles of sparse coding, which were previously mostly applied to the initial feature detection stages, can also be employed to obtain optimized intermediate complex features. We suggest a new approach to optimize the learning of sparse features under the constraints of a weight-sharing or convolutional architecture that uses pooling operations to achieve gradual invariance in the feature hierarchy. The approach explicitly enforces symmetry constraints like translation invariance on the feature set. This leads to a dimension reduction in the search space of optimal features and allows determining more efficiently the basis representatives, which achieve a sparse decomposition of the input. We analyze the quality of the learned feature representation by investigating the recognition performance of the resulting hierarchical network on object and face databases. We show that a hierarchy with features learned on a single object data set can also be applied to face recognition without parameter changes and is competitive with other recent machine learning recognition approaches. To investigate the effect of the interplay between sparse coding and processing nonlinearities, we also consider alternative feedforward pooling nonlinearities such as presynaptic maximum selection and sum-of-squares integration. The comparison shows that a combination of strong competitive nonlinearities with sparse coding offers the best recognition performance in the difficult scenario of segmentation-free recognition in cluttered surround. We demonstrate that for both learning and recognition, a precise segmentation of the objects is not necessary.",
"title": ""
},
{
"docid": "neg:1840386_2",
"text": "The epidermal growth factor receptor (EGFR) contributes to the pathogenesis of head&neck squamous cell carcinoma (HNSCC). However, only a subset of HNSCC patients benefit from anti-EGFR targeted therapy. By performing an unbiased proteomics screen, we found that the calcium-activated chloride channel ANO1 interacts with EGFR and facilitates EGFR-signaling in HNSCC. Using structural mutants of EGFR and ANO1 we identified the trans/juxtamembrane domain of EGFR to be critical for the interaction with ANO1. Our results show that ANO1 and EGFR form a functional complex that jointly regulates HNSCC cell proliferation. Expression of ANO1 affected EGFR stability, while EGFR-signaling elevated ANO1 protein levels, establishing a functional and regulatory link between ANO1 and EGFR. Co-inhibition of EGFR and ANO1 had an additive effect on HNSCC cell proliferation, suggesting that co-targeting of ANO1 and EGFR could enhance the clinical potential of EGFR-targeted therapy in HNSCC and might circumvent the development of resistance to single agent therapy. HNSCC cell lines with amplification and high expression of ANO1 showed enhanced sensitivity to Gefitinib, suggesting ANO1 overexpression as a predictive marker for the response to EGFR-targeting agents in HNSCC therapy. Taken together, our results introduce ANO1 as a promising target and/or biomarker for EGFR-directed therapy in HNSCC.",
"title": ""
},
{
"docid": "neg:1840386_3",
"text": "This study evaluated the clinical efficacy of 2% chlorhexidine (CHX) gel on intracanal bacteria reduction during root canal instrumentation. The additional antibacterial effect of an intracanal dressing (Ca[OH](2) mixed with 2% CHX gel) was also assessed. Forty-three patients with apical periodontitis were recruited. Four patients with irreversible pulpitis were included as negative controls. Teeth were instrumented using rotary instruments and 2% CHX gel as the disinfectant. Bacterial samples were taken upon access (S1), after instrumentation (S2), and after 2 weeks of intracanal dressing (S3). Anaerobic culture was performed. Four samples showed no bacteria growth at S1, which were excluded from further analysis. Of the samples cultured positively at S1, 10.3% (4/39) and 8.3% (4/36) sampled bacteria at S2 and S3, respectively. A significant difference in the percentage of positive culture between S1 and S2 (p < 0.001) but not between S2 and S3 (p = 0.692) was found. These results suggest that 2% CHX gel is an effective root canal disinfectant and additional intracanal dressing did not significantly improve the bacteria reduction on the sampled root canals.",
"title": ""
},
{
"docid": "neg:1840386_4",
"text": "It is projected that increasing on-chip integration with technology scaling will lead to the so-called dark silicon era in which more transistors are available on a chip than can be simultaneously powered on. It is conventionally assumed that the dark silicon will be provisioned with heterogeneous resources, for example dedicated hardware accelerators. In this paper we challenge the conventional assumption and build a case for homogeneous dark silicon CMPs that exploit the inherent variations in process parameters that exist in scaled technologies to offer increased performance. Since process variations result in core-to-core variations in power and frequency, the idea is to cherry pick the best subset of cores for an application so as to maximize performance within the power budget. To this end, we propose a polynomial time algorithm for optimal core selection, thread mapping and frequency assignment for a large class of multi-threaded applications. Our experimental results based on the Sniper multi-core simulator show that up to 22% and 30% performance improvement is observed for homogeneous CMPs with 33% and 50% dark silicon, respectively.",
"title": ""
},
{
"docid": "neg:1840386_5",
"text": "Artificial intelligence (AI) is the core technology of technological revolution and industrial transformation. As one of the new intelligent needs in the AI 2.0 era, financial intelligence has elicited much attention from the academia and industry. In our current dynamic capital market, financial intelligence demonstrates a fast and accurate machine learning capability to handle complex data and has gradually acquired the potential to become a \"financial brain\". In this work, we survey existing studies on financial intelligence. First, we describe the concept of financial intelligence and elaborate on its position in the financial technology field. Second, we introduce the development of financial intelligence and review state-of-the-art techniques in wealth management, risk management, financial security, financial consulting, and blockchain. Finally, we propose a research framework called FinBrain and summarize four open issues, namely, explainable financial agents and causality, perception and prediction under uncertainty, risk-sensitive and robust decision making, and multi-agent game and mechanism design. We believe that these research directions can lay the foundation for the development of AI 2.0 in the finance field.",
"title": ""
},
{
"docid": "neg:1840386_6",
"text": "With the increasing popularity of Web 2.0 streams, people become overwhelmed by the available information. This is partly countered by tagging blog posts and tweets, so that users can filter messages according to their tags. However, this is insufficient for detecting newly emerging topics that are not reflected by a single tag but are rather expressed by unusual tag combinations. This paper presents enBlogue, an approach for automatically detecting such emergent topics. EnBlogue uses a time-sliding window to compute statistics about tags and tag-pairs. These statistics are then used to identify unusual shifts in correlations, most of the time caused by real-world events. We analyze the strength of these shifts and measure the degree of unpredictability they include, used to rank tag-pairs expressing emergent topics. Additionally, this \"indicator of surprise\" is carried over to subsequent time points, as user interests do not abruptly vanish from one moment to the other. To avoid monitoring all tag-pairs we can also select a subset of tags, e. g., the most popular or volatile of them, to be used as seed-tags for subsequent pair-wise correlation computations. The system is fully implemented and publicly available on the Web, processing live Twitter data. We present experimental studies based on real world datasets demonstrating both the prediction quality by means of a user study and the efficiency of enBlogue.",
"title": ""
},
{
"docid": "neg:1840386_7",
"text": "We propose a machine learning framework to capture the dynamics of highfrequency limit order books in financial equity markets and automate real-time prediction of metrics such as mid-price movement and price spread crossing. By characterizing each entry in a limit order book with a vector of attributes such as price and volume at different levels, the proposed framework builds a learning model for each metric with the help of multi-class support vector machines (SVMs). Experiments with real data establish that features selected by the proposed framework are effective for short term price movement forecasts.",
"title": ""
},
{
"docid": "neg:1840386_8",
"text": "Leader election protocols are a fundamental building block for replicated distributed services. They ease the design of leader-based coordination protocols that tolerate failures. In partially synchronous systems, designing a leader election algorithm, that does not permit multiple leaders while the system is unstable, is a complex task. As a result many production systems use third-party distributed coordination services, such as ZooKeeper and Chubby, to provide a reliable leader election service. However, adding a third-party service such as ZooKeeper to a distributed system incurs additional operational costs and complexity. ZooKeeper instances must be kept running on at least three machines to ensure its high availability. In this paper, we present a novel leader election protocol using NewSQL databases for partially synchronous systems, that ensures at most one leader at any given time. The leader election protocol uses the database as distributed shared memory. Our work enables distributed systems that already use NewSQL databases to save the operational overhead of managing an additional third-party service for leader election. Our main contribution is the design, implementation and validation of a practical leader election algorithm, based on NewSQL databases, that has performance comparable to a leader election implementation using a state-of-the-art distributed coordination service, ZooKeeper.",
"title": ""
},
{
"docid": "neg:1840386_9",
"text": "In a typical content-based image retrieval (CBIR) system, query results are a set of images sorted by feature similarities with respect to the query. However, images with high feature similarities to the query may be very different from the query in terms of semantics. This is known as the semantic gap. We introduce a novel image retrieval scheme, CLUster-based rEtrieval of images by unsupervised learning (CLUE), which tackles the semantic gap problem based on a hypothesis: semantically similar images tend to be clustered in some feature space. CLUE attempts to capture semantic concepts by learning the way that images of the same semantics are similar and retrieving image clusters instead of a set of ordered images. Clustering in CLUE is dynamic. In particular, clusters formed depend on which images are retrieved in response to the query. Therefore, the clusters give the algorithm as well as the users semantic relevant clues as to where to navigate. CLUE is a general approach that can be combined with any real-valued symmetric similarity measure (metric or nonmetric). Thus it may be embedded in many current CBIR systems. Experimental results based on a database of about 60, 000 images from COREL demonstrate improved performance.",
"title": ""
},
{
"docid": "neg:1840386_10",
"text": "Loss of volume in the temples is an early sign of aging that is often overlooked by both the physician and the patient. Augmentation of the temple using soft tissue fillers improves the contours of the upper face with the secondary effect of lengthening and lifting the lateral brow. After replacement of volume, treatment of the overlying skin with skin-tightening devices or laser resurfacing help to complete a comprehensive rejuvenation of the temple and upper one-third of the face.",
"title": ""
},
{
"docid": "neg:1840386_11",
"text": "The seminal 2003 paper by Cosley, Lab, Albert, Konstan, and Reidl, demonstrated the susceptibility of recommender systems to rating biases. To facilitate browsing and selection, almost all recommender systems display average ratings before accepting ratings from users which has been shown to bias ratings. This effect is called Social Inuence Bias (SIB); the tendency to conform to the perceived \\norm\" in a community. We propose a methodology to 1) learn, 2) analyze, and 3) mitigate the effect of SIB in recommender systems. In the Learning phase, we build a baseline dataset by allowing users to rate twice: before and after seeing the average rating. In the Analysis phase, we apply a new non-parametric significance test based on the Wilcoxon statistic to test whether the data is consistent with SIB. If significant, we propose a Mitigation phase using polynomial regression and the Bayesian Information Criterion (BIC) to predict unbiased ratings. We evaluate our approach on a dataset of 9390 ratings from the California Report Card (CRC), a rating-based system designed to encourage political engagement. We found statistically significant evidence of SIB. Mitigating models were able to predict changed ratings with a normalized RMSE of 12.8% and reduce bias by 76.3%. The CRC, our data, and experimental code are available at: http://californiareportcard.org/data/",
"title": ""
},
{
"docid": "neg:1840386_12",
"text": "Objective This systematic review aims to summarize current evidence on which naturally present cannabinoids contribute to cannabis psychoactivity, considering their reported concentrations and pharmacodynamics in humans. Design Following PRISMA guidelines, papers published before March 2016 in Medline, Scopus-Elsevier, Scopus, ISI-Web of Knowledge and COCHRANE, and fulfilling established a-priori selection criteria have been included. Results In 40 original papers, three naturally present cannabinoids (∆-9-Tetrahydrocannabinol, ∆-8-Tetrahydrocannabinol and Cannabinol) and one human metabolite (11-OH-THC) had clinical relevance. Of these, the metabolite produces the greatest psychoactive effects. Cannabidiol (CBD) is not psychoactive but plays a modulating role on cannabis psychoactive effects. The proportion of 9-THC in plant material is higher (up to 40%) than in other cannabinoids (up to 9%). Pharmacodynamic reports vary due to differences in methodological aspects (doses, administration route and volunteers' previous experience with cannabis). Conclusions Findings reveal that 9-THC contributes the most to cannabis psychoactivity. Due to lower psychoactive potency and smaller proportions in plant material, other psychoactive cannabinoids have a weak influence on cannabis final effects. Current lack of standard methodology hinders homogenized research on cannabis health effects. Working on a standard cannabis unit considering 9-THC is recommended.",
"title": ""
},
{
"docid": "neg:1840386_13",
"text": "The large availability of biomedical data brings opportunities and challenges to health care. Representation of medical concepts has been well studied in many applications, such as medical informatics, cohort selection, risk prediction, and health care quality measurement. In this paper, we propose an efficient multichannel convolutional neural network (CNN) model based on multi-granularity embeddings of medical concepts named MG-CNN, to examine the effect of individual patient characteristics including demographic factors and medical comorbidities on total hospital costs and length of stay (LOS) by using the Hospital Quality Monitoring System (HQMS) data. The proposed embedding method leverages prior medical hierarchical ontology and improves the quality of embedding for rare medical concepts. The embedded vectors are further visualized by the t-Distributed Stochastic Neighbor Embedding (t-SNE) technique to demonstrate the effectiveness of grouping related medical concepts. Experimental results demonstrate that our MG-CNN model outperforms traditional regression methods based on the one-hot representation of medical concepts, especially in the outcome prediction tasks for patients with low-frequency medical events. In summary, MG-CNN model is capable of mining potential knowledge from the clinical data and will be broadly applicable in medical research and inform clinical decisions.",
"title": ""
},
{
"docid": "neg:1840386_14",
"text": "Code quality metrics are widely used to identify design flaws (e.g., code smells) as well as to act as fitness functions for refactoring recommenders. Both these applications imply a strong assumption: quality metrics are able to assess code quality as perceived by developers. Indeed, code smell detectors and refactoring recommenders should be able to identify design flaws/recommend refactorings that are meaningful from the developer's point-of-view. While such an assumption might look reasonable, there is limited empirical evidence supporting it. We aim at bridging this gap by empirically investigating whether quality metrics are able to capture code quality improvement as perceived by developers. While previous studies surveyed developers to investigate whether metrics align with their perception of code quality, we mine commits in which developers clearly state in the commit message their aim of improving one of four quality attributes: cohesion, coupling, code readability, and code complexity. Then, we use state-of-the-art metrics to assess the change brought by each of those commits to the specific quality attribute it targets. We found that, more often than not the considered quality metrics were not able to capture the quality improvement as perceived by developers (e.g., the developer states \"improved the cohesion of class C\", but no quality metric captures such an improvement).",
"title": ""
},
{
"docid": "neg:1840386_15",
"text": "In the current object detection field, one of the fastest algorithms is the Single Shot Multi-Box Detector (SSD), which uses a single convolutional neural network to detect the object in an image. Although SSD is fast, there is a big gap compared with the state-of-the-art on mAP. In this paper, we propose a method to improve SSD algorithm to increase its classification accuracy without affecting its speed. We adopt the Inception block to replace the extra layers in SSD, and call this method Inception SSD (I-SSD). The proposed network can catch more information without increasing the complexity. In addition, we use the batch-normalization (BN) and the residual structure in our I-SSD network architecture. Besides, we propose an improved non-maximum suppression method to overcome its deficiency on the expression ability of the model. The proposed I-SSD algorithm achieves 78.6% mAP on the Pascal VOC2007 test, which outperforms SSD algorithm while maintaining its time performance. We also construct an Outdoor Object Detection (OOD) dataset to testify the effectiveness of the proposed I-SSD on the platform of unmanned vehicles.",
"title": ""
},
{
"docid": "neg:1840386_16",
"text": "We explore frame-level audio feature learning for chord recognition using artificial neural networks. We present the argument that chroma vectors potentially hold enough information to model harmonic content of audio for chord recognition, but that standard chroma extractors compute too noisy features. This leads us to propose a learned chroma feature extractor based on artificial neural networks. It is trained to compute chroma features that encode harmonic information important for chord recognition, while being robust to irrelevant interferences. We achieve this by feeding the network an audio spectrum with context instead of a single frame as input. This way, the network can learn to selectively compensate noise and resolve harmonic ambiguities. We compare the resulting features to hand-crafted ones by using a simple linear frame-wise classifier for chord recognition on various data sets. The results show that the learned feature extractor produces superior chroma vectors for chord recognition.",
"title": ""
},
{
"docid": "neg:1840386_17",
"text": "Cutting and packing problems are encountered in many industries, with different industries incorporating different constraints and objectives. The wood-, glassand paper industry are mainly concerned with the cutting of regular figures, whereas in the ship building, textile and leather industry irregular, arbitrary shaped items are to be packed. In this paper two genetic algorithms are described for a rectangular packing problem. Both GAs are hybridised with a heuristic placement algorithm, one of which is the well-known Bottom-Left routine. A second placement method has been developed which overcomes some of the disadvantages of the Bottom-Left rule. The two hybrid genetic algorithms are compared with heuristic placement algorithms. In order to show the effectiveness of the design of the two genetic algorithms, their performance is compared to random search.",
"title": ""
},
{
"docid": "neg:1840386_18",
"text": "A wide range of defenses have been proposed to harden neural networks against adversarial attacks. However, a pattern has emerged in which the majority of adversarial defenses are quickly broken by new attacks. Given the lack of success at generating robust defenses, we are led to ask a fundamental question: Are adversarial attacks inevitable? This paper analyzes adversarial examples from a theoretical perspective, and identifies fundamental bounds on the susceptibility of a classifier to adversarial attacks. We show that, for certain classes of problems, adversarial examples are inescapable. Using experiments, we explore the implications of theoretical guarantees for real-world problems and discuss how factors such as dimensionality and image complexity limit a classifier’s robustness against adversarial examples.",
"title": ""
},
{
"docid": "neg:1840386_19",
"text": "In this paper, we propose text summarization method that creates text summary by definition of the relevance score of each sentence and extracting sentences from the original documents. While summarization this method takes into account weight of each sentence in the document. The essence of the method suggested is in preliminary identification of every sentence in the document with characteristic vector of words, which appear in the document, and calculation of relevance score for each sentence. The relevance score of sentence is determined through its comparison with all the other sentences in the document and with the document title by cosine measure. Prior to application of this method the scope of features is defined and then the weight of each word in the sentence is calculated with account of those features. The weights of features, influencing relevance of words, are determined using genetic algorithms.",
"title": ""
}
] |
1840387 | Real-Time Machine Learning: The Missing Pieces | [
{
"docid": "pos:1840387_0",
"text": "This paper introduces CIEL, a universal execution engine for distributed data-flow programs. Like previous execution engines, CIEL masks the complexity of distributed programming. Unlike those systems, a CIEL job can make data-dependent control-flow decisions, which enables it to compute iterative and recursive algorithms. We have also developed Skywriting, a Turingcomplete scripting language that runs directly on CIEL. The execution engine provides transparent fault tolerance and distribution to Skywriting scripts and highperformance code written in other programming languages. We have deployed CIEL on a cloud computing platform, and demonstrate that it achieves scalable performance for both iterative and non-iterative algorithms.",
"title": ""
}
] | [
{
"docid": "neg:1840387_0",
"text": "In this paper, we present a novel method to fuse observations from an inertial measurement unit (IMU) and visual sensors, such that initial conditions of the inertial integration, including gravity estimation, can be recovered quickly and in a linear manner, thus removing any need for special initialization procedures. The algorithm is implemented using a graphical simultaneous localization and mapping like approach that guarantees constant time output. This paper discusses the technical aspects of the work, including observability and the ability for the system to estimate scale in real time. Results are presented of the system, estimating the platforms position, velocity, and attitude, as well as gravity vector and sensor alignment and calibration on-line in a built environment. This paper discusses the system setup, describing the real-time integration of the IMU data with either stereo or monocular vision data. We focus on human motion for the purposes of emulating high-dynamic motion, as well as to provide a localization system for future human-robot interaction.",
"title": ""
},
{
"docid": "neg:1840387_1",
"text": "Osseous free flaps have become the preferred method for reconstructing segmental mandibular defects. Of 457 head and neck free flaps, 150 osseous mandible reconstructions were performed over a 10-year period. This experience was retrospectively reviewed to establish an approach to osseous free flap mandible reconstruction. There were 94 male and 56 female patients (mean age, 50 years; range 3 to 79 years); 43 percent had hemimandibular defects, and the rest had central, lateral, or a combination defect. Donor sites included the fibula (90 percent), radius (4 percent), scapula (4 percent), and ilium (2 percent). Rigid fixation (up to five osteotomy sites) was used in 98 percent of patients. Aesthetic and functional results were evaluated a minimum of 6 months postoperatively. The free flap success rate was 100 percent, and bony union was achieved in 97 percent of the osteotomy sites. Osseointegrated dental implants were placed in 20 patients. A return to an unrestricted diet was achieved in 45 percent of patients; 45 percent returned to a soft diet, and 5 percent were on a liquid diet. Five percent of patients required enteral feeding to maintain weight. Speech was assessed as normal (36 percent), near normal (27 percent), intelligible (28 percent), or unintelligible (9 percent). Aesthetic outcome was judged as excellent (32 percent), good (27 percent), fair (27 percent), or poor (14 percent). This study demonstrates a very high success rate, with good-to-excellent functional and aesthetic results using osseous free flaps for primary mandible reconstruction. The fibula donor site should be the first choice for most cases, particularly those with anterior or large bony defects requiring multiple osteotomies. Use of alternative donor sites (i.e., radius and scapula) is best reserved for cases with large soft-tissue and minimal bone requirements. The ilium is recommended only when other options are unavailable. Thoughtful flap selection and design should supplant the need for multiple, simultaneous free flaps and vein grafting in most cases.",
"title": ""
},
{
"docid": "neg:1840387_2",
"text": "The mobility of carriers in a silicon surface inversion layer is one of the most important parameters required to accurately model and predict MOSFET device and circuit performance. It has been found that electron mobility follows a universal curve when plotted as a function of an effective normal field regardless of substrate bias, substrate doping (≤ 1017 cm-3) and nominal process variations [1]. Although accurate modeling of p-channel MOS devices has become important due to the prevalence of CMOS technology, the existence of a universal hole mobility-field relationship has not been demonstrated. Furthermore, the effect on mobility of low-temperature and rapid high-temperature processing, which are commonly used in modern VLSI technology to control impurity diffusion, is unknown.",
"title": ""
},
{
"docid": "neg:1840387_3",
"text": "This paper presentes a novel algorithm for the voxelization of surface models of arbitrary topology. Our algorithm uses the depth and stencil buffers, available in most commercial graphics hardware, to achieve high performance. It is suitable for both polygonal meshes and parametric surfaces. Experiments highlight the advantages and limitations of our approach.",
"title": ""
},
{
"docid": "neg:1840387_4",
"text": "Although the concept of industrial cobots dates back to 1999, most present day hybrid human-machine assembly systems are merely weight compensators. Here, we present results on the development of a collaborative human-robot manufacturing cell for homokinetic joint assembly. The robot alternates active and passive behaviours during assembly, to lighten the burden on the operator in the first case, and to comply to his/her needs in the latter. Our approach can successfully manage direct physical contact between robot and human, and between robot and environment. Furthermore, it can be applied to standard position (and not torque) controlled robots, common in the industry. The approach is validated in a series of assembly experiments. The human workload is reduced, diminishing the risk of strain injuries. Besides, a complete risk analysis indicates that the proposed setup is compatible with the safety standards, and could be certified.",
"title": ""
},
{
"docid": "neg:1840387_5",
"text": "The large amount of text data which are continuously produced over time in a variety of large scale applications such as social networks results in massive streams of data. Typically massive text streams are created by very large scale interactions of individuals, or by structured creations of particular kinds of content by dedicated organizations. An example in the latter category would be the massive text streams created by news-wire services. Such text streams provide unprecedented challenges to data mining algorithms from an efficiency perspective. In this paper, we review text stream mining algorithms for a wide variety of problems in data mining such as clustering, classification and topic modeling. A recent challenge arises in the context of social streams, which are generated by large social networks such as Twitter. We also discuss a number of future challenges in this area of research.",
"title": ""
},
{
"docid": "neg:1840387_6",
"text": "PURPOSE\nThe aim of the study was to explore the impact of a permanent stoma on patients' everyday lives and to gain further insight into their need for ostomy-related education.\n\n\nSUBJECTS AND SETTING\nThe sample population comprised 15 persons with permanent ostomies. Stomas were created to manage colorectal cancer or inflammatory bowel disease. The research setting was the surgical department at a hospital in the Capitol Region of Denmark associated with the University of Copenhagen.\n\n\nMETHODS\nFocus group interviews were conducted using a phenomenological hermeneutic approach. Data were collected and analyzed using qualitative content analysis.\n\n\nRESULTS\nStoma creation led to feelings of stigma, worries about disclosure, a need for control and self-imposed limits. Furthermore, patients experienced difficulties identifying their new lives with their lives before surgery. Participants stated they need to be seen as a whole person, to have close contact with health care professionals, and receive trustworthy information about life with an ostomy. Respondents proposed group sessions conducted after hospital discharge. They further recommended that sessions be delivered by lay teachers who had a stoma themselves.\n\n\nCONCLUSIONS\nSelf-imposed isolation was often selected as a strategy for avoiding disclosing the presence of a stoma. Patient education, using health promotional methods, should take the settings into account and patients' possibility of effective knowledge transfer. Respondents recommend involvement of lay teachers, who have a stoma, and group-based learning processes are proposed, when planning and conducting patient education.",
"title": ""
},
{
"docid": "neg:1840387_7",
"text": "In this article, we review gas sensor application of one-dimensional (1D) metal-oxide nanostructures with major emphases on the types of device structure and issues for realizing practical sensors. One of the most important steps in fabricating 1D-nanostructure devices is manipulation and making electrical contacts of the nanostructures. Gas sensors based on individual 1D nanostructure, which were usually fabricated using electron-beam lithography, have been a platform technology for fundamental research. Recently, gas sensors with practical applicability were proposed, which were fabricated with an array of 1D nanostructures using scalable micro-fabrication tools. In the second part of the paper, some critical issues are pointed out including long-term stability, gas selectivity, and room-temperature operation of 1D-nanostructure-based metal-oxide gas sensors.",
"title": ""
},
{
"docid": "neg:1840387_8",
"text": "lations, is a critical ecological process (Ims and Yoccoz 1997). It can maintain genetic diversity, rescue declining populations, and re-establish extirpated populations. Sufficient movement of individuals between isolated, extinction-prone populations can allow an entire network of populations to persist via metapopulation dynamics (Hanski 1991). As areas of natural habitat are reduced in size and continuity by human activities, the degree to which the remaining fragments are functionally linked by dispersal becomes increasingly important. The strength of those linkages is determined largely by a property known as “connectivity”, which, despite its intuitive appeal, is inconsistently defined. At one extreme, metapopulation ecologists argue for a habitat patch-level definition, while at the other, landscape ecologists insist that connectivity is a landscape-scale property (Merriam 1984; Taylor et al. 1993; Tischendorf and Fahrig 2000; Moilanen and Hanski 2001; Tischendorf 2001a; Moilanen and Nieminen 2002). Differences in perspective notwithstanding, theoreticians do agree that connectivity has undeniable effects on many population processes (Wiens 1997; Moilanen and Hanski 2001). It is therefore desirable to quantify connectivity and use these measurements as a basis for decision making. Currently, many reserve design algorithms factor in some measure of connectivity when weighing alternative plans (Siitonen et al. 2002, 2003; Singleton et al. 2002; Cabeza 2003). Consideration of connectivity during the reserve design process could highlight situations where it really matters. For example, alternative reserve designs that are similar in other factors such as area, habitat quality, and cost may differ greatly in connectivity (Siitonen et al. 2002). This matters because the low-connectivity scenarios may not be able to support viable populations of certain species over long periods of time. Analyses of this sort could also redirect some project resources towards improving the connectivity of a reserve network by building movement corridors or acquiring small, otherwise undesirable habitat patches that act as links between larger patches (Keitt et al. 1997). Reserve designs could therefore include the demographic and genetic benefits of increased connectivity without substantially increasing the cost of the project (eg Siitonen et al. 2002). If connectivity is to serve as a guide, at least in part, for conservation decision-making, it clearly matters how it is measured. Unfortunately, the ecological literature is awash with different connectivity metrics. How are land managers and decision makers to efficiently choose between these alternatives, when ecologists cannot even agree on a basic definition of connectivity, let alone how it is best measured? Aside from the theoretical perspectives to which they are tied, these metrics differ in two important regards: the type of data they require and the level of detail they provide. Here, we attempt to cut through some of the confusion surrounding connectivity by developing a classification scheme based on these key differences between metrics. 529",
"title": ""
},
{
"docid": "neg:1840387_9",
"text": "Shor and Grover demonstrated that a quantum computer can outperform any classical computer in factoring numbers and in searching a database by exploiting the parallelism of quantum mechanics. Whereas Shor's algorithm requires both superposition and entanglement of a many-particle system, the superposition of single-particle quantum states is sufficient for Grover's algorithm. Recently, the latter has been successfully implemented using Rydberg atoms. Here we propose an implementation of Grover's algorithm that uses molecular magnets, which are solid-state systems with a large spin; their spin eigenstates make them natural candidates for single-particle systems. We show theoretically that molecular magnets can be used to build dense and efficient memory devices based on the Grover algorithm. In particular, one single crystal can serve as a storage unit of a dynamic random access memory device. Fast electron spin resonance pulses can be used to decode and read out stored numbers of up to 105, with access times as short as 10-10 seconds. We show that our proposal should be feasible using the molecular magnets Fe8 and Mn12.",
"title": ""
},
{
"docid": "neg:1840387_10",
"text": "This paper presents a stereo matching approach for a novel multi-perspective panoramic stereo vision system, making use of asynchronous and non-simultaneous stereo imaging towards real-time 3D 360° vision. The method is designed for events representing the scenes visual contrast as a sparse visual code allowing the stereo reconstruction of high resolution panoramic views. We propose a novel cost measure for the stereo matching, which makes use of a similarity measure based on event distributions. Thus, the robustness to variations in event occurrences was increased. An evaluation of the proposed stereo method is presented using distance estimation of panoramic stereo views and ground truth data. Furthermore, our approach is compared to standard stereo methods applied on event-data. Results show that we obtain 3D reconstructions of 1024 × 3600 round views and outperform depth reconstruction accuracy of state-of-the-art methods on event data.",
"title": ""
},
{
"docid": "neg:1840387_11",
"text": "This paper provides a description of the crowdfunding sector, considering investment-based crowdfunding platforms as well as platforms in which funders do not obtain monetary payments. It lays out key features of this quickly developing sector and explores the economic forces at play that can explain the design of these platforms. In particular, it elaborates on cross-group and within-group external e¤ects and asymmetric information on crowdfunding platforms. Keywords: Crowdfunding, Platform markets, Network e¤ects, Asymmetric information, P2P lending JEL-Classi
cation: L13, D62, G24 Université catholique de Louvain, CORE and Louvain School of Management, and CESifo yRITM, University of Paris Sud and Digital Society Institute zUniversity of Mannheim, Mannheim Centre for Competition and Innovation (MaCCI), and CERRE. Email: martin.peitz@gmail.com",
"title": ""
},
{
"docid": "neg:1840387_12",
"text": "Detecting representative frames in videos based on human actions is quite challenging because of the combined factors of human pose in action and the background. This paper addresses this problem and formulates the key frame detection as one of finding the video frames that optimally maximally contribute to differentiating the underlying action category from all other categories. To this end, we introduce a deep two-stream ConvNet for key frame detection in videos that learns to directly predict the location of key frames. Our key idea is to automatically generate labeled data for the CNN learning using a supervised linear discriminant method. While the training data is generated taking many different human action videos into account, the trained CNN can predict the importance of frames from a single video. We specify a new ConvNet framework, consisting of a summarizer and discriminator. The summarizer is a two-stream ConvNet aimed at, first, capturing the appearance and motion features of video frames, and then encoding the obtained appearance and motion features for video representation. The discriminator is a fitting function aimed at distinguishing between the key frames and others in the video. We conduct experiments on a challenging human action dataset UCF101 and show that our method can detect key frames with high accuracy.",
"title": ""
},
{
"docid": "neg:1840387_13",
"text": "The demand for computing resources in the university is on the increase on daily basis and the traditional method of acquiring computing resources may no longer meet up with the present demand. This is as a result of high level of researches being carried out by the universities. The 21st century universities are now seen as the centre and base of education, research and development for the society. The university community now has to deal with a large number of people including staff, students and researchers working together on voluminous large amount of data. This actually requires very high computing resources that can only be gotten easily through cloud computing. In this paper, we have taken a close look at exploring the benefits of cloud computing and study the adoption and usage of cloud services in the University Enterprise. We establish a theoretical background to cloud computing and its associated services including rigorous analysis of the latest research on Cloud Computing as an alternative to IT provision, management and security and discuss the benefits of cloud computing in the university enterprise. We also assess the trend of adoption and usage of cloud services in the university enterprise.",
"title": ""
},
{
"docid": "neg:1840387_14",
"text": "This article proposes integrating the insights generated by framing, priming, and agenda-setting research through a systematic effort to conceptualize and understand their larger implications for political power and democracy. The organizing concept is bias, that curiously undertheorized staple of public discourse about the media. After showing how agenda setting, framing and priming fit together as tools of power, the article connects them to explicit definitions of news slant and the related but distinct phenomenon of bias. The article suggests improved measures of slant and bias. Properly defined and measured, slant and bias provide insight into how the media influence the distribution of power: who gets what, when, and how. Content analysis should be informed by explicit theory linking patterns of framing in the media text to predictable priming and agenda-setting effects on audiences. When unmoored by such underlying theory, measures and conclusions of media bias are suspect.",
"title": ""
},
{
"docid": "neg:1840387_15",
"text": "This report describes and analyzes the MD6 hash function and is part of our submission package for MD6 as an entry in the NIST SHA-3 hash function competition. Significant features of MD6 include: • Accepts input messages of any length up to 2 − 1 bits, and produces message digests of any desired size from 1 to 512 bits, inclusive, including the SHA-3 required sizes of 224, 256, 384, and 512 bits. • Security—MD6 is by design very conservative. We aim for provable security whenever possible; we provide reduction proofs for the security of the MD6 mode of operation, and prove that standard differential attacks against the compression function are less efficient than birthday attacks for finding collisions. We also show that when used as a MAC within NIST recommendedations, the keyed version of MD6 is not vulnerable to linear cryptanalysis. The compression function and the mode of operation are each shown to be indifferentiable from a random oracle under reasonable assumptions. • MD6 has good efficiency: 22.4–44.1M bytes/second on a 2.4GHz Core 2 Duo laptop with 32-bit code compiled with Microsoft Visual Studio 2005 for digest sizes in the range 160–512 bits. When compiled for 64-bit operation, it runs at 61.8–120.8M bytes/second, compiled with MS VS, running on a 3.0GHz E6850 Core Duo processor. • MD6 works extremely well for multicore and parallel processors; we have demonstrated hash rates of over 1GB/second on one 16-core system, and over 427MB/sec on an 8-core system, both for 256-bit digests. We have also demonstrated MD6 hashing rates of 375 MB/second on a typical desktop GPU (graphics processing unit) card. We also show that MD6 runs very well on special-purpose hardware. • MD6 uses a single compression function, no matter what the desired digest size, to map input data blocks of 4096 bits to output blocks of 1024 bits— a fourfold reduction. (The number of rounds does, however, increase for larger digest sizes.) The compression function has auxiliary inputs: a “key” (K), a “number of rounds” (r), a “control word” (V ), and a “unique ID” word (U). • The standard mode of operation is tree-based: the data enters at the leaves of a 4-ary tree, and the hash value is computed at the root. See Figure 2.1. This standard mode of operation is highly parallelizable. 1http://www.csrc.nist.gov/pki/HashWorkshop/index.html",
"title": ""
},
{
"docid": "neg:1840387_16",
"text": "One of the greatest challenges food research is facing in this century lies in maintaining sustainable food production and at the same time delivering high quality food products with an added functionality to prevent life-style related diseases such as, cancer, obesity, diabetes, heart disease, stroke. Functional foods that contain bioactive components may provide desirable health benefits beyond basic nutrition and play important roles in the prevention of life-style related diseases. Polyphenols and carotenoids are plant secondary metabolites which are well recognized as natural antioxidants linked to the reduction of the development and progression of life-style related diseases. This chapter focuses on healthpromoting food ingredients (polyphenols and carotenoids), food structure and functionality, and bioavailability of these bioactive ingredients, with examples on their commercial applications, namely on functional foods. Thereafter, in order to support successful development of health-promoting food ingredients, this chapter contributes to an understanding of the relationship between food structures, ingredient functionality, in relation to the breakdown of food structures in the gastrointestinal tract and its impact on the bioavailability of bioactive ingredients. The overview on food processing techniques and the processing of functional foods given here will elaborate novel delivery systems for functional food ingredients and their applications in food. Finally, this chapter concludes with microencapsulation techniques and examples of encapsulation of polyphenols and carotenoids; the physical structure of microencapsulated food ingredients and their impacts on food sensorial properties; yielding an outline on the controlled release of encapsulated bioactive compounds in food products.",
"title": ""
},
{
"docid": "neg:1840387_17",
"text": "Ensuring that autonomous systems work ethically is both complex and difficult. However, the idea of having an additional ‘governor’ that assesses options the system has, and prunes them to select the most ethical choices is well understood. Recent work has produced such a governor consisting of a ‘consequence engine’ that assesses the likely future outcomes of actions then applies a Safety/Ethical logic to select actions. Although this is appealing, it is impossible to be certain that the most ethical options are actually taken. In this paper we extend and apply a well-known agent verification approach to our consequence engine, allowing us to verify the correctness of its ethical decision-making.",
"title": ""
},
{
"docid": "neg:1840387_18",
"text": "Supervisory Control and Data Acquisition(SCADA) systems are deeply ingrained in the fabric of critical infrastructure sectors. These computerized real-time process control systems, over geographically dispersed continuous distribution operations, are increasingly subject to serious damage and disruption by cyber means due to their standardization and connectivity to other networks. However, SCADA systems generally have little protection from the escalating cyber threats. In order to understand the potential danger and to protect SCADA systems, in this paper, we highlight their difference from standard IT systems and present a set of security property goals. Furthermore, we focus on systematically identifying and classifying likely cyber attacks including cyber-induced cyber-physical attack son SCADA systems. Determined by the impact on control performance of SCADA systems, the attack categorization criteria highlights commonalities and important features of such attacks that define unique challenges posed to securing SCADA systems versus traditional Information Technology(IT) systems.",
"title": ""
}
] |
1840388 | Cover Tree Bayesian Reinforcement Learning | [
{
"docid": "pos:1840388_0",
"text": "We present a tree data structure for fast nearest neighbor operations in general <i>n</i>-point metric spaces (where the data set consists of <i>n</i> points). The data structure requires <i>O</i>(<i>n</i>) space <i>regardless</i> of the metric's structure yet maintains all performance properties of a navigating net (Krauthgamer & Lee, 2004b). If the point set has a bounded expansion constant <i>c</i>, which is a measure of the intrinsic dimensionality, as defined in (Karger & Ruhl, 2002), the cover tree data structure can be constructed in <i>O</i> (<i>c</i><sup>6</sup><i>n</i> log <i>n</i>) time. Furthermore, nearest neighbor queries require time only logarithmic in <i>n</i>, in particular <i>O</i> (<i>c</i><sup>12</sup> log <i>n</i>) time. Our experimental results show speedups over the brute force search varying between one and several orders of magnitude on natural machine learning datasets.",
"title": ""
}
] | [
{
"docid": "neg:1840388_0",
"text": "Convolutional neural networks (CNNs) with their ability to learn useful spatial features have revolutionized computer vision. The network topology of CNNs exploits the spatial relationship among the pixels in an image and this is one of the reasons for their success. In other domains deep learning has been less successful because it is not clear how the structure of non-spatial data can constrain network topology. Here, we show how multivariate time series can be interpreted as space-time pictures, thus expanding the applicability of the tricks-of-the-trade for CNNs to this important domain. We demonstrate that our model beats more traditional state-of-the-art models at predicting price development on the European Power Exchange (EPEX). Furthermore, we find that the features discovered by CNNs on raw data beat the features that were hand-designed by an expert.",
"title": ""
},
{
"docid": "neg:1840388_1",
"text": "In this paper, a novel converter, named as negative-output KY buck-boost converter, is presented herein, which has no bilinear characteristics. First of all, the basic operating principle of the proposed converter is illustrated in detail, and secondly some simulated and experimental results are provided to verify its effectiveness.",
"title": ""
},
{
"docid": "neg:1840388_2",
"text": "Online learning represents a family of machine learning methods, where a learner attempts to tackle some predictive (or any type of decision-making) task by learning from a sequence of data instances one by one at each time. The goal of online learning is to maximize the accuracy/correctness for the sequence of predictions/decisions made by the online learner given the knowledge of correct answers to previous prediction/learning tasks and possibly additional information. This is in contrast to traditional batch or offline machine learning methods that are often designed to learn a model from the entire training data set at once. Online learning has become a promising technique for learning from continuous streams of data in many real-world applications. This survey aims to provide a comprehensive survey of the online machine learning literature through a systematic review of basic ideas and key principles and a proper categorization of different algorithms and techniques. Generally speaking, according to the types of learning tasks and the forms of feedback information, the existing online learning works can be classified into three major categories: (i) online supervised learning where full feedback information is always available, (ii) online learning with limited feedback, and (iii) online unsupervised learning where no feedback is available. Due to space limitation, the survey will be mainly focused on the first category, but also briefly cover some basics of the other two categories. Finally, we also discuss some open issues and attempt to shed light on potential future research directions in this field.",
"title": ""
},
{
"docid": "neg:1840388_3",
"text": "This paper aims to improve the feature learning in Convolutional Networks (Convnet) by capturing the structure of objects. A new sparsity function is imposed on the extracted featuremap to capture the structure and shape of the learned object, extracting interpretable features to improve the prediction performance. The proposed algorithm is based on organizing the activation within and across featuremap by constraining the node activities through `2 and `1 normalization in a structured form.",
"title": ""
},
{
"docid": "neg:1840388_4",
"text": "We tackle the problem of predicting the future popularity level of micro-reviews, focusing on Foursquare tips, whose high degree of informality and briefness offer extra difficulties to the design of effective popularity prediction methods. Such predictions can greatly benefit the future design of content filtering and recommendation methods. Towards our goal, we first propose a rich set of features related to the user who posted the tip, the venue where it was posted, and the tip’s content to capture factors that may impact popularity of a tip. We evaluate different regression and classification based models using this rich set of proposed features as predictors in various scenarios. As fas as we know, this is the first work to investigate the predictability of micro-review popularity (or helpfulness) exploiting spatial, temporal, topical and, social aspects that are rarely exploited conjointly in this domain. © 2015 Published by Elsevier Inc.",
"title": ""
},
{
"docid": "neg:1840388_5",
"text": "INTRODUCTION\nVulvar and vaginal atrophy (VVA) affects up to two thirds of postmenopausal women, but most symptomatic women do not receive prescription therapy.\n\n\nAIM\nTo evaluate postmenopausal women's perceptions of VVA and treatment options for symptoms in the Women's EMPOWER survey.\n\n\nMETHODS\nThe Rose Research firm conducted an internet survey of female consumers provided by Lightspeed Global Market Insite. Women at least 45 years of age who reported symptoms of VVA and residing in the United States were recruited.\n\n\nMAIN OUTCOME MEASURES\nSurvey results were compiled and analyzed by all women and by treatment subgroups.\n\n\nRESULTS\nRespondents (N = 1,858) had a median age of 58 years (range = 45-90). Only 7% currently used prescribed VVA therapies (local estrogen therapies or oral selective estrogen receptor modulators), whereas 18% were former users of prescribed VVA therapies, 25% used over-the-counter treatments, and 50% had never used any treatment. Many women (81%) were not aware of VVA or that it is a medical condition. Most never users (72%) had never discussed their symptoms with a health care professional (HCP). The main reason for women not to discuss their symptoms with an HCP was that they believed that VVA was just a natural part of aging and something to live with. When women spoke to an HCP about their symptoms, most (85%) initiated the discussion. Preferred sources of information were written material from the HCP's office (46%) or questionnaires to fill out before seeing the HCP (41%).The most negative attributes of hormonal products were perceived risk of systemic absorption, messiness of local creams, and the need to reuse an applicator. Overall, HCPs only recommended vaginal estrogen therapy to 23% and oral hormone therapies to 18% of women. When using vaginal estrogen therapy, less than half of women adhered to and complied with posology; only 33% to 51% of women were very to extremely satisfied with their efficacy.\n\n\nCONCLUSION\nThe Women's EMPOWER survey showed that VVA continues to be an under-recognized and under-treated condition, despite recent educational initiatives. A disconnect in education, communication, and information between HCPs and their menopausal patients remains prevalent. Kingsberg S, Krychman M, Graham S, et al. The Women's EMPOWER Survey: Identifying Women's Perceptions on Vulvar and Vaginal Atrophy and Its Treatment. J Sex Med 2017;14:413-424.",
"title": ""
},
{
"docid": "neg:1840388_6",
"text": "Genetic Algorithms and Evolution Strategies represent two of the three major Evolutionary Algorithms. This paper examines the history, theory and mathematical background, applications, and the current direction of both Genetic Algorithms and Evolution Strategies.",
"title": ""
},
{
"docid": "neg:1840388_7",
"text": "High-rate data communication over a multipath wireless channel often requires that the channel response be known at the receiver. Training-based methods, which probe the channel in time, frequency, and space with known signals and reconstruct the channel response from the output signals, are most commonly used to accomplish this task. Traditional training-based channel estimation methods, typically comprising linear reconstruction techniques, are known to be optimal for rich multipath channels. However, physical arguments and growing experimental evidence suggest that many wireless channels encountered in practice tend to exhibit a sparse multipath structure that gets pronounced as the signal space dimension gets large (e.g., due to large bandwidth or large number of antennas). In this paper, we formalize the notion of multipath sparsity and present a new approach to estimating sparse (or effectively sparse) multipath channels that is based on some of the recent advances in the theory of compressed sensing. In particular, it is shown in the paper that the proposed approach, which is termed as compressed channel sensing (CCS), can potentially achieve a target reconstruction error using far less energy and, in many instances, latency and bandwidth than that dictated by the traditional least-squares-based training methods.",
"title": ""
},
{
"docid": "neg:1840388_8",
"text": "ISSN: 1049-4820 (Print) 1744-5191 (Online) Journal homepage: http://www.tandfonline.com/loi/nile20 Gamification and student motivation Patrick Buckley & Elaine Doyle To cite this article: Patrick Buckley & Elaine Doyle (2016) Gamification and student motivation, Interactive Learning Environments, 24:6, 1162-1175, DOI: 10.1080/10494820.2014.964263 To link to this article: https://doi.org/10.1080/10494820.2014.964263",
"title": ""
},
{
"docid": "neg:1840388_9",
"text": "IEEE 802.11 WLANs are a very important technology to provide high speed wireless Internet access. Especially at airports, university campuses or in city centers, WLAN coverage is becoming ubiquitous leading to a deployment of hundreds or thousands of Access Points (AP). Managing and configuring such large WLAN deployments is a challenge. Current WLAN management protocols such as CAPWAP are hard to extend with new functionality. In this paper, we present CloudMAC, a novel architecture for enterprise or carrier grade WLAN systems. By partially offloading the MAC layer processing to virtual machines provided by cloud services and by integrating our architecture with OpenFlow, a software defined networking approach, we achieve a new level of flexibility and reconfigurability. In Cloud-MAC APs just forward MAC frames between virtual APs and IEEE 802.11 stations. The processing of MAC layer frames as well as the creation of management frames is handled at the virtual APs while the binding between the virtual APs and the physical APs is managed using OpenFlow. The testbed evaluation shows that CloudMAC achieves similar performance as normal WLANs, but allows novel services to be implemented easily in high level programming languages. The paper presents a case study which shows that dynamically switching off APs to save energy can be performed seamlessly with CloudMAC, while a traditional WLAN architecture causes large interruptions for users.",
"title": ""
},
{
"docid": "neg:1840388_10",
"text": "We address the use of three-dimensional facial shape information for human face identification. We propose a new method to represent faces as 3D registered point clouds. Fine registration of facial surfaces is done by first automatically finding important facial landmarks and then, establishing a dense correspondence between points on the facial surface with the help of a 3D face template-aided thin plate spline algorithm. After the registration of facial surfaces, similarity between two faces is defined as a discrete approximation of the volume difference between facial surfaces. Experiments done on the 3D RMA dataset show that the proposed algorithm performs as good as the point signature method, and it is statistically superior to the point distribution model-based method and the 2D depth imagery technique. In terms of computational complexity, the proposed algorithm is faster than the point signature method.",
"title": ""
},
{
"docid": "neg:1840388_11",
"text": "The macaque monkey ventral intraparietal area (VIP) contains neurons with aligned visual-tactile receptive fields anchored to the face and upper body. Our previous fMRI studies using standard head coils found a human parietal face area (VIP+ complex; putative macaque VIP homologue) containing superimposed topological maps of the face and near-face visual space. Here, we construct high signal-to-noise surface coils and used phase-encoded air puffs and looming stimuli to map topological organization of the parietal face area at higher resolution. This area is consistently identified as a region extending between the superior postcentral sulcus and the upper bank of the anterior intraparietal sulcus (IPS), avoiding the fundus of IPS. Using smaller voxel sizes, our surface coils picked up strong fMRI signals in response to tactile and visual stimuli. By analyzing tactile and visual maps in our current and previous studies, we constructed a set of topological models illustrating commonalities and differences in map organization across subjects. The most consistent topological feature of the VIP+ complex is a central-anterior upper face (and upper visual field) representation adjoined by lower face (and lower visual field) representations ventrally (laterally) and/or dorsally (medially), potentially forming two subdivisions VIPv (ventral) and VIPd (dorsal). The lower visual field representations typically extend laterally into the anterior IPS to adjoin human area AIP, and medially to overlap with the parietal body areas at the superior parietal ridge. Significant individual variations are then illustrated to provide an accurate and comprehensive view of the topological organization of the parietal face area.",
"title": ""
},
{
"docid": "neg:1840388_12",
"text": "Epistemic planning can be used for decision making in multi-agent situations with distributed knowledge and capabilities. Dynamic Epistemic Logic (DEL) has been shown to provide a very natural and expressive framework for epistemic planning. In this paper, we aim to give an accessible introduction to DEL-based epistemic planning. The paper starts with the most classical framework for planning, STRIPS, and then moves towards epistemic planning in a number of smaller steps, where each step is motivated by the need to be able to model more complex planning scenarios.",
"title": ""
},
{
"docid": "neg:1840388_13",
"text": "Based on EuroNCAP regulations the number of autonomous emergency braking systems for pedestrians (AEB-P) will increase over the next years. According to accident research a considerable amount of severe pedestrian accidents happen at artificial lighting, twilight or total darkness conditions. Because radar sensors are very robust in these situations, they will play an important role for future AEB-P systems. To assess and evaluate systems a pedestrian dummy with reflection characteristics as close as possible to real humans is indispensable. As an extension to existing measurements in literature this paper addresses open issues like the influence of different positions of the limbs or different clothing for both relevant automotive frequency bands. Additionally suggestions and requirements for specification of pedestrian dummies based on results of RCS measurements of humans and first experimental developed dummies are given.",
"title": ""
},
{
"docid": "neg:1840388_14",
"text": "Breast cancer is the most common form of cancer among women worldwide. Ultrasound imaging is one of the most frequently used diagnostic tools to detect and classify abnormalities of the breast. Recently, computer-aided diagnosis (CAD) systems using ultrasound images have been developed to help radiologists to increase diagnosis accuracy. However, accurate ultrasound image segmentation remains a challenging problem due to various ultrasound artifacts. In this paper, we investigate approaches developed for breast ultrasound (BUS) image segmentation. In this paper, we reviewed the literature on the segmentation of BUS images according to the techniques adopted, especially over the past 10 years. By dividing into seven classes (i.e., thresholding-based, clustering-based, watershed-based, graph-based, active contour model, Markov random field and neural network), we have introduced corresponding techniques and representative papers accordingly. We have summarized and compared many techniques on BUS image segmentation and found that all these techniques have their own pros and cons. However, BUS image segmentation is still an open and challenging problem due to various ultrasound artifacts introduced in the process of imaging, including high speckle noise, low contrast, blurry boundaries, low signal-to-noise ratio and intensity inhomogeneity To the best of our knowledge, this is the first comprehensive review of the approaches developed for segmentation of BUS images. With most techniques involved, this paper will be useful and helpful for researchers working on segmentation of ultrasound images, and for BUS CAD system developers.",
"title": ""
},
{
"docid": "neg:1840388_15",
"text": "Recent advances in RST discourse parsing have focused on two modeling paradigms: (a) high order parsers which jointly predict the tree structure of the discourse and the relations it encodes; or (b) lineartime parsers which are efficient but mostly based on local features. In this work, we propose a linear-time parser with a novel way of representing discourse constituents based on neural networks which takes into account global contextual information and is able to capture long-distance dependencies. Experimental results show that our parser obtains state-of-the art performance on benchmark datasets, while being efficient (with time complexity linear in the number of sentences in the document) and requiring minimal feature engineering.",
"title": ""
},
{
"docid": "neg:1840388_16",
"text": "Recent systems for natural language understanding are strong at overcoming linguistic variability for lookup style reasoning. Yet, their accuracy drops dramatically as the number of reasoning steps increases. We present the first formal framework to study such empirical observations, addressing the ambiguity, redundancy, incompleteness, and inaccuracy that the use of language introduces when representing a hidden conceptual space. Our formal model uses two interrelated spaces: a conceptual meaning space that is unambiguous and complete but hidden, and a linguistic symbol space that captures a noisy grounding of the meaning space in the symbols or words of a language. We apply this framework to study the connectivity problem in undirected graphs---a core reasoning problem that forms the basis for more complex multi-hop reasoning. We show that it is indeed possible to construct a high-quality algorithm for detecting connectivity in the (latent) meaning graph, based on an observed noisy symbol graph, as long as the noise is below our quantified noise level and only a few hops are needed. On the other hand, we also prove an impossibility result: if a query requires a large number (specifically, logarithmic in the size of the meaning graph) of hops, no reasoning system operating over the symbol graph is likely to recover any useful property of the meaning graph. This highlights a fundamental barrier for a class of reasoning problems and systems, and suggests the need to limit the distance between the two spaces, rather than investing in multi-hop reasoning with\"many\"hops.",
"title": ""
},
{
"docid": "neg:1840388_17",
"text": "One of the most important issue that must be addressed in designing communication protocols for wireless sensor networks (WSN) is how to save sensor node energy while meeting the needs of applications. Recent researches have led to new protocols specifically designed for sensor networks where energy awareness is an essential consideration. Internet of Things (IoT) is an innovative ICT paradigm where a number of intelligent devices connected to Internet are involved in sharing information and making collaborative decision. Integration of sensing and actuation systems, connected to the Internet, means integration of all forms of energy consuming devices such as power outlets, bulbs, air conditioner, etc. Sometimes the system can communicate with the utility supply company and this led to achieve a balance between power generation and energy usage or in general is likely to optimize energy consumption as a whole. In this paper some emerging trends and challenges are identified to enable energy-efficient communications in Internet of Things architectures and between smart devices. The way devices communicate is analyzed in order to reduce energy consumption and prolong system lifetime. Devices equipped with WiFi and RF interfaces are analyzed under different scenarios by setting different communication parameters, such as data size, in order to evaluate the best device configuration and the longest lifetime of devices.",
"title": ""
},
{
"docid": "neg:1840388_18",
"text": "As organizations increase their dependence on database systems for daily business, they become more vulnerable to security breaches even as they gain productivity and efficiency advantages. A truly comprehensive approach for data protection must include mechanisms for enforcing access control policies based on data contents, subject qualifications and characteristics. The database security community has developed a number of different techniques and approaches to assure data confidentiality, integrity, and availability. In this paper, we survey the most relevant concepts underlying the notion of access control policies for database security. We review the key access control models, namely, the discretionary and mandatory access control models and the role-based access control (RBAC)",
"title": ""
}
] |
1840389 | Name usage pattern in the synonym ambiguity problem in bibliographic data | [
{
"docid": "pos:1840389_0",
"text": "Name disambiguation can occur when one is seeking a list of publications of an author who has used different name variations and when there are multiple other authors with the same name. We present an efficient integrative machine learning framework for solving the name disambiguation problem: a blocking method retrieves candidate classes of authors with similar names and a clustering method, DBSCAN, clusters papers by author. The distance metric between papers used in DBSCAN is calculated by an online active selection support vector machine algorithm (LASVM), yielding a simpler model, lower test errors and faster prediction time than a standard SVM. We prove that by recasting transitivity as density reachability in DBSCAN, transitivity is guaranteed for core points. For evaluation, we manually annotated 3,355 papers yielding 490 authors and achieved 90.6% pairwise-F1 metric. For scalability, authors in the entire CiteSeer dataset, over 700,000 papers, were readily disambiguated.",
"title": ""
},
{
"docid": "pos:1840389_1",
"text": "The large number of potential applications from bridging web data with knowledge bases have led to an increase in the entity linking research. Entity linking is the task to link entity mentions in text with their corresponding entities in a knowledge base. Potential applications include information extraction, information retrieval, and knowledge base population. However, this task is challenging due to name variations and entity ambiguity. In this survey, we present a thorough overview and analysis of the main approaches to entity linking, and discuss various applications, the evaluation of entity linking systems, and future directions.",
"title": ""
},
{
"docid": "pos:1840389_2",
"text": "Background: We recently described “Author-ity,” a model for estimating the probability that two articles in MEDLINE, sharing the same author name, were written by the same individual. Features include shared title words, journal name, coauthors, medical subject headings, language, affiliations, and author name features (middle initial, suffix, and prevalence in MEDLINE). Here we test the hypothesis that the Author-ity model will suffice to disambiguate author names for the vast majority of articles in MEDLINE. Methods: Enhancements include: (a) incorporating first names and their variants, email addresses, and correlations between specific last names and affiliation words; (b) new methods of generating large unbiased training sets; (c) new methods for estimating the prior probability; (d) a weighted least squares algorithm for correcting transitivity violations; and (e) a maximum likelihood based agglomerative algorithm for computing clusters of articles that represent inferred author-individuals. Results: Pairwise comparisons were computed for all author names on all 15.3 million articles in MEDLINE (2006 baseline), that share last name and first initial, to create Author-ity 2006, a database that has each name on each article assigned to one of 6.7 million inferred author-individual clusters. Recall is estimated at ∼98.8%. Lumping (putting two different individuals into the same cluster) affects ∼0.5% of clusters, whereas splitting (assigning articles written by the same individual to >1 cluster) affects ∼2% of articles. Impact: The Author-ity model can be applied generally to other bibliographic databases. Author name disambiguation allows information retrieval and data integration to become person-centered, not just document-centered, setting the stage for new data mining and social network tools that will facilitate the analysis of scholarly publishing and collaboration behavior. Availability: The Author-ity 2006 database is available for nonprofit academic research, and can be freely queried via http://arrowsmith.psych.uic.edu.",
"title": ""
},
{
"docid": "pos:1840389_3",
"text": "Author ambiguity mainly arises when several different authors express their names in the same way, generally known as the namesake problem, and also when the name of an author is expressed in many different ways, referred to as the heteronymous name problem. These author ambiguity problems have long been an obstacle to efficient information retrieval in digital libraries, causing incorrect identification of authors and impeding correct classification of their publications. It is a nontrivial task to distinguish those authors, especially when there is very limited information about them. In this paper, we propose a graph based approach to author name disambiguation, where a graph model is constructed using the co-author relations, and author ambiguity is resolved by graph operations such as vertex (or node) splitting and merging based on the co-authorship. In our framework, called a Graph Framework for Author Disambiguation (GFAD), the namesake problem is solved by splitting an author vertex involved in multiple cycles of co-authorship, and the heteronymous name problem is handled by merging multiple author vertices having similar names if those vertices are connected to a common vertex. Experiments were carried out with the real DBLP and Arnetminer collections and the performance of GFAD is compared with three representative unsupervised author name disambiguation systems. We confirm that GFAD shows better overall performance from the perspective of representative evaluation metrics. An additional contribution is that we released the refined DBLP collection to the public to facilitate organizing a performance benchmark for future systems on author disambiguation.",
"title": ""
},
{
"docid": "pos:1840389_4",
"text": "Author Name Disambiguation Neil R. Smalheiser and Vetle I. Torvik",
"title": ""
}
] | [
{
"docid": "neg:1840389_0",
"text": "This paper concerns the problem of pose estimation for an inertial-visual sensor. It is well known that IMU bias, and calibration errors between camera and IMU frames can impair the achievement of high-quality estimates through the fusion of visual and inertial data. The main contribution of this work is the design of new observers to estimate pose, IMU bias and camera-to-IMU rotation. The observers design relies on an extension of the so-called passive complementary filter on SO(3). Stability of the observers is established using Lyapunov functions under adequate observability conditions. Experimental results are presented to assess this approach.",
"title": ""
},
{
"docid": "neg:1840389_1",
"text": "Traditional execution environments deploy Address Space Layout Randomization (ASLR) to defend against memory corruption attacks. However, Intel Software Guard Extension (SGX), a new trusted execution environment designed to serve security-critical applications on the cloud, lacks such an effective, well-studied feature. In fact, we find that applying ASLR to SGX programs raises non-trivial issues beyond simple engineering for a number of reasons: 1) SGX is designed to defeat a stronger adversary than the traditional model, which requires the address space layout to be hidden from the kernel; 2) the limited memory uses in SGX programs present a new challenge in providing a sufficient degree of entropy; 3) remote attestation conflicts with the dynamic relocation required for ASLR; and 4) the SGX specification relies on known and fixed addresses for key data structures that cannot be randomized. This paper presents SGX-Shield, a new ASLR scheme designed for SGX environments. SGX-Shield is built on a secure in-enclave loader to secretly bootstrap the memory space layout with a finer-grained randomization. To be compatible with SGX hardware (e.g., remote attestation, fixed addresses), SGX-Shield is designed with a software-based data execution protection mechanism through an LLVM-based compiler. We implement SGX-Shield and thoroughly evaluate it on real SGX hardware. It shows a high degree of randomness in memory layouts and stops memory corruption attacks with a high probability. SGX-Shield shows 7.61% performance overhead in running common microbenchmarks and 2.25% overhead in running a more realistic workload of an HTTPS server.",
"title": ""
},
{
"docid": "neg:1840389_2",
"text": "Historical Chinese character recognition has been suffering from the problem of lacking sufficient labeled training samples. A transfer learning method based on Convolutional Neural Network (CNN) for historical Chinese character recognition is proposed in this paper. A CNN model L is trained by printed Chinese character samples in the source domain. The network structure and weights of model L are used to initialize another CNN model T, which is regarded as the feature extractor and classifier in the target domain. The model T is then fine-tuned by a few labeled historical or handwritten Chinese character samples, and used for final evaluation in the target domain. Several experiments regarding essential factors of the CNNbased transfer learning method are conducted, showing that the proposed method is effective.",
"title": ""
},
{
"docid": "neg:1840389_3",
"text": "Mobile phone sensing is an emerging area of interest for researchers as smart phones are becoming the core communication device in people's everyday lives. Sensor enabled mobile phones or smart phones are hovering to be at the center of a next revolution in social networks, green applications, global environmental monitoring, personal and community healthcare, sensor augmented gaming, virtual reality and smart transportation systems. More and more organizations and people are discovering how mobile phones can be used for social impact, including how to use mobile technology for environmental protection, sensing, and to leverage just-in-time information to make our movements and actions more environmentally friendly. In this paper we have described comprehensively all those systems which are using smart phones and mobile phone sensors for humans good will and better human phone interaction.",
"title": ""
},
{
"docid": "neg:1840389_4",
"text": "This article investigates whether, and how, an artificial intelligence (AI) system can be said to use visual, imagery-based representations in a way that is analogous to the use of visual mental imagery by people. In particular, this article aims to answer two fundamental questions about imagery-based AI systems. First, what might visual imagery look like in an AI system, in terms of the internal representations used by the system to store and reason about knowledge? Second, what kinds of intelligent tasks would an imagery-based AI system be able to accomplish? The first question is answered by providing a working definition of what constitutes an imagery-based knowledge representation, and the second question is answered through a literature survey of imagery-based AI systems that have been developed over the past several decades of AI research, spanning task domains of: 1) template-based visual search; 2) spatial and diagrammatic reasoning; 3) geometric analogies and matrix reasoning; 4) naive physics; and 5) commonsense reasoning for question answering. This article concludes by discussing three important open research questions in the study of visual-imagery-based AI systems-on evaluating system performance, learning imagery operators, and representing abstract concepts-and their implications for understanding human visual mental imagery.",
"title": ""
},
{
"docid": "neg:1840389_5",
"text": "Direct volume rendered images (DVRIs) have been widely used to reveal structures in volumetric data. However, DVRIs generated by many volume visualization techniques can only partially satisfy users' demands. In this paper, we propose a framework for editing DVRIs, which can also be used for interactive transfer function (TF) design. Our approach allows users to fuse multiple features in distinct DVRIs into a comprehensive one, to blend two DVRIs, and/or to delete features in a DVRI. We further present how these editing operations can generate smooth animations for focus + context visualization. Experimental results on some real volumetric data demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "neg:1840389_6",
"text": "The Smart information retrieval project emphasizes completely automatic approaches to the understanding and retrieval of large quantities of text. We continue our work in TREC 3, performing runs in the routing, ad-hoc, and foreign language environments. Our major focus is massive query expansion: adding from 300 to 530 terms to each query. These terms come from known relevant documents in the case of routing, and from just the top retrieved documents in the case of ad-hoc and Spanish. This approach improves e ectiveness from 7% to 25% in the various experiments. Other ad-hoc work extends our investigations into combining global similarities, giving an overall indication of how a document matches a query, with local similarities identifying a smaller part of the document which matches the query. Using an overlapping text window de nition of \\local\", we achieve a 16% improvement.",
"title": ""
},
{
"docid": "neg:1840389_7",
"text": "Mobile devices such as smart phones are becoming popular, and realtime access to multimedia data in different environments is getting easier. With properly equipped communication services, users can easily obtain the widely distributed videos, music, and documents they want. Because of its usability and capacity requirements, music is more popular than other types of multimedia data. Documents and videos are difficult to view on mobile phones' small screens, and videos' large data size results in high overhead for retrieval. But advanced compression techniques for music reduce the required storage space significantly and make the circulation of music data easier. This means that users can capture their favorite music directly from the Web without going to music stores. Accordingly, helping users find music they like in a large archive has become an attractive but challenging issue over the past few years.",
"title": ""
},
{
"docid": "neg:1840389_8",
"text": "Haptic devices for computers and video-game consoles aim to reproduce touch and to engage the user with `force feedback'. Although physical touch is often associated with proximity and intimacy, technologies of touch can reproduce such sensations over a distance, allowing intricate and detailed operations to be conducted through a network such as the Internet. The `virtual handshake' between Boston and London in 2002 is given as an example. This paper is therefore a critical investigation into some technologies of touch, leading to observations about the sociospatial framework in which this technological touching takes place. Haptic devices have now become routinely included with videogame consoles, and have started to be used in computer-aided design and manufacture, medical simulation, and even the cybersex industry. The implications of these new technologies are enormous, as they remould the human ^ computer interface from being primarily audiovisual to being more truly multisensory, and thereby enhance the sense of `presence' or immersion. But the main thrust of this paper is the development of ideas of presence over a large distance, and how this is enhanced by the sense of touch. By using the results of empirical research, including interviews with key figures in haptics research and engineering and personal experience of some of the haptic technologies available, I build up a picture of how `presence', `copresence', and `immersion', themselves paradoxically intangible properties, are guiding the design, marketing, and application of haptic devices, and the engendering and engineering of a set of feelings of interacting with virtual objects, across a range of distances. DOI:10.1068/d394t",
"title": ""
},
{
"docid": "neg:1840389_9",
"text": "Pretend play has recently been of great interest to researchers studying children's understanding of the mind. One reason for this interest is that pretense seems to require many of the same skills as mental state understanding, and these skills seem to emerge precociously in pretense. Pretend play might be a zone of proximal development, an activity in which children operate at a cognitive level higher than they operate at in nonpretense situations. Alternatively, pretend play might be fool's gold, in that it might appear to be more sophisticated than it really is. This paper first discusses what pretend play is. It then investigates whether pretend play is an area of advanced understanding with reference to 3 skills that are implicated in both pretend play and a theory of mind: the ability to represent one object as two things at once, the ability to see one object as representing another, and the ability to represent mental representations.",
"title": ""
},
{
"docid": "neg:1840389_10",
"text": "OBJECTIVE\nTo construct new size charts for all fetal limb bones.\n\n\nDESIGN\nA prospective, cross sectional study.\n\n\nSETTING\nUltrasound department of a large hospital.\n\n\nSAMPLE\n663 fetuses scanned once only for the purpose of the study at gestations between 12 and 42 weeks.\n\n\nMETHODS\nCentiles were estimated by combining separate regression models fitted to the mean and standard deviation, assuming that the measurements have a normal distribution at each gestational age.\n\n\nMAIN OUTCOME MEASURES\nDetermination of fetal limb lengths from 12 to 42 weeks of gestation.\n\n\nRESULTS\nSize charts for fetal bones (radius, ulna, humerus, tibia, fibula, femur and foot) are presented and compared with previously published data.\n\n\nCONCLUSIONS\nWe present new size charts for fetal limb bones which take into consideration the increasing variability with gestational age. We have compared these charts with other published data; the differences seen may be largely due to methodological differences. As standards for fetal head and abdominal measurements have been published from the same population, we suggest that the use of the new charts may facilitate prenatal diagnosis of skeletal dysplasias.",
"title": ""
},
{
"docid": "neg:1840389_11",
"text": "Working memory emerges in infancy and plays a privileged role in subsequent adaptive cognitive development. The neural networks important for the development of working memory during infancy remain unknown. We used diffusion tensor imaging (DTI) and deterministic fiber tracking to characterize the microstructure of white matter fiber bundles hypothesized to support working memory in 12-month-old infants (n=73). Here we show robust associations between infants' visuospatial working memory performance and microstructural characteristics of widespread white matter. Significant associations were found for white matter tracts that connect brain regions known to support working memory in older children and adults (genu, anterior and superior thalamic radiations, anterior cingulum, arcuate fasciculus, and the temporal-parietal segment). Better working memory scores were associated with higher FA and lower RD values in these selected white matter tracts. These tract-specific brain-behavior relationships accounted for a significant amount of individual variation above and beyond infants' gestational age and developmental level, as measured with the Mullen Scales of Early Learning. Working memory was not associated with global measures of brain volume, as expected, and few associations were found between working memory and control white matter tracts. To our knowledge, this study is among the first demonstrations of brain-behavior associations in infants using quantitative tractography. The ability to characterize subtle individual differences in infant brain development associated with complex cognitive functions holds promise for improving our understanding of normative development, biomarkers of risk, experience-dependent learning and neuro-cognitive periods of developmental plasticity.",
"title": ""
},
{
"docid": "neg:1840389_12",
"text": "Coffee is one of the most consumed beverages in the world and is the second largest traded commodity after petroleum. Due to the great demand of this product, large amounts of residues are generated in the coffee industry, which are toxic and represent serious environmental problems. Coffee silverskin and spent coffee grounds are the main coffee industry residues, obtained during the beans roasting, and the process to prepare “instant coffee”, respectively. Recently, some attempts have been made to use these residues for energy or value-added compounds production, as strategies to reduce their toxicity levels, while adding value to them. The present article provides an overview regarding coffee and its main industrial residues. In a first part, the composition of beans and their processing, as well as data about the coffee world production and exportation, are presented. In the sequence, the characteristics, chemical composition, and application of the main coffee industry residues are reviewed. Based on these data, it was concluded that coffee may be considered as one of the most valuable primary products in world trade, crucial to the economies and politics of many developing countries since its cultivation, processing, trading, transportation, and marketing provide employment for millions of people. As a consequence of this big market, the reuse of the main coffee industry residues is of large importance from environmental and economical viewpoints.",
"title": ""
},
{
"docid": "neg:1840389_13",
"text": "Conversational search and recommendation based on user-system dialogs exhibit major differences from conventional search and recommendation tasks in that 1) the user and system can interact for multiple semantically coherent rounds on a task through natural language dialog, and 2) it becomes possible for the system to understand the user needs or to help users clarify their needs by asking appropriate questions from the users directly. We believe the ability to ask questions so as to actively clarify the user needs is one of the most important advantages of conversational search and recommendation. In this paper, we propose and evaluate a unified conversational search/recommendation framework, in an attempt to make the research problem doable under a standard formalization. Specifically, we propose a System Ask -- User Respond (SAUR) paradigm for conversational search, define the major components of the paradigm, and design a unified implementation of the framework for product search and recommendation in e-commerce. To accomplish this, we propose the Multi-Memory Network (MMN) architecture, which can be trained based on large-scale collections of user reviews in e-commerce. The system is capable of asking aspect-based questions in the right order so as to understand the user needs, while (personalized) search is conducted during the conversation, and results are provided when the system feels confident. Experiments on real-world user purchasing data verified the advantages of conversational search and recommendation against conventional search and recommendation algorithms in terms of standard evaluation measures such as NDCG.",
"title": ""
},
{
"docid": "neg:1840389_14",
"text": "A simple, but comprehensive model of heat transfer and solidification of the continuous casting of steel slabs is described, including phenomena in the mold and spray regions. The model includes a one-dimensional (1-D) transient finite-difference calculation of heat conduction within the solidifying steel shell coupled with two-dimensional (2-D) steady-state heat conduction within the mold wall. The model features a detailed treatment of the interfacial gap between the shell and mold, including mass and momentum balances on the solid and liquid interfacial slag layers, and the effect of oscillation marks. The model predicts the shell thickness, temperature distributions in the mold and shell, thickness of the resolidified and liquid powder layers, heat-flux profiles down the wide and narrow faces, mold water temperature rise, ideal taper of the mold walls, and other related phenomena. The important effect of the nonuniform distribution of superheat is incorporated using the results from previous threedimensional (3-D) turbulent fluid-flow calculations within the liquid pool. The FORTRAN program CONID has a user-friendly interface and executes in less than 1 minute on a personal computer. Calibration of the model with several different experimental measurements on operating slab casters is presented along with several example applications. In particular, the model demonstrates that the increase in heat flux throughout the mold at higher casting speeds is caused by two combined effects: a thinner interfacial gap near the top of the mold and a thinner shell toward the bottom. This modeling tool can be applied to a wide range of practical problems in continuous casters.",
"title": ""
},
{
"docid": "neg:1840389_15",
"text": "This paper presents an open-source diarization toolkit which is mostly dedicated to speaker and developed by the LIUM. This toolkit includes hierarchical agglomerative clustering methods using well-known measures such as BIC and CLR. Two applications for which the toolkit has been used are presented: one is for broadcast news using the ESTER 2 data and the other is for telephone conversations using the MEDIA corpus.",
"title": ""
},
{
"docid": "neg:1840389_16",
"text": "Three concurrent public health problems coexist in the United States: endemic nonmedical use/misuse of opioid analgesics, epidemic overdose fatalities involving opioid analgesics, and endemic chronic pain in adults. These intertwined issues comprise an opioid crisis that has spurred the development of formulations of opioids with abuse-deterrent properties and label claims (OADP). To reduce abuse and misuse of prescription opioids, the federal Food and Drug Administration (FDA) has issued a formal Guidance to drug developers that delineates four categories of testing to generate data sufficient for a description of a product's abuse-deterrent properties, along with associated claims, in its Full Prescribing Information (FPI). This article reviews the epidemiology of the crisis as background for the development of OADP, summarizes the FDA Guidance for Industry regarding abuse-deterrent technologies, and provides an overview of some technologies that are currently employed or are under study for incorporation into OADP. Such technologies include physical and chemical barriers to abuse, combined formulations of opioid agonists and antagonists, inclusion of aversive agents, use of delivery systems that deter abuse, development of new molecular entities and prodrugs, and formulation of products that include some combination of these approaches. Opioids employing these novel technologies are one part of a comprehensive intervention strategy that can deter abuse of prescription opioid analgesics without creating barriers to the safe use of prescription opioids. The maximal public health contribution of OADP will probably occur only when all opioids have FDA-recognized abuse-deterrent properties and label claims.",
"title": ""
},
{
"docid": "neg:1840389_17",
"text": "The culture movement challenged the universality of the self-enhancement motive by proposing that the motive is pervasive in individualistic cultures (the West) but absent in collectivistic cultures (the East). The present research posited that Westerners and Easterners use different tactics to achieve the same goal: positive self-regard. Study 1 tested participants from differing cultural backgrounds (the United States vs. Japan), and Study 2 tested participants of differing self-construals (independent vs. interdependent). Americans and independents self-enhanced on individualistic attributes, whereas Japanese and interdependents self-enhanced on collectivistic attributes. Independents regarded individualistic attributes, whereas interdependents regarded collectivistic attributes, as personally important. Attribute importance mediated self-enhancement. Regardless of cultural background or self-construal, people self-enhance on personally important dimensions. Self-enhancement is a universal human motive.",
"title": ""
},
{
"docid": "neg:1840389_18",
"text": "Sometimes information systems fail or have operational and communication problems because designers may not have knowledge of the domain which is intended to be modeled. The same happens with systems for monitoring. Thus, an ontological model is needed to represent the organizational domain, which is intended to be monitored in order to develop an effective monitoring system. In this way, the purpose of the paper is to present a database based on Enterprise Ontology, which represents and specifies organizational transactions, aiming to be a repository of references or models of organizational transaction executions. Therefore, this database intends to be a generic risk profiles repository of organizational transactions for monitoring applications. Moreover, the Risk Profiles Repository presented in this paper is an innovative vision about continuous monitoring and has demonstrated to be a powerful tool for technological representations of organizational transactions and processes in compliance with the formalisms of a business ontological model.",
"title": ""
},
{
"docid": "neg:1840389_19",
"text": "With the development of information technologies, a great amount of semantic data is being generated on the web. Consequently, finding efficient ways of accessing this data becomes more and more important. Question answering is a good compromise between intuitiveness and expressivity, which has attracted the attention of researchers from different communities. In this paper, we propose an intelligent questing answering system for answering questions about concepts. It is based on ConceptRDF, which is an RDF presentation of the ConceptNet knowledge base. We use it as a knowledge base for answering questions. Our experimental results show that our approach is promising: it can answer questions about concepts at a satisfactory level of accuracy (reaches 94.5%).",
"title": ""
}
] |
1840390 | Context Contrasted Feature and Gated Multi-scale Aggregation for Scene Segmentation | [
{
"docid": "pos:1840390_0",
"text": "We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.",
"title": ""
},
{
"docid": "pos:1840390_1",
"text": "In this paper, we address the challenging task of scene segmentation. In order to capture the rich contextual dependencies over image regions, we propose Directed Acyclic Graph-Recurrent Neural Networks (DAG-RNN) to perform context aggregation over locally connected feature maps. More specifically, DAG-RNN is placed on top of pre-trained CNN (feature extractor) to embed context into local features so that their representative capability can be enhanced. In comparison with plain CNN (as in Fully Convolutional Networks-FCN), DAG-RNN is empirically found to be significantly more effective at aggregating context. Therefore, DAG-RNN demonstrates noticeably performance superiority over FCNs on scene segmentation. Besides, DAG-RNN entails dramatically less parameters as well as demands fewer computation operations, which makes DAG-RNN more favorable to be potentially applied on resource-constrained embedded devices. Meanwhile, the class occurrence frequencies are extremely imbalanced in scene segmentation, so we propose a novel class-weighted loss to train the segmentation network. The loss distributes reasonably higher attention weights to infrequent classes during network training, which is essential to boost their parsing performance. We evaluate our segmentation network on three challenging public scene segmentation benchmarks: Sift Flow, Pascal Context and COCO Stuff. On top of them, we achieve very impressive segmentation performance.",
"title": ""
}
] | [
{
"docid": "neg:1840390_0",
"text": "Recently, Fan-out Wafer Level Packaging (FOWLP) has been emerged as a promising technology to meet the ever increasing demands of the consumer electronic products. However, conventional FOWLP technology is limited to small size packages with single chip and Low to Mid-range Input/ Output (I/O) count due to die shift, warpage and RDL scaling issues. In this paper, we are presenting new RDL-First FOWLP approach which enables RDL scaling, overcomes the die shift, die protrusion and warpage challenges of conventional FOWLP, and extend the FOWLP technology for multi-chip and high I/O count package applications. RDL-First FOWLP process integration flow was demonstrated and fabricated test vehicles of large multi-chip package of 20 x 20 mm2 with 3 layers fine pitch RDL of LW/LS of 2μm/2μm and ~2400 package I/Os. Two Through Mold Interconnections (TMI) fabrication approaches (tall Cu pillar and vertical Cu wire) were evaluated on this platform for Package-on-Package (PoP) application. Backside RDL process on over molded Chip-to-Wafer (C2W) with carrier wafer was demonstrated for PoP applications. Laser de-bonding and sacrificial release layer material cleaning processes were established, and successfully used in the integration flow to fabricate the test vehicles. Assembly processes were optimized and successfully demonstrated large multi-chip RDL-first FOWLP package and PoP assembly on test boards. The large multi-chip FOWLP packages samples were passed JEDEC component level test Moisture Sensitivity Test Level 1 & Level 3 (MST L1 & MST L3) and 30 drops of board level drop test, and results will be presented.",
"title": ""
},
{
"docid": "neg:1840390_1",
"text": "This work presents an intelligent clothes search system based on domain knowledge, targeted at creating a virtual assistant to search clothes matched to fashion and userpsila expectation using all what have already been in real closet. All what garment essentials and fashion knowledge are from visual images. Users can simply submit the desired image keywords, such as elegant, sporty, casual, and so on, and occasion type, such as formal meeting, outdoor dating, and so on, to the system. And then the fashion style recognition module is activated to search the desired clothes within the personal garment database. Category learning with supervised neural networking is applied to cluster garments into different impression groups. The input stimuli of the neural network are three sensations, warmness, loudness, and softness, which are transformed from the physical garment essentials like major color tone, print type, and fabric material. The system aims to provide such an intelligent user-centric services system functions as a personal fashion advisor.",
"title": ""
},
{
"docid": "neg:1840390_2",
"text": "In future planetary exploration missions, rovers will be required to autonomously traverse challenging environments. Much of the previous work in robot motion planning cannot be successfully applied to the rough-terrain planning problem. A model-based planning method is presented in this paper that is computationally efficient and takes into account uncertainty in the robot model, terrain model, range sensor data, and rover pathfollowing errors. It is based on rapid path planning through the visible terrain map with a simple graph-search algorithm, followed by a physics-based evaluation of the path with a rover model. Simulation results are presented which demonstrate the method’s effectiveness.",
"title": ""
},
{
"docid": "neg:1840390_3",
"text": "In traditional video conferencing systems, it is impossible for users to have eye contact when looking at the conversation partner’s face displayed on the screen, due to the disparity between the locations of the camera and the screen. In this work, we implemented a gaze correction system that can automatically maintain eye contact by replacing the eyes of the user with the direct looking eyes (looking directly into the camera) captured in the initialization stage. Our real-time system has good robustness against different lighting conditions and head poses, and it provides visually convincing and natural results while relying only on a single webcam that can be positioned almost anywhere around the",
"title": ""
},
{
"docid": "neg:1840390_4",
"text": "The paper treats a modular program in which transfers of control between modules follow a semi-Markov process. Each module is failure-prone, and the different failure processes are assumed to be Poisson. The transfers of control between modules (interfaces) are themselves subject to failure. The overall failure process of the program is described, and an asymptotic Poisson process approximation is given for the case when the individual modules and interfaces are very reliable. A simple formula gives the failure rate of the overall program (and hence mean time between failures) under this limiting condition. The remainder of the paper treats the consequences of failures. Each failure results in a cost, represented by a random variable with a distribution typical of the type of failure. The quantity of interest is the total cost of running the program for a time t, and a simple approximating distribution is given for large t. The parameters of this limiting distribution are functions only of the means and variances of the underlying distributions, and are thus readily estimable. A calculation of program availability is given as an example of the cost process. There follows a brief discussion of methods of estimating the parameters of the model, with suggestions of areas in which it might be used.",
"title": ""
},
{
"docid": "neg:1840390_5",
"text": "Forecasting hourly spot prices for real-time electricity usage is a challenging task. This paper investigates a series of forecasting methods to 90 and 180 days of load data collection acquired from the Iberian Electricity Market (MIBEL). This dataset was used to train and test multiple forecast models. The Mean Absolute Percentage Error (MAPE) for the proposed Hybrid combination of Auto Regressive Integrated Moving Average (ARIMA) and Generalized Linear Model (GLM) was compared against ARIMA, GLM, Random forest (RF) and Support Vector Machines (SVM) methods. The results indicate significant improvement in MAPE and correlation co-efficient values for the proposed hybrid ARIMA-GLM method.",
"title": ""
},
{
"docid": "neg:1840390_6",
"text": "In 2013, the IEEE Future Directions Committee (FDC) formed an SDN work group to explore the amount of interest in forming an IEEE Software-Defined Network (SDN) Community. To this end, a Workshop on “SDN for Future Networks and Services” (SDN4FNS’13) was organized in Trento, Italy (Nov. 11-13 2013). Following the results of the workshop, in this paper, we have further analyzed scenarios, prior-art, state of standardization, and further discussed the main technical challenges and socio-economic aspects of SDN and virtualization in future networks and services. A number of research and development directions have been identified in this white paper, along with a comprehensive analysis of the technical feasibility and business availability of those fundamental technologies. A radical industry transition towards the “economy of information through softwarization” is expected in the near future. Keywords—Software-Defined Networks, SDN, Network Functions Virtualization, NFV, Virtualization, Edge, Programmability, Cloud Computing.",
"title": ""
},
{
"docid": "neg:1840390_7",
"text": "In the application of face recognition, eyeglasses could significantly degrade the recognition accuracy. A feasible method is to collect large-scale face images with eyeglasses for training deep learning methods. However, it is difficult to collect the images with and without glasses of the same identity, so that it is difficult to optimize the intra-variations caused by eyeglasses. In this paper, we propose to address this problem in a virtual synthesis manner. The high-fidelity face images with eyeglasses are synthesized based on 3D face model and 3D eyeglasses. Models based on deep learning methods are then trained on the synthesized eyeglass face dataset, achieving better performance than previous ones. Experiments on the real face database validate the effectiveness of our synthesized data for improving eyeglass face recognition performance.",
"title": ""
},
{
"docid": "neg:1840390_8",
"text": "The emergence of powerful portable computers, along with advances in wireless communication technologies, has made mobile computing a reality. Among the applications that are finding their way to the market of mobile computingthose that involve data managementhold a prominent position. In the past few years, there has been a tremendous surge of research in the area of data management in mobile computing. This research has produced interesting results in areas such as data dissemination over limited bandwith channels, location-dependent querying of data, and advanced interfaces for mobile computers. This paper is an effort to survey these techniques and to classify this research in a few broad areas.",
"title": ""
},
{
"docid": "neg:1840390_9",
"text": "Policy languages (such as privacy and rights) have had little impact on the wider community. Now that Social Networks have taken off, the need to revisit Policy languages and realign them towards Social Networks requirements has become more apparent. One such language is explored as to its applicability to the Social Networks masses. We also argue that policy languages alone are not sufficient and thus they should be paired with reasoning mechanisms to provide precise and unambiguous execution models of the policies. To this end we propose a computationally oriented model to represent, reason with and execute policies for Social Networks.",
"title": ""
},
{
"docid": "neg:1840390_10",
"text": "We investigate a previously unknown phase of phosphorus that shares its layered structure and high stability with the black phosphorus allotrope. We find the in-plane hexagonal structure and bulk layer stacking of this structure, which we call \"blue phosphorus,\" to be related to graphite. Unlike graphite and black phosphorus, blue phosphorus displays a wide fundamental band gap. Still, it should exfoliate easily to form quasi-two-dimensional structures suitable for electronic applications. We study a likely transformation pathway from black to blue phosphorus and discuss possible ways to synthesize the new structure.",
"title": ""
},
{
"docid": "neg:1840390_11",
"text": "This paper presents M3Express (Modular-Mobile-Multirobot), a new design for a low-cost modular robot. The robot is self-mobile, with three independently driven wheels that also serve as connectors. The new connectors can be automatically operated, and are based on stationary magnets coupled to mechanically actuated ferromagnetic yoke pieces. Extensive use is made of plastic castings, laser cut plastic sheets, and low-cost motors and electronic components. Modules interface with a host PC via Bluetooth® radio. An off-board camera, along with a set of modules and a control PC form a convenient, low-cost system for rapidly developing and testing control algorithms for modular reconfigurable robots. Experimental results demonstrate mechanical docking, connector strength, and accuracy of dead reckoning locomotion.",
"title": ""
},
{
"docid": "neg:1840390_12",
"text": "THEORIES IN AI FALL INT O TWO broad categories: mechanismtheories and contenttheories. Ontologies are content the ories about the sor ts of objects, properties of objects,and relations between objects tha t re possible in a specif ed domain of kno wledge. They provide potential ter ms for descr ibing our knowledge about the domain. In this article, we survey the recent de velopment of the f ield of ontologies in AI. We point to the some what different roles ontologies play in information systems, naturallanguage under standing, and knowledgebased systems. Most r esear ch on ontologies focuses on what one might characterize as domain factual knowledge, because kno wlede of that type is par ticularly useful in natural-language under standing. There is another class of ontologies that are important in KBS—one that helps in shar ing knoweldge about reasoning str ategies or pr oblemsolving methods. In a f ollow-up article, we will f ocus on method ontolo gies.",
"title": ""
},
{
"docid": "neg:1840390_13",
"text": "The design complexity of modern high performance processors calls for innovative design methodologies for achieving time-to-market goals. New design techniques are also needed to curtail power increases that inherently arise from ever increasing performance targets. This paper describes new design approaches employed by the POWER8 processor design team to address complexity and power consumption challenges. Improvements in productivity are attained by leveraging a new and more synthesis-centric design methodology. New optimization strategies for synthesized macros allow power reduction without sacrificing performance. These methodology innovations contributed to the industry leading performance of the POWER8 processor. Overall, POWER8 delivers a 2.5x increase in per-socket performance over its predecessor, POWER7+, while maintaining the same power dissipation.",
"title": ""
},
{
"docid": "neg:1840390_14",
"text": "Honey bee colony feeding trials were conducted to determine whether differential effects of carbohydrate feeding (sucrose syrup (SS) vs. high fructose corn syrup, or HFCS) could be measured between colonies fed exclusively on these syrups. In one experiment, there was a significant difference in mean wax production between the treatment groups and a significant interaction between time and treatment for the colonies confined in a flight arena. On average, the colonies supplied with SS built 7916.7 cm(2) ± 1015.25 cm(2) honeycomb, while the colonies supplied with HFCS built 4571.63 cm(2) ± 786.45 cm(2). The mean mass of bees supplied with HFCS was 4.65 kg (± 0.97 kg), while those supplied with sucrose had a mean of 8.27 kg (± 1.26). There was no significant difference between treatment groups in terms of brood rearing. Differences in brood production were complicated due to possible nutritional deficiencies experienced by both treatment groups. In the second experiment, colonies supplemented with SS through the winter months at a remote field site exhibited increased spring brood production when compared to colonies fed with HFCS. The differences in adult bee populations were significant, having an overall average of 10.0 ± 1.3 frames of bees fed the sucrose syrup between November 2008 and April 2009, compared to 7.5 ± 1.6 frames of bees fed exclusively on HFCS. For commercial queen beekeepers, feeding the right supplementary carbohydrates could be especially important, given the findings of this study.",
"title": ""
},
{
"docid": "neg:1840390_15",
"text": "Current approaches to cross-lingual sentiment analysis try to leverage the wealth of labeled English data using bilingual lexicons, bilingual vector space embeddings, or machine translation systems. Here we show that it is possible to use a single linear transformation, with as few as 2000 word pairs, to capture fine-grained sentiment relationships between words in a cross-lingual setting. We apply these cross-lingual sentiment models to a diverse set of tasks to demonstrate their functionality in a non-English context. By effectively leveraging English sentiment knowledge without the need for accurate translation, we can analyze and extract features from other languages with scarce data at a very low cost, thus making sentiment and related analyses for many languages inexpensive.",
"title": ""
},
{
"docid": "neg:1840390_16",
"text": "A method for tracking and predicting cloud movement using ground based sky imagery is presented. Sequences of partial sky images, with each image taken one second apart with a size of 640 by 480 pixels, were processed to determine the time taken for clouds to reach a user defined region in the image or the Sun. The clouds were first identified by segmenting the image based on the difference between the blue and red colour channels, producing a binary detection image. Good features to track were then located in the image and tracked utilising the Lucas-Kanade method for optical flow. From the trajectory of the tracked features and the binary detection image, cloud signals were generated. The trajectory of the individual features were used to determine the risky cloud signals (signals that pass over the user defined region or Sun). Time to collision estimates were produced based on merging these risky cloud signals. Estimates of times up to 40 seconds were achieved with error in the estimate increasing when the estimated time is larger. The method presented has the potential for tracking clouds travelling in different directions and at different velocities.",
"title": ""
},
{
"docid": "neg:1840390_17",
"text": "Neural-network training can be slow and energy intensive, owing to the need to transfer the weight data for the network between conventional digital memory chips and processor chips. Analogue non-volatile memory can accelerate the neural-network training algorithm known as backpropagation by performing parallelized multiply–accumulate operations in the analogue domain at the location of the weight data. However, the classification accuracies of such in situ training using non-volatile-memory hardware have generally been less than those of software-based training, owing to insufficient dynamic range and excessive weight-update asymmetry. Here we demonstrate mixed hardware–software neural-network implementations that involve up to 204,900 synapses and that combine long-term storage in phase-change memory, near-linear updates of volatile capacitors and weight-data transfer with ‘polarity inversion’ to cancel out inherent device-to-device variations. We achieve generalization accuracies (on previously unseen data) equivalent to those of software-based training on various commonly used machine-learning test datasets (MNIST, MNIST-backrand, CIFAR-10 and CIFAR-100). The computational energy efficiency of 28,065 billion operations per second per watt and throughput per area of 3.6 trillion operations per second per square millimetre that we calculate for our implementation exceed those of today’s graphical processing units by two orders of magnitude. This work provides a path towards hardware accelerators that are both fast and energy efficient, particularly on fully connected neural-network layers. Analogue-memory-based neural-network training using non-volatile-memory hardware augmented by circuit simulations achieves the same accuracy as software-based training but with much improved energy efficiency and speed.",
"title": ""
},
{
"docid": "neg:1840390_18",
"text": "The MATLAB toolbox YALMIP is introduced. It is described how YALMIP can be used to model and solve optimization problems typically occurring in systems and control theory. In this paper, free MATLAB toolbox YALMIP, developed initially to model SDPs and solve these by interfacing eternal solvers. The toolbox makes development of optimization problems in general, and control oriented SDP problems in particular, extremely simple. In fact, learning 3 YALMIP commands is enough for most users to model and solve the optimization problems",
"title": ""
}
] |
1840391 | High Resolution Face Completion with Multiple Controllable Attributes via Fully End-to-End Progressive Generative Adversarial Networks | [
{
"docid": "pos:1840391_0",
"text": "We propose a method for automatically guiding patch-based image completion using mid-level structural cues. Our method first estimates planar projection parameters, softly segments the known region into planes, and discovers translational regularity within these planes. This information is then converted into soft constraints for the low-level completion algorithm by defining prior probabilities for patch offsets and transformations. Our method handles multiple planes, and in the absence of any detected planes falls back to a baseline fronto-parallel image completion algorithm. We validate our technique through extensive comparisons with state-of-the-art algorithms on a variety of scenes.",
"title": ""
},
{
"docid": "pos:1840391_1",
"text": "What can you do with a million images? In this paper, we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless, but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks, we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data driven, requiring no annotations or labeling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of image completions and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.",
"title": ""
}
] | [
{
"docid": "neg:1840391_0",
"text": "BACKGROUND\nThe management of opioid-induced constipation (OIC) is often complicated by the fact that clinical measures of constipation do not always correlate with patient perception. As the discomfort associated with OIC can lead to poor compliance with the opioid treatment, a shift in focus towards patient assessment is often advocated.\n\n\nSCOPE\nThe Bowel Function Index * (BFI) is a new patient-assessment scale that has been developed and validated specifically for OIC. It is a physician-administered, easy-to-use scale made up of three items (ease of defecation, feeling of incomplete bowel evacuation, and personal judgement of constipation). An extensive analysis has been performed in order to validate the BFI as reliable, stable, clinically valid, and responsive to change in patients with OIC, with a 12-point change in score constituting a clinically relevant change in constipation.\n\n\nFINDINGS\nThe results of the validation analysis were based on major clinical trials and have been further supported by data from a large open-label study and a pharmaco-epidemiological study, in which the BFI was used effectively to assess OIC in a large population of patients treated with opioids. Although other patient self-report scales exist, the BFI offers several unique advantages. First, by being physician-administered, the BFI minimizes reading and comprehension difficulties; second, by offering general and open-ended questions which capture patient perspective, the BFI is likely to detect most patients suffering from OIC; third, by being short and easy-to-use, it places little burden on the patient, thereby increasing the likelihood of gathering accurate information.\n\n\nCONCLUSION\nAltogether, the available data suggest that the BFI will be useful in clinical trials and in daily practice.",
"title": ""
},
{
"docid": "neg:1840391_1",
"text": "Dexterity robotic hands can (Cummings, 1996) greatly enhance the functionality of humanoid robots, but the making of such hands with not only human-like appearance but also the capability of performing the natural movement of social robots is a challenging problem. The first challenge is to create the hand’s articulated structure and the second challenge is to actuate it to move like a human hand. A robotic hand for humanoid robot should look and behave human like. At the same time, it also needs to be light and cheap for widely used purposes. We start with studying the biomechanical features of a human hand and propose a simplified mechanical model of robotic hands, which can achieve the important local motions of the hand. Then, we use 3D modeling techniques to create a single interlocked hand model that integrates pin and ball joints to our hand model. Compared to other robotic hands, our design saves the time required for assembling and adjusting, which makes our robotic hand ready-to-use right after the 3D printing is completed. Finally, the actuation of the hand is realized by cables and motors. Based on this approach, we have designed a cost-effective, 3D printable, compact, and lightweight robotic hand. Our robotic hand weighs 150 g, has 15 joints, which are similar to a real human hand, and 6 Degree of Freedom (DOFs). It is actuated by only six small size actuators. The wrist connecting part is also integrated into the hand model and could be customized for different robots such as Nadine robot (Magnenat Thalmann et al., 2017). The compact servo bed can be hidden inside the Nadine robot’s sleeve and the whole robotic hand platform will not cause extra load to her arm as the total weight (150 g robotic hand and 162 g artificial skin) is almost the same as her previous unarticulated robotic hand which is 348 g. The paper also shows our test results with and without silicon artificial hand skin, and on Nadine robot.",
"title": ""
},
{
"docid": "neg:1840391_2",
"text": "Partially observed control problems are a challenging aspect of reinforcement learning. We extend two related, model-free algorithms for continuous control – deterministic policy gradient and stochastic value gradient – to solve partially observed domains using recurrent neural networks trained with backpropagation through time. We demonstrate that this approach, coupled with long-short term memory is able to solve a variety of physical control problems exhibiting an assortment of memory requirements. These include the short-term integration of information from noisy sensors and the identification of system parameters, as well as long-term memory problems that require preserving information over many time steps. We also demonstrate success on a combined exploration and memory problem in the form of a simplified version of the well-known Morris water maze task. Finally, we show that our approach can deal with high-dimensional observations by learning directly from pixels. We find that recurrent deterministic and stochastic policies are able to learn similarly good solutions to these tasks, including the water maze where the agent must learn effective search strategies.",
"title": ""
},
{
"docid": "neg:1840391_3",
"text": "Recent years have seen exciting developments in join algorithms. In 2008, Atserias, Grohe and Marx (henceforth AGM) proved a tight bound on the maximum result size of a full conjunctive query, given constraints on the input rel ation sizes. In 2012, Ngo, Porat, R «e and Rudra (henceforth NPRR) devised a join algorithm with worst-case running time proportional to the AGM bound [8]. Our commercial database system LogicBlox employs a novel join algorithm, leapfrog triejoin, which compared conspicuously well to the NPRR algorithm in preliminary benchmarks. This spurred us to analyze the complexity of leapfrog triejoin. In this pa per we establish that leapfrog triejoin is also worst-case o ptimal, up to a log factor, in the sense of NPRR. We improve on the results of NPRR by proving that leapfrog triejoin achieves worst-case optimality for finer-grained classes o f database instances, such as those defined by constraints on projection cardinalities. We show that NPRR is not worstcase optimal for such classes, giving a counterexamplewher e leapfrog triejoin runs inO(n log n) time and NPRR runs in Θ(n) time. On a practical note, leapfrog triejoin can be implemented using conventional data structures such as B-trees, and extends naturally to ∃1 queries. We believe our algorithm offers a useful addition to the existing toolbox o f join algorithms, being easy to absorb, simple to implement, and having a concise optimality proof.",
"title": ""
},
{
"docid": "neg:1840391_4",
"text": "Nodes residing in different parts of a graph can have similar structural roles within their local network topology. The identification of such roles provides key insight into the organization of networks and can be used for a variety of machine learning tasks. However, learning structural representations of nodes is a challenging problem, and it has typically involved manually specifying and tailoring topological features for each node. In this paper, we develop GraphWave, a method that represents each node's network neighborhood via a low-dimensional embedding by leveraging heat wavelet diffusion patterns. Instead of training on hand-selected features, GraphWave learns these embeddings in an unsupervised way. We mathematically prove that nodes with similar network neighborhoods will have similar GraphWave embeddings even though these nodes may reside in very different parts of the network, and our method scales linearly with the number of edges. Experiments in a variety of different settings demonstrate GraphWave's real-world potential for capturing structural roles in networks, and our approach outperforms existing state-of-the-art baselines in every experiment, by as much as 137%.",
"title": ""
},
{
"docid": "neg:1840391_5",
"text": "The number of malicious programs has grown both in number and in sophistication. Analyzing the malicious intent of vast amounts of data requires huge resources and thus, effective categorization of malware is required. In this paper, the content of a malicious program is represented as an entropy stream, where each value describes the amount of entropy of a small chunk of code in a specific location of the file. Wavelet transforms are then applied to this entropy signal to describe the variation in the entropic energy. Motivated by the visual similarity between streams of entropy of malicious software belonging to the same family, we propose a file agnostic deep learning approach for categorization of malware. Our method exploits the fact that most variants are generated by using common obfuscation techniques and that compression and encryption algorithms retain some properties present in the original code. This allows us to find discriminative patterns that almost all variants in a family share. Our method has been evaluated using the data provided by Microsoft for the BigData Innovators Gathering Anti-Malware Prediction Challenge, and achieved promising results in comparison with the State of the Art.",
"title": ""
},
{
"docid": "neg:1840391_6",
"text": "©2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. DOI: 10.1109/IJCNN.2018.8489656 Abstract—Recurrent neural networks are now the state-ofthe-art in natural language processing because they can build rich contextual representations and process texts of arbitrary length. However, recent developments on attention mechanisms have equipped feedforward networks with similar capabilities, hence enabling faster computations due to the increase in the number of operations that can be parallelized. We explore this new type of architecture in the domain of question-answering and propose a novel approach that we call Fully Attention Based Information Retriever (FABIR). We show that FABIR achieves competitive results in the Stanford Question Answering Dataset (SQuAD) while having fewer parameters and being faster at both learning and inference than rival methods.",
"title": ""
},
{
"docid": "neg:1840391_7",
"text": "Resource management in Cloud Computing has been dominated by system-level virtual machines to enable the management of resources using a coarse grained approach, largely in a manner independent from the applications running on these infrastructures. However, in such environments, although different types of applications can be running, the resources are delivered equally to each one, missing the opportunity to manage the available resources in a more efficient and application driven way. So, as more applications target managed runtimes, high level virtualization is a relevant abstraction layer that has not been properly explored to enhance resource usage, control, and effectiveness. We propose a VM economics model to manage cloud infrastructures, governed by a quality-of-execution (QoE) metric and implemented by an extended virtual machine. The Adaptive and Resource-Aware Java Virtual Machine (ARA-JVM) is a cluster-enabled virtual execution environment with the ability to monitor base mechanisms (e.g. thread cheduling, garbage collection, memory or network consumptions) to assess application's performance and reconfigure these mechanisms in runtime according to previously defined resource allocation policies. Reconfiguration is driven by incremental gains in quality-of-execution (QoE), used by the VM economics model to balance relative resource savings and perceived performance degradation. Our work in progress, aims to allow cloud providers to exchange resource slices among virtual machines, continually addressing where those resources are required, while being able to determine where the reduction will be more economically effective, i.e., will contribute in lesser extent to performance degradation.",
"title": ""
},
{
"docid": "neg:1840391_8",
"text": "An active contour tracker is presented which can be used for gaze-based interaction with off-the-shelf components. The underlying contour model is based on image statistics and avoids explicit feature detection. The tracker combines particle filtering with the EM algorithm. The method exhibits robustness to light changes and camera defocusing; consequently, the model is well suited for use in systems using off-the-shelf hardware, but may equally well be used in controlled environments, such as in IR-based settings. The method is even capable of handling sudden changes between IR and non-IR light conditions, without changing parameters. For the purpose of determining where the user is looking, calibration is usually needed. The number of calibration points used in different methods varies from a few to several thousands, depending on the prior knowledge used on the setup and equipment. We examine basic properties of gaze determination when the geometry of the camera, screen, and user is unknown. In particular we present a lower bound on the number of calibration points needed for gaze determination on planar objects, and we examine degenerate configurations. Based on this lower bound we apply a simple calibration procedure, to facilitate gaze estimation. 2004 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840391_9",
"text": "As a matter of fact, most natural structures are complex topology structures with intricate holes or irregular surface morphology. These structures can be used as lightweight infill, porous scaffold, energy absorber or micro-reactor. With the rapid advancement of 3D printing, the complex topology structures can now be efficiently and accurately fabricated by stacking layered materials. The novel manufacturing technology and application background put forward new demands and challenges to the current design methodologies of complex topology structures. In this paper, a brief review on the development of recent complex topology structure design methods was provided; meanwhile, the limitations of existing methods and future work are also discussed in the end.",
"title": ""
},
{
"docid": "neg:1840391_10",
"text": "Shulman, Carl. 2010. Omohundro's \" Basic AI Drives \" and Catastrophic Risks.",
"title": ""
},
{
"docid": "neg:1840391_11",
"text": "Accurate estimation of heart rates from photoplethysmogram (PPG) signals during intense physical activity is a very challenging problem. This is because strenuous and high intensity exercise can result in severe motion artifacts in PPG signals, making accurate heart rate (HR) estimation difficult. In this study we investigated a novel technique to accurately reconstruct motion-corrupted PPG signals and HR based on time-varying spectral analysis. The algorithm is called Spectral filter algorithm for Motion Artifacts and heart rate reconstruction (SpaMA). The idea is to calculate the power spectral density of both PPG and accelerometer signals for each time shift of a windowed data segment. By comparing time-varying spectra of PPG and accelerometer data, those frequency peaks resulting from motion artifacts can be distinguished from the PPG spectrum. The SpaMA approach was applied to three different datasets and four types of activities: (1) training datasets from the 2015 IEEE Signal Process. Cup Database recorded from 12 subjects while performing treadmill exercise from 1 km/h to 15 km/h; (2) test datasets from the 2015 IEEE Signal Process. Cup Database recorded from 11 subjects while performing forearm and upper arm exercise. (3) Chon Lab dataset including 10 min recordings from 10 subjects during treadmill exercise. The ECG signals from all three datasets provided the reference HRs which were used to determine the accuracy of our SpaMA algorithm. The performance of the SpaMA approach was calculated by computing the mean absolute error between the estimated HR from the PPG and the reference HR from the ECG. The average estimation errors using our method on the first, second and third datasets are 0.89, 1.93 and 1.38 beats/min respectively, while the overall error on all 33 subjects is 1.86 beats/min and the performance on only treadmill experiment datasets (22 subjects) is 1.11 beats/min. Moreover, it was found that dynamics of heart rate variability can be accurately captured using the algorithm where the mean Pearson's correlation coefficient between the power spectral densities of the reference and the reconstructed heart rate time series was found to be 0.98. These results show that the SpaMA method has a potential for PPG-based HR monitoring in wearable devices for fitness tracking and health monitoring during intense physical activities.",
"title": ""
},
{
"docid": "neg:1840391_12",
"text": "The requirement to perform complicated statistic analysis of big data by institutions of engineering, scientific research, health care, commerce, banking and computer research is immense. However, the limitations of the widely used current desktop software like R, excel, minitab and spss gives a researcher limitation to deal with big data. The big data analytic tools like IBM Big Insight, Revolution Analytics, and tableau software are commercial and heavily license. Still, to deal with big data, client has to invest in infrastructure, installation and maintenance of hadoop cluster to deploy these analytical tools. Apache Hadoop is an open source distributed computing framework that uses commodity hardware. With this project, I intend to collaborate Apache Hadoop and R software over the on the Cloud. Objective is to build a SaaS (Software-as-a-Service) analytic platform that stores & analyzes big data using open source Apache Hadoop and open source R software. The benefits of this cloud based big data analytical service are user friendliness & cost as it is developed using open-source software. The system is cloud based so users have their own space in cloud where user can store there data. User can browse data, files, folders using browser and arrange datasets. User can select dataset and analyze required dataset and store result back to cloud storage. Enterprise with a cloud environment can save cost of hardware, upgrading software, maintenance or network configuration, thus it making it more economical.",
"title": ""
},
{
"docid": "neg:1840391_13",
"text": "The large number of rear end collisions due to driver inattention has been identified as a major automotive safety issue. Even a short advance warning can significantly reduce the number and severity of the collisions. This paper describes a vision based forward collision warning (FCW) system for highway safety. The algorithm described in this paper computes time to contact (TTC) and possible collision course directly from the size and position of the vehicles in the image - which are the natural measurements for a vision based system - without having to compute a 3D representation of the scene. The use of a single low cost image sensor results in an affordable system which is simple to install. The system has been implemented on real-time hardware and has been test driven on highways. Collision avoidance tests have also been performed on test tracks.",
"title": ""
},
{
"docid": "neg:1840391_14",
"text": "This paper presents the research and development of two terahertz imaging systems based on photonic and electronic principles, respectively. As part of this study, a survey of ongoing research in the field of terahertz imaging is provided focusing on security applications. Existing terahertz imaging systems are reviewed in terms of the employed architecture and data processing strategies. Active multichannel measurement method is found to be promising for real-time applications among the various terahertz imaging techniques and is chosen as a basis for the imaging instruments presented in this paper. An active system operation allows for a wide dynamic range, which is important for image quality. The described instruments employ a multichannel high-sensitivity heterodyne architecture and aperture filling techniques, with close to real-time image acquisition time. In the case of the photonic imaging system, mechanical scanning is completely obsolete. We show 2-D images of simulated 3-D image data for both systems. The reconstruction algorithms are suitable for 3-D real-time operation, only limited by mechanical scanning.",
"title": ""
},
{
"docid": "neg:1840391_15",
"text": "Customer relationship management (CRM) has once again gained prominence amongst academics and practitioners. However, there is a tremendous amount of confusion regarding its domain and meaning. In this paper, the authors explore the conceptual foundations of CRM by examining the literature on relationship marketing and other disciplines that contribute to the knowledge of CRM. A CRM process framework is proposed that builds on other relationship development process models. CRM implementation challenges as well as CRM's potential to become a distinct discipline of marketing are also discussed in this paper. JEL Classification Codes: M31.",
"title": ""
},
{
"docid": "neg:1840391_16",
"text": "Abstract. Today, a paradigm shift is being observed in science, where the focus is gradually shifting toward the cloud environments to obtain appropriate, robust and affordable services to deal with Big Data challenges (Sharma et al. 2014, 2015a, 2015b). Cloud computing avoids any need to locally maintain the overly scaled computing infrastructure that include not only dedicated space, but the expensive hardware and software also. In this paper, we study the evolution of as-a-Service modalities, stimulated by cloud computing, and explore the most complete inventory of new members beyond traditional cloud computing stack.",
"title": ""
},
{
"docid": "neg:1840391_17",
"text": "Users of social media sites like Facebook and Twitter rely on crowdsourced content recommendation systems (e.g., Trending Topics) to retrieve important and useful information.",
"title": ""
}
] |
1840392 | Type-Aware Distantly Supervised Relation Extraction with Linked Arguments | [
{
"docid": "pos:1840392_0",
"text": "We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.",
"title": ""
},
{
"docid": "pos:1840392_1",
"text": "In this paper, we extend distant supervision (DS) based on Wikipedia for Relation Extraction (RE) by considering (i) relations defined in external repositories, e.g. YAGO, and (ii) any subset of Wikipedia documents. We show that training data constituted by sentences containing pairs of named entities in target relations is enough to produce reliable supervision. Our experiments with state-of-the-art relation extraction models, trained on the above data, show a meaningful F1 of 74.29% on a manually annotated test set: this highly improves the state-of-art in RE using DS. Additionally, our end-to-end experiments demonstrated that our extractors can be applied to any general text document.",
"title": ""
},
{
"docid": "pos:1840392_2",
"text": "Knowledge of objects and their parts, meronym relations, are at the heart of many question-answering systems, but manually encoding these facts is impractical. Past researchers have tried hand-written patterns, supervised learning, and bootstrapped methods, but achieving both high precision and recall has proven elusive. This paper reports on a thorough exploration of distant supervision to learn a meronym extractor for the domain of college biology. We introduce a novel algorithm, generalizing the ``at least one'' assumption of multi-instance learning to handle the case where a fixed (but unknown) percentage of bag members are positive examples. Detailed experiments compare strategies for mention detection, negative example generation, leveraging out-of-domain meronyms, and evaluate the benefit of our multi-instance percentage model.",
"title": ""
},
{
"docid": "pos:1840392_3",
"text": "We propose a new deterministic approach to coreference resolution that combines the global information and precise features of modern machine-learning models with the transparency and modularity of deterministic, rule-based systems. Our sieve architecture applies a battery of deterministic coreference models one at a time from highest to lowest precision, where each model builds on the previous model's cluster output. The two stages of our sieve-based architecture, a mention detection stage that heavily favors recall, followed by coreference sieves that are precision-oriented, offer a powerful way to achieve both high precision and high recall. Further, our approach makes use of global information through an entity-centric model that encourages the sharing of features across all mentions that point to the same real-world entity. Despite its simplicity, our approach gives state-of-the-art performance on several corpora and genres, and has also been incorporated into hybrid state-of-the-art coreference systems for Chinese and Arabic. Our system thus offers a new paradigm for combining knowledge in rule-based systems that has implications throughout computational linguistics.",
"title": ""
},
{
"docid": "pos:1840392_4",
"text": "In relation extraction, distant supervision seeks to extract relations between entities from text by using a knowledge base, such as Freebase, as a source of supervision. When a sentence and a knowledge base refer to the same entity pair, this approach heuristically labels the sentence with the corresponding relation in the knowledge base. However, this heuristic can fail with the result that some sentences are labeled wrongly. This noisy labeled data causes poor extraction performance. In this paper, we propose a method to reduce the number of wrong labels. We present a novel generative model that directly models the heuristic labeling process of distant supervision. The model predicts whether assigned labels are correct or wrong via its hidden variables. Our experimental results show that this model detected wrong labels with higher performance than baseline methods. In the experiment, we also found that our wrong label reduction boosted the performance of relation extraction.",
"title": ""
},
{
"docid": "pos:1840392_5",
"text": "Entity Recognition (ER) is a key component of relation extraction systems and many other natural-language processing applications. Unfortunately, most ER systems are restricted to produce labels from to a small set of entity classes, e.g., person, organization, location or miscellaneous. In order to intelligently understand text and extract a wide range of information, it is useful to more precisely determine the semantic classes of entities mentioned in unstructured text. This paper defines a fine-grained set of 112 tags, formulates the tagging problem as multi-class, multi-label classification, describes an unsupervised method for collecting training data, and presents the FIGER implementation. Experiments show that the system accurately predicts the tags for entities. Moreover, it provides useful information for a relation extraction system, increasing the F1 score by 93%. We make FIGER and its data available as a resource for future work.",
"title": ""
}
] | [
{
"docid": "neg:1840392_0",
"text": "Education University of California, Berkeley (2008-2013) Ph.D. in Computer Science Thesis: Surface Web Semantics for Structured Natural Language Processing Advisor: Dan Klein. Committee members: Dan Klein, Marti Hearst, Line Mikkelsen, Nelson Morgan University of California, Berkeley (2012) Master of Science (M.S.) in Computer Science Thesis: An All-Fragments Grammar for Simple and Accurate Parsing Advisor: Dan Klein Indian Institute of Technology, Kanpur (2004-2008) Bachelor of Technology (B.Tech.) in Computer Science and Engineering GPA: 3.96/4.00 (Institute and Department Rank 2) Cornell University (Summer 2007) CS490 (Independent Research and Reading) GPA: 4.00/4.00 Advisors: Lillian Lee, Claire Cardie",
"title": ""
},
{
"docid": "neg:1840392_1",
"text": "In this paper, we compare the geometrical performance between the rigorous sensor model (RSM) and rational function model (RFM) in the sensor modeling of FORMOSAT-2 satellite images. For the RSM, we provide a least squares collocation procedure to determine the precise orbits. As for the RFM, we analyze the model errors when a large amount of quasi-control points, which are derived from the satellite ephemeris and attitude data, are employed. The model errors with respect to the length of the image strip are also demonstrated. Experimental results show that the RFM is well behaved, indicating that its positioning errors is similar to that of the RSM. Introduction Sensor orientation modeling is a prerequisite for the georeferencing of satellite images or 3D object reconstruction from satellite stereopairs. Nowadays, most of the high-resolution satellites use linear array pushbroom scanners. Based on the pushbroom scanning geometry, a number of investigations have been reported regarding the geometric accuracy of linear array images (Westin, 1990; Chen and Lee, 1993; Li, 1998; Tao et al., 2000; Toutin, 2003; Grodecki and Dial, 2003). The geometric modeling of the sensor orientation may be divided into two categories, namely, the rigorous sensor model (RSM) and the rational function model (RFM) (Toutin, 2004). Capable of fully delineating the imaging geometry between the image space and object space, the RSM has been recognized in providing the most precise geometrical processing of satellite images. Based on the collinearity condition, an image point corresponds to a ground point using the employment of the orientation parameters, which are expressed as a function of the sampling time. Due to the dynamic sampling, the RSM contains many mathematical calculations, which can cause problems for researchers who are not familiar with the data preprocessing. Moreover, with the increasing number of Earth resource satellites, researchers need to familiarize themselves with the uniqueness and complexity of each sensor model. Therefore, a generic sensor model of the geometrical processing is needed for simplification. (Dowman and Michalis, 2003). The RFM is a generalized sensor model that is used as an alternative for the RSM. The model uses a pair of ratios of two polynomials to approximate the collinearity condition equations. The RFM has been successfully applied to several high-resolution satellite images such as Ikonos (Di et al., 2003; Grodecki and Dial, 2003; Fraser and Hanley, 2003) and QuickBird (Robertson, 2003). Due to its simple impleThe Geometrical Comparisons of RSM and RFM for FORMOSAT-2 Satellite Images Liang-Chien Chen, Tee-Ann Teo, and Chien-Liang Liu mentation and standardization (NIMA, 2000), the approach has been widely used in the remote sensing community. Launched on 20 May 2004, FORMOSAT-2 is operated by the National Space Organization of Taiwan. The satellite operates in a sun-synchronous orbit at an altitude of 891 km and with an inclination of 99.1 degrees. It has a swath width of 24 km and orbits the Earth exactly 14 times per day, which makes daily revisits possible (NSPO, 2005). Its panchromatic images have a resolution of 2 meters, while the multispectral sensor produces 8 meter resolution images covering the blue, green, red, and NIR bands. Its high performance provides an excellent data resource for the remote sensing researchers. 
The major objective of this investigation is to compare the geometrical performances between the RSM and RFM when FORMOSAT-2 images are employed. A least squares collocation-based RSM will also be proposed in the paper. In the reconstruction of the RFM, rational polynomial coefficients are generated by using the on-board ephemeris and attitude data. In addition to the comparison of the two models, the modeling error of the RFM is analyzed when long image strips are used. Rigorous Sensor Models The proposed method comprises essentially of two parts. The first involves the development of the mathematical model for time-dependent orientations. The second performs the least squares collocation to compensate the local systematic errors. Orbit Fitting There are two types of sensor models for pushbroom satellite images, i.e., orbital elements (Westin, 1990) and state vectors (Chen and Chang, 1998). The orbital elements use the Kepler elements as the orbital parameters, while the state vectors calculate the orbital parameters directly by using the position vector. Although both sensor models are robust, the state vector model provides simpler mathematical calculations. For this reason, we select the state vector approach in this investigation. Three steps are included in the orbit modeling: (a) Initialization of the orientation parameters using on-board ephemeris data; (b) Compensation of the systematic errors of the orbital parameters and attitude data via ground control points (GCPs); and (c) Modification of the orbital parameters by using the Least Squares Collocation (Mikhail and Ackermann, 1982) technique.",
"title": ""
},
{
"docid": "neg:1840392_2",
"text": "Computation of floating-point transcendental functions has a relevant importance in a wide variety of scientific applications, where the area cost, error and latency are important requirements to be attended. This paper describes a flexible FPGA implementation of a parameterizable floating-point library for computing sine, cosine, arctangent and exponential functions using the CORDIC algorithm. The novelty of the proposed architecture is that by sharing the same resources the CORDIC algorithm can be used in two operation modes, allowing it to compute the sine, cosine or arctangent functions. Additionally, in case of the exponential function, the architectures change automatically between the CORDIC or a Taylor approach, which helps to improve the precision characteristics of the circuit, specifically for small input values after the argument reduction. Synthesis of the circuits and an experimental analysis of the errors have demonstrated the correctness and effectiveness of the implemented cores and allow the designer to choose, for general-purpose applications, a suitable bit-width representation and number of iterations of the CORDIC algorithm.",
"title": ""
},
{
"docid": "neg:1840392_3",
"text": "In this paper we examine the use of a mathematical procedure, called Principal Component Analysis, in Recommender Systems. The resulting filtering algorithm applies PCA on user ratings and demographic data, aiming to improve various aspects of the recommendation process. After a brief introduction to PCA, we provide a discussion of the proposed PCADemog algorithm, along with possible ways of combining it with different sources of filtering data. The experimental part of this work tests distinct parameterizations for PCA-Demog, identifying those with the best performance. Finally, the paper compares their results with those achieved by other filtering approaches, and draws interesting conclusions.",
"title": ""
},
{
"docid": "neg:1840392_4",
"text": "Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to 'meaningful information about the logic' behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles---under repeated exposure of one style, scenario effects obscure any explanation effects. Our results suggests there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.",
"title": ""
},
{
"docid": "neg:1840392_5",
"text": "The Internet of Things (IoTs) refers to the inter-connection of billions of smart devices. The steadily increasing number of IoT devices with heterogeneous characteristics requires that future networks evolve to provide a new architecture to cope with the expected increase in data generation. Network function virtualization (NFV) provides the scale and flexibility necessary for IoT services by enabling the automated control, management and orchestration of network resources. In this paper, we present a novel NFV enabled IoT architecture targeted for a state-of-the art operating room environment. We use web services based on the representational state transfer (REST) web architecture as the IoT application's southbound interface and illustrate its applicability via two different scenarios.",
"title": ""
},
{
"docid": "neg:1840392_6",
"text": "Sentence compression is the task of producing a summary of a single sentence. The compressed sentence should be shorter, contain the important content from the original, and itself be grammatical. The three papers discussed here take different approaches to identifying important content, determining which sentences are grammatical, and jointly optimizing these objectives. One family of approaches we will discuss is those that are tree-based, which create a compressed sentence by making edits to the syntactic tree of the original sentence. A second type of approach is sentence-based, which generates strings directly. Orthogonal to either of these two approaches is whether sentences are treated in isolation or if the surrounding discourse affects compressions. We compare a tree-based, a sentence-based, and a discourse-based approach and conclude with ideas for future work in this area. Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-10-20. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/929 Methods for Sentence Compression",
"title": ""
},
{
"docid": "neg:1840392_7",
"text": "This paper identifies and examines the key principles underlying building a state-of-the-art grammatical error correction system. We do this by analyzing the Illinois system that placed first among seventeen teams in the recent CoNLL-2013 shared task on grammatical error correction. The system focuses on five different types of errors common among non-native English writers. We describe four design principles that are relevant for correcting all of these errors, analyze the system along these dimensions, and show how each of these dimensions contributes to the performance.",
"title": ""
},
{
"docid": "neg:1840392_8",
"text": "A good test suite is one that detects real faults. Because the set of faults in a program is usually unknowable, this definition is not useful to practitioners who are creating test suites, nor to researchers who are creating and evaluating tools that generate test suites. In place of real faults, testing research often uses mutants, which are artificial faults -- each one a simple syntactic variation -- that are systematically seeded throughout the program under test. Mutation analysis is appealing because large numbers of mutants can be automatically-generated and used to compensate for low quantities or the absence of known real faults. Unfortunately, there is little experimental evidence to support the use of mutants as a replacement for real faults. This paper investigates whether mutants are indeed a valid substitute for real faults, i.e., whether a test suite’s ability to detect mutants is correlated with its ability to detect real faults that developers have fixed. Unlike prior studies, these investigations also explicitly consider the conflating effects of code coverage on the mutant detection rate. Our experiments used 357 real faults in 5 open-source applications that comprise a total of 321,000 lines of code. Furthermore, our experiments used both developer-written and automatically-generated test suites. The results show a statistically significant correlation between mutant detection and real fault detection, independently of code coverage. The results also give concrete suggestions on how to improve mutation analysis and reveal some inherent limitations.",
"title": ""
},
{
"docid": "neg:1840392_9",
"text": "Traditionally, interference protection is guaranteed through a policy of spectrum licensing, whereby wireless systems get exclusive access to spectrum. This is an effective way to prevent interference, but it leads to highly inefficient use of spectrum. Cognitive radio along with software radio, spectrum sensors, mesh networks, and other emerging technologies can facilitate new forms of spectrum sharing that greatly improve spectral efficiency and alleviate scarcity, if policies are in place that support these forms of sharing. On the other hand, new technology that is inconsistent with spectrum policy will have little impact. This paper discusses policies that can enable or facilitate use of many spectrum-sharing arrangements, where the arrangements are categorized as being based on coexistence or cooperation and as sharing among equals or primary-secondary sharing. A shared spectrum band may be managed directly by the regulator, or this responsibility may be delegated in large part to a license-holder. The type of sharing arrangement and the entity that manages it have a great impact on which technical approaches are viable and effective. The most efficient and cost-effective form of spectrum sharing will depend on the type of systems involved, where systems under current consideration are as diverse as television broadcasters, cellular carriers, public safety systems, point-to-point links, and personal and local-area networks. In addition, while cognitive radio offers policy-makers the opportunity to improve spectral efficiency, cognitive radio also provides new challenges for policy enforcement. A responsible regulator will not allow a device into the marketplace that might harm other systems. Thus, designers must seek innovative ways to assure regulators that new devices will comply with policy requirements and will not cause harmful interference.",
"title": ""
},
{
"docid": "neg:1840392_10",
"text": "Protecting source code against reverse engineering and theft is an important problem. The goal is to carry out computations using confidential algorithms on an untrusted party while ensuring confidentiality of algorithms. This problem has been addressed for Boolean circuits known as ‘circuit privacy’. Circuits corresponding to real-world programs are impractical. Well-known obfuscation techniques are highly practicable, but provide only limited security, e.g., no piracy protection. In this work, we modify source code yielding programs with adjustable performance and security guarantees ranging from indistinguishability obfuscators to (non-secure) ordinary obfuscation. The idea is to artificially generate ‘misleading’ statements. Their results are combined with the outcome of a confidential statement using encrypted selector variables. Thus, an attacker must ‘guess’ the encrypted selector variables to disguise the confidential source code. We evaluated our method using more than ten programmers as well as pattern mining across open source code repositories to gain insights of (micro-)coding patterns that are relevant for generating misleading statements. The evaluation reveals that our approach is effective in that it successfully preserves source code confidentiality.",
"title": ""
},
{
"docid": "neg:1840392_11",
"text": "Since parental personality traits are assumed to play a role in parenting behaviors, the current study examined the relation between parental personality and parenting style among 688 Dutch parents of adolescents in the SMILE study. The study assessed Big Five personality traits and derived parenting styles (authoritative, authoritarian, indulgent, and uninvolved) from scores on the underlying dimensions of support and strict control. Regression analyses were used to determine which personality traits were associated with parenting dimensions and styles. As regards dimensions, the two aspects of personality reflecting interpersonal interactions (extraversion and agreeableness) were related to supportiveness. Emotional stability was associated with lower strict control. As regards parenting styles, extraverted, agreeable, and less emotionally stable individuals were most likely to be authoritative parents. Conscientiousness and openness did not relate to general parenting, but might be associated with more content-specific acts of parenting.",
"title": ""
},
{
"docid": "neg:1840392_12",
"text": "In the future smart grid, both users and power companies can potentially benefit from the economical and environmental advantages of smart pricing methods to more effectively reflect the fluctuations of the wholesale price into the customer side. In addition, smart pricing can be used to seek social benefits and to implement social objectives. To achieve social objectives, the utility company may need to collect various information about users and their energy consumption behavior, which can be challenging. In this paper, we propose an efficient pricing method to tackle this problem. We assume that each user is equipped with an energy consumption controller (ECC) as part of its smart meter. All smart meters are connected to not only the power grid but also a communication infrastructure. This allows two-way communication among smart meters and the utility company. We analytically model each user's preferences and energy consumption patterns in form of a utility function. Based on this model, we propose a Vickrey-Clarke-Groves (VCG) mechanism which aims to maximize the social welfare, i.e., the aggregate utility functions of all users minus the total energy cost. Our design requires that each user provides some information about its energy demand. In return, the energy provider will determine each user's electricity bill payment. Finally, we verify some important properties of our proposed VCG mechanism for demand side management such as efficiency, user truthfulness, and nonnegative transfer. Simulation results confirm that the proposed pricing method can benefit both users and utility companies.",
"title": ""
},
{
"docid": "neg:1840392_13",
"text": "Cryptographic systems are essential for computer and communication security, for instance, RSA is used in PGP Email clients and AES is employed in full disk encryption. In practice, the cryptographic keys are loaded and stored in RAM as plain-text, and therefore vulnerable to physical memory attacks (e.g., cold-boot attacks). To tackle this problem, we propose Copker, which implements asymmetric cryptosystems entirely within the CPU, without storing plain-text private keys in the RAM. In its active mode, Copker stores kilobytes of sensitive data, including the private key and the intermediate states, only in onchip CPU caches (and registers). Decryption/signing operations are performed without storing sensitive information in system memory. In the suspend mode, Copker stores symmetrically encrypted private keys in memory, while employs existing solutions to keep the key-encryption key securely in CPU registers. Hence, Copker releases the system resources in the suspend mode. In this paper, we implement Copker with the most common asymmetric cryptosystem, RSA, with the support of multiple private keys. We show that Copker provides decryption/signing services that are secure against physical memory attacks. Meanwhile, with intensive experiments, we demonstrate that our implementation of Copker is secure and requires reasonable overhead. Keywords—Cache-as-RAM; cold-boot attack; key management; asymmetric cryptography implementation.",
"title": ""
},
{
"docid": "neg:1840392_14",
"text": "We explore the application of deep residual learning and dilated convolutions to the keyword spotting task, using the recently-released Google Speech Commands Dataset as our benchmark. Our best residual network (ResNet) implementation significantly outperforms Google's previous convolutional neural networks in terms of accuracy. By varying model depth and width, we can achieve compact models that also outperform previous small-footprint variants. To our knowledge, we are the first to examine these approaches for keyword spotting, and our results establish an open-source state-of-the-art reference to support the development of future speech-based interfaces.",
"title": ""
},
{
"docid": "neg:1840392_15",
"text": "The goal of this note is to point out that any distributed representation can be turned into a classifier through inversion via Bayes rule. The approach is simple and modular, in that it will work with any language representation whose training can be formulated as optimizing a probability model. In our application to 2 million sentences from Yelp reviews, we also find that it performs as well as or better than complex purpose-built algorithms.",
"title": ""
},
{
"docid": "neg:1840392_16",
"text": "This paper presents a novel approach based on enhanced local directional patterns (ELDP) to face recognition, which adopts local edge gradient information to represent face images. Specially, each pixel of every facial image sub-block gains eight edge response values by convolving the local 3 3 neighborhood with eight Kirsch masks, respectively. ELDP just utilizes the directions of the most encoded into a double-digit octal number to produce the ELDP codes. The ELDP dominant patterns (ELDP) are generated by statistical analysis according to the occurrence rates of the ELDP codes in a mass of facial images. Finally, the face descriptor is represented by using the global concatenated histogram based on ELDP or ELDP extracted from the face image which is divided into several sub-regions. The performances of several single face descriptors not integrated schemes are evaluated in face recognition under different challenges via several experiments. The experimental results demonstrate that the proposed method is more robust to non-monotonic illumination changes and slight noise without any filter. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840392_17",
"text": "The paper proposed a novel automatic target recognition (ATR) system for classification of three types of ground vehicles in the moving and stationary target acquisition and recognition (MSTAR) public release database. First MSTAR image chips are represented as fine and raw feature vectors, where raw features compensate for the target pose estimation error that corrupts fine image features. Then, the chips are classified by using the adaptive boosting (AdaBoost) algorithm with the radial basis function (RBF) network as the base learner. Since the RBF network is a binary classifier, the multiclass problem was decomposed into a set of binary ones through the error-correcting output codes (ECOC) method, specifying a dictionary of code words for the set of three possible classes. AdaBoost combines the classification results of the RBF network for each binary problem into a code word, which is then \"decoded\" as one of the code words (i.e., ground-vehicle classes) in the specified dictionary. Along with classification, within the AdaBoost framework, we also conduct efficient fusion of the fine and raw image-feature vectors. The results of large-scale experiments demonstrate that our ATR scheme outperforms the state-of-the-art systems reported in the literature",
"title": ""
},
{
"docid": "neg:1840392_18",
"text": "Reasoning and inference are central to human and artificial intelligence. Modeling inference in human language is notoriously challenging but is fundamental to natural language understanding and many applications. With the availability of large annotated data, neural network models have recently advanced the field significantly. In this paper, we present a new state-of-the-art result, achieving the accuracy of 88.3% on the standard benchmark, the Stanford Natural Language Inference dataset. This result is achieved first through our enhanced sequential encoding model, which outperforms the previous best model that employs more complicated network architectures, suggesting that the potential of sequential LSTM-based models have not been fully explored yet in previous work. We further show that by explicitly considering recursive architectures, we achieve additional improvement. Particularly, incorporating syntactic parse information contributes to our best result; it improves the performance even when the parse information is added to an already very strong system.",
"title": ""
}
] |
1840393 | Selfishness, Altruism and Message Spreading in Mobile Social Networks | [
{
"docid": "pos:1840393_0",
"text": "Mobile ad hoc routing protocols allow nodes with wireless adaptors to communicate with one another without any pre-existing network infrastructure. Existing ad hoc routing protocols, while robust to rapidly changing network topology, assume the presence of a connected path from source to destination. Given power limitations, the advent of short-range wireless networks, and the wide physical conditions over which ad hoc networks must be deployed, in some scenarios it is likely that this assumption is invalid. In this work, we develop techniques to deliver messages in the case where there is never a connected path from source to destination or when a network partition exists at the time a message is originated. To this end, we introduce Epidemic Routing, where random pair-wise exchanges of messages among mobile hosts ensure eventual message delivery. The goals of Epidemic Routing are to: i) maximize message delivery rate, ii) minimize message latency, and iii) minimize the total resources consumed in message delivery. Through an implementation in the Monarch simulator, we show that Epidemic Routing achieves eventual delivery of 100% of messages with reasonable aggregate resource consumption in a number of interesting scenarios.",
"title": ""
}
] | [
{
"docid": "neg:1840393_0",
"text": "Mobile online social networks (OSNs) are emerging as the popular mainstream platform for information and content sharing among people. In order to provide Quality of Experience (QoE) support for mobile OSN services, in this paper we propose a socially-driven learning-based framework, namely Spice, for media content prefetching to reduce the access delay and enhance mobile user's satisfaction. Through a large-scale data-driven analysis over real-life mobile Twitter traces from over 17,000 users during a period of five months, we reveal that the social friendship has a great impact on user's media content click behavior. To capture this effect, we conduct social friendship clustering over the set of user's friends, and then develop a cluster-based Latent Bias Model for socially-driven learning-based prefetching prediction. We then propose a usage-adaptive prefetching scheduling scheme by taking into account that different users may possess heterogeneous patterns in the mobile OSN app usage. We comprehensively evaluate the performance of Spice framework using trace-driven emulations on smartphones. Evaluation results corroborate that the Spice can achieve superior performance, with an average 67.2% access delay reduction at the low cost of cellular data and energy consumption. Furthermore, by enabling users to offload their machine learning procedures to a cloud server, our design can achieve speed-up of a factor of 1000 over the local data training execution on smartphones.",
"title": ""
},
{
"docid": "neg:1840393_1",
"text": "Feeling emotion is a critical characteristic to distinguish people from machines. Among all the multi-modal resources for emotion detection, textual datasets are those containing the least additional information in addition to semantics, and hence are adopted widely for testing the developed systems. However, most of the textual emotional datasets consist of emotion labels of only individual words, sentences or documents, which makes it challenging to discuss the contextual flow of emotions. In this paper, we introduce EmotionLines, the first dataset with emotions labeling on all utterances in each dialogue only based on their textual content. Dialogues in EmotionLines are collected from Friends TV scripts and private Facebook messenger dialogues. Then one of seven emotions, six Ekman’s basic emotions plus the neutral emotion, is labeled on each utterance by 5 Amazon MTurkers. A total of 29,245 utterances from 2,000 dialogues are labeled in EmotionLines. We also provide several strong baselines for emotion detection models on EmotionLines in this paper.",
"title": ""
},
{
"docid": "neg:1840393_2",
"text": "In this paper, Magnetic Resonance Images,T2 weighte d modality , have been pre-processed by bilateral filter to reduce th e noise and maintaining edges among the different tissues. Four different t echniques with morphological operations have been applied to extra c the tumor region. These were: Gray level stretching and Sobel edge de tection, K-Means Clustering technique based on location and intensit y, Fuzzy C-Means Clustering, and An Adapted K-Means clustering techn ique and Fuzzy CMeans technique. The area of the extracted tumor re gions has been calculated. The present work showed that the four i mplemented techniques can successfully detect and extract the brain tumor and thereby help doctors in identifying tumor's size and region.",
"title": ""
},
{
"docid": "neg:1840393_3",
"text": "The Trilobita were characterized by a cephalic region in which the biomineralized exoskeleton showed relatively high morphological differentiation among a taxonomically stable set of well defined segments, and an ontogenetically and taxonomically dynamic trunk region in which both exoskeletal segments and ventral appendages were similar in overall form. Ventral appendages were homonomous biramous limbs throughout both the cephalon and trunk, except for the most anterior appendage pair that was antenniform, preoral, and uniramous, and a posteriormost pair of antenniform cerci, known only in one species. In some clades trunk exoskeletal segments were divided into two batches. In some, but not all, of these clades the boundary between batches coincided with the boundary between the thorax and the adult pygidium. The repeated differentiation of the trunk into two batches of segments from the homonomous trunk condition indicates an evolutionary trend in aspects of body patterning regulation that was achieved independently in several trilobite clades. The phylogenetic placement of trilobites and congruence of broad patterns of tagmosis with those seen among extant arthropods suggest that the expression domains of trilobite cephalic Hox genes may have overlapped in a manner similar to that seen among extant arachnates. This, coupled with the fact that trilobites likely possessed ten Hox genes, presents one alternative to a recent model in which Hox gene distribution in trilobites was equated to eight putative divisions of the trilobite body plan.",
"title": ""
},
{
"docid": "neg:1840393_4",
"text": "A bond graph model of a hybrid electric vehicle (HEV) powertrain test cell is proposed. The test cell consists of a motor/generator coupled to a HEV powertrain and powered by a bidirectional power converter. Programmable loading conditions, including positive and negative resistive and inertial loads of any magnitude are modeled, avoiding the use of mechanical inertial loads involved in conventional test cells. The dynamics and control equations of the test cell are derived directly from the bond graph models. The modeling and simulation results of the dynamics of the test cell are validated through experiments carried out on a scaled-down system.",
"title": ""
},
{
"docid": "neg:1840393_5",
"text": "This paper presents the machine translation system known as TransLI (Translation of Legal Information) developed by the authors for automatic translation of Canadian Court judgments from English to French and from French to English. Normally, a certified translation of a legal judgment takes several months to complete. The authors attempted to shorten this time significantly using a unique statistical machine translation system which has attracted the attention of the federal courts in Canada for its accuracy and speed. This paper also describes the results of a human evaluation of the output of the system in the context of a pilot project in collaboration with the federal courts of Canada. 1. Context of the work NLP Technologies is an enterprise devoted to the use of advanced information technologies in the judicial domain. Its main focus is DecisionExpressTM a service utilizing automatic summarization technology with respect to legal information. DecisionExpress is a weekly bulletin of recent decisions of Canadian federal courts and tribunals. It is an tool that processes judicial decisions automatically and makes the daily information used by jurists more accessible by presenting the legal record of the proceedings of federal courts in Canada as a table-style summary (Farzindar et al., 2004, Chieze et al. 2008). NLP Technologies in collaboration with researchers from the RALI at Université de Montréal have developed TransLI to translate automatically the judgments from the Canadian Federal Courts. As it happens, for the new weekly published judgments, 75% of decisions are originally written in English 1 http://www.nlptechnologies.ca 2 http://rali.iro.umontreal.ca Machine Translation of Legal Information and Its Evaluation 2 and 25% in French. By law, the Federal Courts have to provide a translation in the other official language of Canada. The legal domain has continuous publishing and translation cycles, large volumes of digital content and growing demand to distribute more multilingual information. It is necessary to handle a high volume of translations quickly. Currently, a certified translation of a legal judgment takes several months to complete. Afterwards, there is a significant delay between the publication of a judgment in the original language and the availability of its human translation into the other official language. Initially, the goal of this work was to allow the court, during the few months when the official translation is pending, to publish automatically translated judgments and summaries with the appropriate caveat. Once the official translation would become available, the Court would replace the machine translations by the official ones. However, the high quality of the machine translation system obtained, developed and trained specifically on the Federal Courts corpora, opens further opportunities which are currently being investigated: machine translations could be considered as first drafts for official translations that would only need to be revised before their publication. This procedure would thus reduce the delay between the publication of the decision in the original language and its official translation. It would also provide opportunities for saving on the cost of translation. We evaluated the French and English output and performed a more detailed analysis of the modifications made to the translations by the evaluators in the context of a pilot study to be conducted in cooperation with the Federal Courts. 
This paper describes our statistical machine translation system, whose performance has been assessed with the usual automatic evaluation metrics. We also present the results of a manual evaluation of the translations and the result of a completed translation pilot project in a real context of publication of the federal courts of Canada. To our knowledge, this is the first attempt to build a large-scale translation system of complete judgments for eventual publication.",
"title": ""
},
{
"docid": "neg:1840393_6",
"text": "We initiate the study of secure multi-party computation (MPC) in a server-aided setting, where the parties have access to a single server that (1) does not have any input to the computation; (2) does not receive any output from the computation; but (3) has a vast (but bounded) amount of computational resources. In this setting, we are concerned with designing protocols that minimize the computation of the parties at the expense of the server. We develop new definitions of security for this server-aided setting that generalize the standard simulation-based definitions for MPC and allow us to formally capture the existence of dishonest but non-colluding participants. This requires us to introduce a formal characterization of non-colluding adversaries that may be of independent interest. We then design general and special-purpose server-aided MPC protocols that are more efficient (in terms of computation and communication) for the parties than the alternative of running a standard MPC protocol (i.e., without the server). Our main general-purpose protocol provides security when there is at least one honest party with input. We also construct a new and efficient server-aided protocol for private set intersection and give a general transformation from any secure delegated computation scheme to a server-aided two-party protocol. ∗Microsoft Research. senyk@microsoft.com. †University of Calgary. pmohassel@cspc.ucalgary.ca. Work done while visiting Microsoft Research. ‡Columbia University. mariana@cs.columbia.edu. Work done as an intern at Microsoft Research.",
"title": ""
},
{
"docid": "neg:1840393_7",
"text": "This book contains materials that come out of the Artificial General Intelligence Research Institute (AGIRI) Workshop, held in May 20-21, 2006 at Washington DC. The theme of the workshop is “Transitioning from Narrow AI to Artificial General Intelligence.” In this introductory chapter, we will clarify the notion of “Artificial General Intelligence”, briefly survey the past and present situation of the field, analyze and refute some common objections and doubts regarding this area of research, and discuss what we believe needs to be addressed by the field as a whole in the near future. Finally, we will briefly summarize the contents of the other chapters in this collection.",
"title": ""
},
{
"docid": "neg:1840393_8",
"text": "In this letter, an ultra-wideband (UWB) bandpass filter (BPF) using stepped-impedance stub-loaded resonator (SISLR) is presented. Characterized by theoretical analysis, the proposed SISLR is found to have the advantage of providing more degrees of freedom to adjust the resonant frequencies. Besides, two transmission zeros can be created at both lower and upper sides of the passband. Benefiting from these features, a UWB BPF is then investigated by incorporating this SISLR and two aperture-backed interdigital coupled-lines. Finally, this filter is built and tested. The simulated and measured results are in good agreement with each other, showing good wideband filtering performance with sharp rejection skirts outside the passband.",
"title": ""
},
{
"docid": "neg:1840393_9",
"text": "Flow experience is often considered as an important standard of ideal user experience (UX). Till now, flow is mainly measured via self-report questionnaires, which cannot evaluate flow immediately and objectively. In this paper, we constructed a physiological evaluation model to evaluate flow in virtual reality (VR) game. The evaluation model consists of five first-level indicators and their respective second-level indicators. Then, we conducted an empirical experiment to test the effectiveness of partial indicators to predict flow experience. Most results supported the model and revealed that heart rate, interbeat interval, heart rate variability (HRV), low-frequency HRV (LF-HRV), high-frequency HRV (HF-HRV), and respiratory rate are all effective indicators in predicting flow experience. Further research should be conducted to improve the evaluation model and conclude practical implications in UX and VR game design.",
"title": ""
},
{
"docid": "neg:1840393_10",
"text": "We introduce two new methods of deriving the classical PCA in the framework of minimizing the mean square error upon performing a lower-dimensional approximation of the data. These methods are based on two forms of the mean square error function. One of the novelties of the presented methods is that the commonly employed process of subtraction of the mean of the data becomes part of the solution of the optimization problem and not a pre-analysis heuristic. We also derive the optimal basis and the minimum error of approximation in this framework and demonstrate the elegance of our solution in comparison with a recent solution in the framework.",
"title": ""
},
{
"docid": "neg:1840393_11",
"text": "The cerebellum is involved in learning and memory of sensory motor skills. However, the way this process takes place in local microcircuits is still unclear. The initial proposal, casted into the Motor Learning Theory, suggested that learning had to occur at the parallel fiber–Purkinje cell synapse under supervision of climbing fibers. However, the uniqueness of this mechanism has been questioned, and multiple forms of long-term plasticity have been revealed at various locations in the cerebellar circuit, including synapses and neurons in the granular layer, molecular layer and deep-cerebellar nuclei. At present, more than 15 forms of plasticity have been reported. There has been a long debate on which plasticity is more relevant to specific aspects of learning, but this question turned out to be hard to answer using physiological analysis alone. Recent experiments and models making use of closed-loop robotic simulations are revealing a radically new view: one single form of plasticity is insufficient, while altogether, the different forms of plasticity can explain the multiplicity of properties characterizing cerebellar learning. These include multi-rate acquisition and extinction, reversibility, self-scalability, and generalization. Moreover, when the circuit embeds multiple forms of plasticity, it can easily cope with multiple behaviors endowing therefore the cerebellum with the properties needed to operate as an effective generalized forward controller.",
"title": ""
},
{
"docid": "neg:1840393_12",
"text": "BACKGROUND\nThis study analyzes the problems and consequences associated with prolonged use of laparoscopic instruments (dissector and needle holder) and equipments.\n\n\nMETHODS\nA total of 390 questionnaires were sent to the laparoscopic surgeons of the Spanish Health System. Questions were structured on the basis of 4 categories: demographics, assessment of laparoscopic dissector, assessment of needle holder, and other informations.\n\n\nRESULTS\nA response rate of 30.26% was obtained. Among them, handle shape of laparoscopic instruments was identified as the main element that needed to be improved. Furthermore, the type of instrument, electrocautery pedals and height of the operating table were identified as major causes of forced positions during the use of both surgical instruments.\n\n\nCONCLUSIONS\nAs far as we know, this is the largest Spanish survey conducted on this topic. From this survey, some ergonomic drawbacks have been identified in: (a) the instruments' design, (b) the operating tables, and (c) the posture of the surgeons.",
"title": ""
},
{
"docid": "neg:1840393_13",
"text": "When building agents and synthetic characters, and in order to achieve believability, we must consider the emotional relations established between users and characters, that is, we must consider the issue of \"empathy\". Defined in broad terms as \"An observer reacting emotionally because he perceives that another is experiencing or about to experience an emotion\", empathy is an important element to consider in the creation of relations between humans and agents. In this paper we will focus on the role of empathy in the construction of synthetic characters, providing some requirements for such construction and illustrating the presented concepts with a specific system called FearNot!. FearNot! was developed to address the difficult and often devastating problem of bullying in schools. By using role playing and empathic synthetic characters in a 3D environment, FearNot! allows children from 8 to 12 to experience a virtual scenario where they can witness (in a third-person perspective) bullying situations. To build empathy into FearNot! we have considered the following components: agentýs architecture; the charactersý embodiment and emotional expression; proximity with the user and emotionally charged situations.We will describe how these were implemented in FearNot! and report on the preliminary results we have with it.",
"title": ""
},
{
"docid": "neg:1840393_14",
"text": "We present Wave menus, a variant of multi-stroke marking menus designed for improving the novice mode of marking while preserving their efficiency in the expert mode of marking. Focusing on the novice mode, a criteria-based analysis of existing marking menus motivates the design of Wave menus. Moreover a user experiment is presented that compares four hierarchical marking menus in novice mode. Results show that Wave and compound-stroke menus are significantly faster and more accurate than multi-stroke menus in novice mode, while it has been shown that in expert mode the multi-stroke menus and therefore the Wave menus outperform the compound-stroke menus. Wave menus also require significantly less screen space than compound-stroke menus. As a conclusion, Wave menus offer the best performance for both novice and expert modes in comparison with existing multi-level marking menus, while requiring less screen space than compound-stroke menus.",
"title": ""
},
{
"docid": "neg:1840393_15",
"text": "Growing network traffic brings huge pressure to the server cluster. Using load balancing technology in server cluster becomes the choice of most enterprises. Because of many limitations, the development of the traditional load balancing technology has encountered bottlenecks. This has forced companies to find new load balancing method. Software Defined Network (SDN) provides a good method to solve the load balancing problem. In this paper, we implemented two load balancing algorithm that based on the latest SDN network architecture. The first one is a static scheduling algorithm and the second is a dynamic scheduling algorithm. Our experiments show that the performance of the dynamic algorithm is better than the static algorithm.",
"title": ""
},
{
"docid": "neg:1840393_16",
"text": "This study presents a new approach to solve the well-known power system Economic Load Dispatch problem (ED) using a hybrid algorithm consisting of Genetic Algorithm (GA), Pattern Search (PS) and Sequential Quadratic Programming (SQP). GA is the main optimizer of this algorithm, whereas PS and SQP are used to fine-tune the results obtained from the GA, thereby increasing solution confidence. To test the effectiveness of this approach it was applied to various test systems. Furthermore, the convergence characteristics and robustness of the proposed method have been explored through comparisons with results reported in literature. The outcome is very encouraging and suggests that the hybrid GA-PS-SQP algorithm is very effective in solving the power system economic load dispatch problem.",
"title": ""
},
{
"docid": "neg:1840393_17",
"text": "We propose a graphical model for representing networks of stochastic processes, the minimal generative model graph. It is based on reduced factorizations of the joint distribution over time. We show that under appropriate conditions, it is unique and consistent with another type of graphical model, the directed information graph, which is based on a generalization of Granger causality. We demonstrate how directed information quantifies Granger causality in a particular sequential prediction setting. We also develop efficient methods to estimate the topological structure from data that obviate estimating the joint statistics. One algorithm assumes upper bounds on the degrees and uses the minimal dimension statistics necessary. In the event that the upper bounds are not valid, the resulting graph is nonetheless an optimal approximation in terms of Kullback-Leibler (KL) divergence. Another algorithm uses near-minimal dimension statistics when no bounds are known, but the distribution satisfies a certain criterion. Analogous to how structure learning algorithms for undirected graphical models use mutual information estimates, these algorithms use directed information estimates. We characterize the sample-complexity of two plug-in directed information estimators and obtain confidence intervals. For the setting when point estimates are unreliable, we propose an algorithm that uses confidence intervals to identify the best approximation that is robust to estimation error. Last, we demonstrate the effectiveness of the proposed algorithms through the analysis of both synthetic data and real data from the Twitter network. In the latter case, we identify which news sources influence users in the network by merely analyzing tweet times.",
"title": ""
},
{
"docid": "neg:1840393_18",
"text": "Coarse-grained semantic categories such as supersenses have proven useful for a range of downstream tasks such as question answering or machine translation. To date, no effort has been put into integrating the supersenses into distributional word representations. We present a novel joint embedding model of words and supersenses, providing insights into the relationship between words and supersenses in the same vector space. Using these embeddings in a deep neural network model, we demonstrate that the supersense enrichment leads to a significant improvement in a range of downstream classification tasks.",
"title": ""
},
{
"docid": "neg:1840393_19",
"text": "Mobile Edge Computing (MEC) consists of deploying computing resources (CPU, storage) at the edge of mobile networks; typically near or with eNodeBs. Besides easing the deployment of applications and services requiring low access to the remote server, such as Virtual Reality and Vehicular IoT, MEC will enable the development of context-aware and context-optimized applications, thanks to the Radio API (e.g. information on user channel quality) exposed by eNodeBs. Although ETSI is defining the architecture specifications, solutions to integrate MEC to the current 3GPP architecture are still open. In this paper, we fill this gap by proposing and implementing a Software Defined Networking (SDN)-based MEC framework, compliant with both ETSI and 3GPP architectures. It provides the required data-plane flexibility and programmability, which can on-the-fly improve the latency as a function of the network deployment and conditions. To illustrate the benefit of using SDN concept for the MEC framework, we present the details of software architecture as well as performance evaluations.",
"title": ""
}
] |
1840394 | A parallel spatial data analysis infrastructure for the cloud | [
{
"docid": "pos:1840394_0",
"text": "Prior research shows that database system performance is dominated by off-chip data stalls, resulting in a concerted effort to bring data into on-chip caches. At the same time, high levels of integration have enabled the advent of chip multiprocessors and increasingly large (and slow) on-chip caches. These two trends pose the imminent technical and research challenge of adapting high-performance data management software to a shifting hardware landscape. In this paper we characterize the performance of a commercial database server running on emerging chip multiprocessor technologies. We find that the major bottleneck of current software is data cache stalls, with L2 hit stalls rising from oblivion to become the dominant execution time component in some cases. We analyze the source of this shift and derive a list of features for future database designs to attain maximum",
"title": ""
}
] | [
{
"docid": "neg:1840394_0",
"text": "The transpalatal arch might be one of the most common intraoral auxiliary fixed appliances used in orthodontics in order to provide dental anchorage. The aim of the present case report is to describe a case in which an adult patient with a tendency to class III, palatal compression, and bilateral posterior crossbite was treated with double transpalatal bars in order to control the torque of both the first and the second molars. Double transpalatal arches on both first and second maxillary molars are a successful appliance in order to control the posterior sectors and improve the torsion of the molars. They allow the professional to gain overbite instead of losing it as may happen with other techniques and avoid enlarging of Wilson curve, obtaining a more stable occlusion without the need for extra help from bone anchorage.",
"title": ""
},
{
"docid": "neg:1840394_1",
"text": "Analogy and similarity are often assumed to be distinct psychological processes. In contrast to this position, the authors suggest that both similarity and analogy involve a process of structural alignment and mapping, that is, that similarity is like analogy. In this article, the authors first describe the structure-mapping process as it has been worked out for analogy. Then, this view is extended to similarity, where it is used to generate new predictions. Finally, the authors explore broader implications of structural alignment for psychological processing.",
"title": ""
},
{
"docid": "neg:1840394_2",
"text": "Currently, color image encryption is important to ensure its confidentiality during its transmission on insecure networks or its storage. The fact that chaotic properties are related with cryptography properties in confusion, diffusion, pseudorandom, etc., researchers around the world have presented several image (gray and color) encryption algorithms based on chaos, but almost all them with serious security problems have been broken with the powerful chosen/known plain image attack. In this work, we present a color image encryption algorithm based on total plain image characteristics (to resist a chosen/known plain image attack), and 1D logistic map with optimized distribution (for fast encryption process) based on Murillo-Escobar's algorithm (Murillo-Escobar et al. (2014) [38]). The security analysis confirms that the RGB image encryption is fast and secure against several known attacks; therefore, it can be implemented in real-time applications where a high security is required. & 2014 Published by Elsevier B.V.",
"title": ""
},
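The preceding passage describes keystream generation from a 1D logistic map seeded with plain-image statistics. The sketch below is only an illustrative toy, not the authors' algorithm: it assumes an 8-bit image held in a NumPy array, and the parameters `x0`, `r` and the pixel-sum tweak are hypothetical choices meant to show how a plaintext-dependent keystream could be formed.

```python
import numpy as np

def logistic_keystream(x0: float, r: float, n: int) -> np.ndarray:
    """Iterate the logistic map x <- r*x*(1-x) and quantize each state to a byte."""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def encrypt(image: np.ndarray, x0: float = 0.37, r: float = 3.99) -> np.ndarray:
    """Toy plaintext-dependent stream cipher: perturb the seed with the pixel sum, then XOR."""
    flat = image.astype(np.uint8).ravel()
    # Plaintext-dependent seed tweak (illustrates using total plain-image characteristics).
    tweak = (int(flat.sum()) % 100000) / 1e6
    ks = logistic_keystream((x0 + tweak) % 1.0, r, flat.size)
    return (flat ^ ks).reshape(image.shape)
```

In a real design the plaintext-dependent tweak would have to be conveyed to the receiver as part of the key material, and a single low-precision logistic map is not cryptographically strong on its own.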
{
"docid": "neg:1840394_3",
"text": "A Software Defined Network (SDN) is a new network architecture that provides central control over the network. Although central control is the major advantage of SDN, it is also a single point of failure if it is made unreachable by a Distributed Denial of Service (DDoS) Attack. To mitigate this threat, this paper proposes to use the central control of SDN for attack detection and introduces a solution that is effective and lightweight in terms of the resources that it uses. More precisely, this paper shows how DDoS attacks can exhaust controller resources and provides a solution to detect such attacks based on the entropy variation of the destination IP address. This method is able to detect DDoS within the first five hundred packets of the attack traffic.",
"title": ""
},
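The entropy-based detection idea in the preceding passage can be made concrete with a short sketch. It is not the paper's exact procedure: the 500-packet window matches the passage, but the entropy threshold is an ad-hoc assumption that would need tuning per deployment. The intuition is that when many packets converge on one victim address, the entropy of the destination-IP distribution drops sharply.

```python
import math
from collections import Counter
from typing import Iterable

WINDOW = 500        # packets per window (as in the passage)
THRESHOLD = 1.0     # hypothetical entropy threshold in bits

def window_entropy(dst_ips: Iterable[str]) -> float:
    """Shannon entropy (in bits) of the destination-IP distribution in one window."""
    counts = Counter(dst_ips)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def is_suspicious(window: list) -> bool:
    """Low entropy means traffic concentrates on few destinations -- a possible DDoS sign."""
    return len(window) >= WINDOW and window_entropy(window) < THRESHOLD
```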
{
"docid": "neg:1840394_4",
"text": "The main aim of this position paper is to identify and briefly discuss design-related issues commonly encountered with the implementation of both behaviour change techniques and persuasive design principles in physical activity smartphone applications. These overlapping issues highlight a disconnect in the perspectives held between health scientists' focus on the application of behaviour change theories and components of interventions, and the information systems designers' focus on the application of persuasive design principles as software design features intended to motivate, facilitate and support individuals through the behaviour change process. A review of the current status and some examples of these different perspectives is presented, leading to the identification of the main issues associated with this disconnection. The main behaviour change technique issues identified are concerned with: the fragmented integration of techniques, hindrances in successful use, diversity of user needs and preferences, and the informational flow and presentation. The main persuasive design issues identified are associated with: the fragmented application of persuasive design principles, hindrances in successful usage, diversity of user needs and preferences, informational flow and presentation, the lack of pragmatic guidance for application designers, and the maintenance of immersive user interactions and engagements. Given the common overlap across four of the identified issues, it is concluded that a methodological approach for integrating these two perspectives, and their associated issues, into a consolidated framework is necessary to address the apparent disconnect between these two independently-established, yet complementary fields.",
"title": ""
},
{
"docid": "neg:1840394_5",
"text": "Öz Supplier evaluation and selection includes both qualitative and quantitative criteria and it is considered as a complex Multi Criteria Decision Making (MCDM) problem. Uncertainty and impreciseness of data is an integral part of decision making process for a real life application. The fuzzy set theory allows making decisions under uncertain environment. In this paper, a trapezoidal type 2 fuzzy multicriteria decision making methods based on TOPSIS is proposed to select convenient supplier under vague information. The proposed method is applied to the supplier selection process of a textile firm in Turkey. In addition, the same problem is solved with type 1 fuzzy TOPSIS to confirm the findings of type 2 fuzzy TOPSIS. A sensitivity analysis is conducted to observe how the decision changes under different scenarios. Results show that the presented type 2 fuzzy TOPSIS method is more appropriate and effective to handle the supplier selection in uncertain environment. Tedarikçi değerlendirme ve seçimi, nitel ve nicel çok sayıda faktörün değerlendirilmesini gerektiren karmaşık birçok kriterli karar verme problemi olarak görülmektedir. Gerçek hayatta, belirsizlikler ve muğlaklık bir karar verme sürecinin ayrılmaz bir parçası olarak karşımıza çıkmaktadır. Bulanık küme teorisi, belirsizlik durumunda karar vermemize imkân sağlayan metotlardan bir tanesidir. Bu çalışmada, ikizkenar yamuk tip 2 bulanık TOPSIS yöntemi kısaca tanıtılmıştır. Tanıtılan yöntem, Türkiye’de bir tekstil firmasının tedarikçi seçimi problemine uygulanmıştır. Ayrıca, tip 2 bulanık TOPSIS yönteminin sonuçlarını desteklemek için aynı problem tip 1 bulanık TOPSIS ile de çözülmüştür. Duyarlılık analizi yapılarak önerilen çözümler farklı senaryolar altında incelenmiştir. Duyarlılık analizi sonuçlarına göre tip 2 bulanık TOPSIS daha efektif ve uygun çözümler üretmektedir.",
"title": ""
},
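As a rough companion to the preceding passage, the sketch below implements plain (crisp) TOPSIS rather than the trapezoidal type-2 fuzzy variant the passage describes; the decision matrix, weights, and criteria directions are made-up illustrative values.

```python
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Return closeness coefficients (higher is better) for each alternative (row)."""
    # Vector-normalize each criterion column, then apply the weights.
    norm = matrix / np.linalg.norm(matrix, axis=0)
    v = norm * weights
    # Ideal / anti-ideal points depend on whether a criterion is benefit or cost type.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Hypothetical supplier data: rows = suppliers, columns = (quality, delivery, cost).
scores = topsis(
    matrix=np.array([[7.0, 9.0, 120.0], [8.0, 7.0, 100.0], [9.0, 6.0, 140.0]]),
    weights=np.array([0.5, 0.3, 0.2]),
    benefit=np.array([True, True, False]),  # cost is a "smaller is better" criterion
)
print(scores.argmax())  # index of the preferred supplier under these assumptions
```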
{
"docid": "neg:1840394_6",
"text": "A new approach for ranking fuzzy numbers based on a distance measure is introduced. A new class of distance measures for interval numbers that takes into account all the points in both intervals is developed -rst, and then it is used to formulate the distance measure for fuzzy numbers. The approach is illustrated by numerical examples, showing that it overcomes several shortcomings such as the indiscriminative and counterintuitive behavior of several existing fuzzy ranking approaches. c © 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840394_7",
"text": "The spread of antibiotic-resistant bacteria is a growing problem and a public health issue. In recent decades, various genetic mechanisms involved in the spread of resistance genes among bacteria have been identified. Integrons - genetic elements that acquire, exchange, and express genes embedded within gene cassettes (GC) - are one of these mechanisms. Integrons are widely distributed, especially in Gram-negative bacteria; they are carried by mobile genetic elements, plasmids, and transposons, which promote their spread within bacterial communities. Initially studied mainly in the clinical setting for their involvement in antibiotic resistance, their role in the environment is now an increasing focus of attention. The aim of this review is to provide an in-depth analysis of recent studies of antibiotic-resistance integrons in the environment, highlighting their potential involvement in antibiotic-resistance outside the clinical context. We will focus particularly on the impact of human activities (agriculture, industries, wastewater treatment, etc.).",
"title": ""
},
{
"docid": "neg:1840394_8",
"text": "Human-robotinteractionis becominganincreasinglyimportant researcharea. In this paper , we presentour work on designinga human-robotsystemwith adjustableautonomy anddescribenotonly theprototypeinterfacebut alsothecorrespondingrobot behaviors. In our approach,we grant the humanmeta-level control over the level of robot autonomy, but we allow the robot a varying amountof self-direction with eachlevel. Within this framework of adjustableautonomy, we explore appropriateinterfaceconceptsfor controlling multiple robotsfrom multiple platforms.",
"title": ""
},
{
"docid": "neg:1840394_9",
"text": "A natural image usually conveys rich semantic content and can be viewed from different angles. Existing image description methods are largely restricted by small sets of biased visual paragraph annotations, and fail to cover rich underlying semantics. In this paper, we investigate a semi-supervised paragraph generative framework that is able to synthesize diverse and semantically coherent paragraph descriptions by reasoning over local semantic regions and exploiting linguistic knowledge. The proposed Recurrent Topic-Transition Generative Adversarial Network (RTT-GAN) builds an adversarial framework between a structured paragraph generator and multi-level paragraph discriminators. The paragraph generator generates sentences recurrently by incorporating region-based visual and language attention mechanisms at each step. The quality of generated paragraph sentences is assessed by multi-level adversarial discriminators from two aspects, namely, plausibility at sentence level and topic-transition coherence at paragraph level. The joint adversarial training of RTT-GAN drives the model to generate realistic paragraphs with smooth logical transition between sentence topics. Extensive quantitative experiments on image and video paragraph datasets demonstrate the effectiveness of our RTT-GAN in both supervised and semi-supervised settings. Qualitative results on telling diverse stories for an image verify the interpretability of RTT-GAN.",
"title": ""
},
{
"docid": "neg:1840394_10",
"text": "Current top-N recommendation methods compute the recommendations by taking into account only relations between pairs of items, thus leading to potential unused information when higher-order relations between the items exist. Past attempts to incorporate the higherorder information were done in the context of neighborhood-based methods. However, in many datasets, they did not lead to significant improvements in the recommendation quality. We developed a top-N recommendation method that revisits the issue of higher-order relations, in the context of the model-based Sparse LInear Method (SLIM). The approach followed (Higher-Order Sparse LInear Method, or HOSLIM) learns two sparse aggregation coefficient matrices S and S′ that capture the item-item and itemset-item similarities, respectively. Matrix S′ allows HOSLIM to capture higher-order relations, whose complexity is determined by the length of the itemset. Following the spirit of SLIM, matrices S and S′ are estimated using an elastic net formulation, which promotes model sparsity. We conducted extensive experiments which show that higher-order interactions exist in real datasets and when incorporated in the HOSLIM framework, the recommendations made are improved. The experimental results show that the greater the presence of higher-order relations, the more substantial the improvement in recommendation quality is, over the best existing methods. In addition, our experiments show that the performance of HOSLIM remains good when we select S′ such that its number of nonzeros is comparable to S, which reduces the time required to compute the recommendations.",
"title": ""
},
{
"docid": "neg:1840394_11",
"text": "Fast Downward is a classical planning system based on heuris tic search. It can deal with general deterministic planning problems encoded in the propos itional fragment of PDDL2.2, including advanced features like ADL conditions and effects and deriv ed predicates (axioms). Like other well-known planners such as HSP and FF, Fast Downward is a pro gression planner, searching the space of world states of a planning task in the forward direct ion. However, unlike other PDDL planning systems, Fast Downward does not use the propositional P DDL representation of a planning task directly. Instead, the input is first translated into an alternative representation called multivalued planning tasks , which makes many of the implicit constraints of a propositi nal planning task explicit. Exploiting this alternative representatio n, Fast Downward uses hierarchical decompositions of planning tasks for computing its heuristic fun ction, called thecausal graph heuristic , which is very different from traditional HSP-like heuristi cs based on ignoring negative interactions of operators. In this article, we give a full account of Fast Downward’s app roach to solving multi-valued planning tasks. We extend our earlier discussion of the caus al graph heuristic to tasks involving axioms and conditional effects and present some novel techn iques for search control that are used within Fast Downward’s best-first search algorithm: preferred operatorstransfer the idea of helpful actions from local search to global best-first search, deferred evaluationof heuristic functions mitigates the negative effect of large branching factors on earch performance, and multi-heuristic best-first searchcombines several heuristic evaluation functions within a s ingle search algorithm in an orthogonal way. We also describe efficient data structu es for fast state expansion ( successor generatorsandaxiom evaluators ) and present a new non-heuristic search algorithm called focused iterative-broadening search , which utilizes the information encoded in causal graphs in a ovel way. Fast Downward has proven remarkably successful: It won the “ classical” (i. e., propositional, non-optimising) track of the 4th International Planning Co mpetition at ICAPS 2004, following in the footsteps of planners such as FF and LPG. Our experiments show that it also performs very well on the benchmarks of the earlier planning competitions a d provide some insights about the usefulness of the new search enhancements.",
"title": ""
},
{
"docid": "neg:1840394_12",
"text": "Antennas implanted in a human body are largely applicable to hyperthermia and biotelemetry. To make practical use of antennas inside a human body, resonance characteristics of the implanted antennas and their radiation signature outside the body must be evaluated through numerical analysis and measurement setup. Most importantly, the antenna must be designed with an in-depth consideration given to its surrounding environment. In this paper, the spherical dyadic Green's function (DGF) expansions and finite-difference time-domain (FDTD) code are applied to analyze the electromagnetic characteristics of dipole antennas and low-profile patch antennas implanted in the human head and body. All studies to characterize and design the implanted antennas are performed at the biomedical frequency band of 402-405 MHz. By comparing the results from two numerical methodologies, the accuracy of the spherical DGF application for a dipole antenna at the center of the head is evaluated. We also consider how much impact a shoulder has on the performance of the dipole inside the head using FDTD. For the ease of the design of implanted low-profile antennas, simplified planar geometries based on a real human body are proposed. Two types of low-profile antennas, i.e., a spiral microstrip antenna and a planar inverted-F antenna, with superstrate dielectric layers are initially designed for medical devices implanted in the chest of the human body using FDTD simulations. The radiation performances of the designed low-profile antennas are estimated in terms of radiation patterns, radiation efficiency, and specific absorption rate. Maximum available power calculated to characterize the performance of a communication link between the designed antennas and an exterior antenna show how sensitive receivers are required to build a reliable telemetry link.",
"title": ""
},
{
"docid": "neg:1840394_13",
"text": "Ensuring the efficient and robust operation of distributed computational infrastructures is critical, given that their scale and overall complexity is growing at an alarming rate and that their management is rapidly exceeding human capability. Clustering analysis can be used to find patterns and trends in system operational data, as well as highlight deviations from these patterns. Such analysis can be essential for verifying the correctness and efficiency of the operation of the system, as well as for discovering specific situations of interest, such as anomalies or faults, that require appropriate management actions.\n This work analyzes the automated application of clustering for online system management, from the point of view of the suitability of different clustering approaches for the online analysis of system data in a distributed environment, with minimal prior knowledge and within a timeframe that allows the timely interpretation of and response to clustering results. For this purpose, we evaluate DOC (Decentralized Online Clustering), a clustering algorithm designed to support data analysis for autonomic management, and compare it to existing and widely used clustering algorithms. The comparative evaluations will show that DOC achieves a good balance in the trade-offs inherent in the challenges for this type of online management.",
"title": ""
},
{
"docid": "neg:1840394_14",
"text": "We introduce Similarity Group Proposal Network (SGPN), a simple and intuitive deep learning framework for 3D object instance segmentation on point clouds. SGPN uses a single network to predict point grouping proposals and a corresponding semantic class for each proposal, from which we can directly extract instance segmentation results. Important to the effectiveness of SGPN is its novel representation of 3D instance segmentation results in the form of a similarity matrix that indicates the similarity between each pair of points in embedded feature space, thus producing an accurate grouping proposal for each point. Experimental results on various 3D scenes show the effectiveness of our method on 3D instance segmentation, and we also evaluate the capability of SGPN to improve 3D object detection and semantic segmentation results. We also demonstrate its flexibility by seamlessly incorporating 2D CNN features into the framework to boost performance.",
"title": ""
},
{
"docid": "neg:1840394_15",
"text": "Robot technology is emerging for applications in disaster prevention with devices such as fire-fighting robots, rescue robots, and surveillance robots. In this paper, we suggest an portable fire evacuation guide robot system that can be thrown into a fire site to gather environmental information, search displaced people, and evacuate them from the fire site. This spool-like small and light mobile robot can be easily carried and remotely controlled by means of a laptop-sized tele-operator. It contains the following functional units: a camera to capture the fire site; sensors to gather temperature data, CO gas, and O2 concentrations; and a microphone with speaker for emergency voice communications between firefighter and victims. The robot's design gives its high-temperature protection, excellent waterproofing, and high impact resistance. Laboratory tests were performed for evaluating the performance of the proposed evacuation guide robot system.",
"title": ""
},
{
"docid": "neg:1840394_16",
"text": "We investigated the possibility of using a machine-learning scheme in conjunction with commercial wearable EEG-devices for translating listener's subjective experience of music into scores that can be used in popular on-demand music streaming services. Our study resulted into two variants, differing in terms of performance and execution time, and hence, subserving distinct applications in online streaming music platforms. The first method, NeuroPicks, is extremely accurate but slower. It is based on the well-established neuroscientific concepts of brainwave frequency bands, activation asymmetry index and cross frequency coupling (CFC). The second method, NeuroPicksVQ, offers prompt predictions of lower credibility and relies on a custom-built version of vector quantization procedure that facilitates a novel parameterization of the music-modulated brainwaves. Beyond the feature engineering step, both methods exploit the inherent efficiency of extreme learning machines (ELMs) so as to translate, in a personalized fashion, the derived patterns into a listener's score. NeuroPicks method may find applications as an integral part of contemporary music recommendation systems, while NeuroPicksVQ can control the selection of music tracks. Encouraging experimental results, from a pragmatic use of the systems, are presented.",
"title": ""
},
{
"docid": "neg:1840394_17",
"text": "Online participation and content contribution are pillars of the Internet revolution and are core activities for younger generations online. This study investigated participation patterns, users' contributions and gratification mechanisms, as well as the gender differences of Israeli learners in the Scratch online community. The findings showed that: (1) Participation patterns reveal two distinct participation types \"project creators\" and \"social participators\", suggesting different users' needs. (2) Community members gratified \"project creators\" and \"social participators\" for their investment – using several forms of community feedback. Gratification at the user level was given both to \"project creators\" and \"social participators\" – community members added them as friends. The majority of the variance associated with community feedback was explained by seven predictors. However, gratification at the project level was different for the two participation types active \"project creators\" received less feedback on their projects, while active \"social participators\" received more. Project feedback positively correlated with social participation investment, but negatively correlated with project creation investment. A possible explanation is that community members primarily left feedback to their friends. (3) No gender differences were found in participation patterns or in project complexity, suggesting that Scratch provides similar opportunities to both genders in programming, learning, and participation.",
"title": ""
},
{
"docid": "neg:1840394_18",
"text": "The purpose of this study is to analyze three separate constructs (demographics, study habits, and technology familiarity) that can be used to identify university students’ characteristics and the relationship between each of these constructs with student achievement. A survey method was used for the current study, and the participants included 2,949 university students from 11 faculties of a public university in Turkey. A survey was used to collect data, and the data were analyzed using the chi-squared automatic interaction detection (CHAID) algorithm. The results of the study revealed that female students are significantly more successful than male students. In addition, the more introverted students, whether male or female, have higher grade point averages (GPAs) than those students who are more extroverted. Furthermore, male students who use the Internet more than 22 hours per week and use the Internet for up to six different aims have the lowest GPAs among all students, while female students who use the Internet for up to 21 hours per week have the highest GPAs among all students. The implications of these findings are also discussed herein.",
"title": ""
}
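The CHAID algorithm mentioned in the preceding passage grows a decision tree by repeatedly applying chi-squared tests of independence to candidate splits. A single such test can be sketched as follows; the contingency table of gender versus achievement band is fabricated purely for illustration.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = (female, male), columns = (low GPA, medium GPA, high GPA).
table = [[120, 430, 310],
         [200, 480, 210]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.4g}")
# CHAID would keep the split on gender only if p falls below its significance threshold.
```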
] |
1840395 | To catch a chorus: using chroma-based representations for audio thumbnailing | [
{
"docid": "pos:1840395_0",
"text": "Systems to automatically provide a representative summary or `Key Phrase' of a piece of music are described. For a `rock' song with `verse' and `chorus' sections, we aim to return the chorus or in any case the most repeated and hence most memorable section. The techniques are less applicable to music with more complicated structure although possibly our general framework could still be used with di erent heuristics. Our process consists of three steps. First we parameterize the song into features. Next we use these features to discover the song structure, either by clustering xed-length segments or by training a hidden Markov model (HMM) for the song. Finally, given this structure, we use heuristics to choose the Key Phrase. Results for summaries of 18 Beatles songs evaluated by ten users show that the technique based on clustering is superior to the HMM approach and to choosing the Key Phrase at random.",
"title": ""
}
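A drastically simplified sketch of the clustering route described in the preceding passage, not the authors' exact pipeline: split a chroma feature matrix into fixed-length segments, cluster them, and return a segment from the largest cluster as the candidate 'Key Phrase'. The librosa and scikit-learn calls are standard, but the segment length and cluster count are arbitrary choices, and the sketch assumes the song yields at least `n_clusters` segments.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

def key_phrase_segment(path: str, seg_frames: int = 200, n_clusters: int = 4) -> int:
    """Return the starting frame of a representative (most repeated) segment of the song."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)           # shape: (12, n_frames)
    n_segs = chroma.shape[1] // seg_frames
    segs = np.stack([chroma[:, i * seg_frames:(i + 1) * seg_frames].mean(axis=1)
                     for i in range(n_segs)])                   # one 12-dim vector per segment
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(segs)
    biggest = np.bincount(labels).argmax()                      # most repeated material
    first = int(np.argmax(labels == biggest))                   # first segment of that cluster
    return first * seg_frames
```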
] | [
{
"docid": "neg:1840395_0",
"text": "Smart agriculture is an emerging concept, because IOT sensors are capable of providing information about agriculture fields and then act upon based on the user input. In this Paper, it is proposed to develop a Smart agriculture System that uses advantages of cutting edge technologies such as Arduino, IOT and Wireless Sensor Network. The paper aims at making use of evolving technology i.e. IOT and smart agriculture using automation. Monitoring environmental conditions is the major factor to improve yield of the efficient crops. The feature of this paper includes development of a system which can monitor temperature, humidity, moisture and even the movement of animals which may destroy the crops in agricultural field through sensors using Arduino board and in case of any discrepancy send a SMS notification as well as a notification on the application developed for the same to the farmer’s smartphone using Wi-Fi/3G/4G. The system has a duplex communication link based on a cellularInternet interface that allows for data inspection and irrigation scheduling to be programmed through an android application. Because of its energy autonomy and low cost, the system has the potential to be useful in water limited geographically isolated areas.",
"title": ""
},
{
"docid": "neg:1840395_1",
"text": "CONTEXT\nYouth worldwide play violent video games many hours per week. Previous research suggests that such exposure can increase physical aggression.\n\n\nOBJECTIVE\nWe tested whether high exposure to violent video games increases physical aggression over time in both high- (United States) and low- (Japan) violence cultures. We hypothesized that the amount of exposure to violent video games early in a school year would predict changes in physical aggressiveness assessed later in the school year, even after statistically controlling for gender and previous physical aggressiveness.\n\n\nDESIGN\nIn 3 independent samples, participants' video game habits and physically aggressive behavior tendencies were assessed at 2 points in time, separated by 3 to 6 months.\n\n\nPARTICIPANTS\nOne sample consisted of 181 Japanese junior high students ranging in age from 12 to 15 years. A second Japanese sample consisted of 1050 students ranging in age from 13 to 18 years. The third sample consisted of 364 United States 3rd-, 4th-, and 5th-graders ranging in age from 9 to 12 years. RESULTS. Habitual violent video game play early in the school year predicted later aggression, even after controlling for gender and previous aggressiveness in each sample. Those who played a lot of violent video games became relatively more physically aggressive. Multisample structure equation modeling revealed that this longitudinal effect was of a similar magnitude in the United States and Japan for similar-aged youth and was smaller (but still significant) in the sample that included older youth.\n\n\nCONCLUSIONS\nThese longitudinal results confirm earlier experimental and cross-sectional studies that had suggested that playing violent video games is a significant risk factor for later physically aggressive behavior and that this violent video game effect on youth generalizes across very different cultures. As a whole, the research strongly suggests reducing the exposure of youth to this risk factor.",
"title": ""
},
{
"docid": "neg:1840395_2",
"text": "In this paper, we present a novel approach of human activity prediction. Human activity prediction is a probabilistic process of inferring ongoing activities from videos only containing onsets (i.e. the beginning part) of the activities. The goal is to enable early recognition of unfinished activities as opposed to the after-the-fact classification of completed activities. Activity prediction methodologies are particularly necessary for surveillance systems which are required to prevent crimes and dangerous activities from occurring. We probabilistically formulate the activity prediction problem, and introduce new methodologies designed for the prediction. We represent an activity as an integral histogram of spatio-temporal features, efficiently modeling how feature distributions change over time. The new recognition methodology named dynamic bag-of-words is developed, which considers sequential nature of human activities while maintaining advantages of the bag-of-words to handle noisy observations. Our experiments confirm that our approach reliably recognizes ongoing activities from streaming videos with a high accuracy.",
"title": ""
},
{
"docid": "neg:1840395_3",
"text": "The Web so far has been incredibly successful at delivering information to human users. So successful actually, that there is now an urgent need to go beyond a browsing human. Unfortunately, the Web is not yet a well organized repository of nicely structured documents but rather a conglomerate of volatile HTML pages. To address this problem, we present the World Wide Web Wrapper Factory (W4F), a toolkit for the generation of wrappers for Web sources, that offers: (1) an expressive language to specify the extraction of complex structures from HTML pages; (2) a declarative mapping to various data formats like XML; (3) some visual tools to make the engineering of wrappers faster and easier.",
"title": ""
},
{
"docid": "neg:1840395_4",
"text": "This paper deals with the estimation of the channel impulse response (CIR) in orthogonal frequency division multiplexed (OFDM) systems. In particular, we focus on two pilot-aided schemes: the maximum likelihood estimator (MLE) and the Bayesian minimum mean square error estimator (MMSEE). The advantage of the former is that it is simpler to implement as it needs no information on the channel statistics. On the other hand, the MMSEE is expected to have better performance as it exploits prior information about the channel. Theoretical analysis and computer simulations are used in the comparisons. At SNR values of practical interest, the two schemes are found to exhibit nearly equal performance, provided that the number of pilot tones is sufficiently greater than the CIRs length. Otherwise, the MMSEE is superior. In any case, the MMSEE is more complex to implement.",
"title": ""
},
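To make the two estimators in the preceding passage concrete, here is a sketch of the simpler idea, a least-squares pilot estimate followed by interpolation across subcarriers. The pilot pattern and sizes are invented for illustration; the Bayesian MMSE variant would additionally require the channel correlation matrix and noise variance, which this sketch does not model.

```python
import numpy as np

def ls_channel_estimate(rx_pilots: np.ndarray,
                        tx_pilots: np.ndarray,
                        pilot_idx: np.ndarray,
                        n_subcarriers: int) -> np.ndarray:
    """LS estimate at pilot tones, then linear interpolation over all subcarriers."""
    h_pilot = rx_pilots / tx_pilots                       # per-pilot LS estimate Y_p / X_p
    k = np.arange(n_subcarriers)
    # Interpolate real and imaginary parts separately (np.interp is real-valued).
    h_real = np.interp(k, pilot_idx, h_pilot.real)
    h_imag = np.interp(k, pilot_idx, h_pilot.imag)
    return h_real + 1j * h_imag

# Toy usage with 64 subcarriers and pilots on every 8th tone (hypothetical numbers).
pilot_idx = np.arange(0, 64, 8)
tx = np.ones(pilot_idx.size, dtype=complex)               # known unit-amplitude pilots
rx = tx * (0.8 + 0.3j) + 0.01 * np.random.randn(pilot_idx.size)
h_hat = ls_channel_estimate(rx, tx, pilot_idx, 64)
```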
{
"docid": "neg:1840395_5",
"text": "Design iteration time in SoC design flow is reduced through performance exploration at a higher level of abstraction. This paper proposes an accurate and fast performance analysis method in early stage of design process using a behavioral model written in C/C++ language. We made a cycle-accurate but fast and flexible compiled instruction set simulator (ISS) and IP models that represent hardware functionality and performance. System performance analyzer configured by the target communication architecture analyzes the performance utilizing event-traces obtained by running the ISS and IP models. This solution is automated and implemented in the tool, HIPA. We obtain diverse performance profiling results and achieve 95% accuracy using an abstracted C model. We also achieve about 20 times speed-up over corresponding co-simulation tools.",
"title": ""
},
{
"docid": "neg:1840395_6",
"text": "In this paper, we present a new and significant theoretical discovery. If the absolute height difference between base station (BS) antenna and user equipment (UE) antenna is larger than zero, then the network capacity performance in terms of the area spectral efficiency (ASE) will continuously decrease as the BS density increases for ultra-dense (UD) small cell networks (SCNs). This performance behavior has a tremendous impact on the deployment of UD SCNs in the 5th- generation (5G) era. Network operators may invest large amounts of money in deploying more network infrastructure to only obtain an even worse network performance. Our study results reveal that it is a must to lower the SCN BS antenna height to the UE antenna height to fully achieve the capacity gains of UD SCNs in 5G. However, this requires a revolutionized approach of BS architecture and deployment, which is explored in this paper too.",
"title": ""
},
{
"docid": "neg:1840395_7",
"text": "We present a motion planner for autonomous highway driving that adapts the state lattice framework pioneered for planetary rover navigation to the structured environment of public roadways. The main contribution of this paper is a search space representation that allows the search algorithm to systematically and efficiently explore both spatial and temporal dimensions in real time. This allows the low-level trajectory planner to assume greater responsibility in planning to follow a leading vehicle, perform lane changes, and merge between other vehicles. We show that our algorithm can readily be accelerated on a GPU, and demonstrate it on an autonomous passenger vehicle.",
"title": ""
},
{
"docid": "neg:1840395_8",
"text": "This paper presents a differential low-noise highresolution switched-capacitor readout circuit that is intended for capacitive sensors. Amplitude modulation/demodulation and correlated double sampling are used to minimize the adverse effects of the amplifier offset and flicker (1/f) noise and improve the sensitivity of the readout circuit. In order to simulate the response of the readout circuit, a Verilog-A model is used to model the variable sense capacitor. The interface circuit is designed and laid out in a 0.8 µm CMOS process. Postlayout simulation results show that the readout interface is able to linearly resolve sense capacitance variation from 2.8 aF to 0.3 fF with a sensitivity of 7.88 mV/aF from a single 5V supply (the capacitance-to-voltage conversion is approximately linear for capacitance changes from 0.3 fF to~1.2 fF). The power consumption of the circuit is 9.38 mW.",
"title": ""
},
{
"docid": "neg:1840395_9",
"text": "Non-orthogonal multiple access (NOMA) is expected to be a promising multiple access technique for 5G networks due to its superior spectral efficiency. In this letter, the ergodic capacity maximization problem is first studied for the Rayleigh fading multiple-input multiple-output (MIMO) NOMA systems with statistical channel state information at the transmitter (CSIT). We propose both optimal and low complexity suboptimal power allocation schemes to maximize the ergodic capacity of MIMO NOMA system with total transmit power constraint and minimum rate constraint of the weak user. Numerical results show that the proposed NOMA schemes significantly outperform the traditional orthogonal multiple access scheme.",
"title": ""
},
{
"docid": "neg:1840395_10",
"text": "Electroencephalography (EEG) is the most popular brain activity recording technique used in wide range of applications. One of the commonly faced problems in EEG recordings is the presence of artifacts that come from sources other than brain and contaminate the acquired signals significantly. Therefore, much research over the past 15 years has focused on identifying ways for handling such artifacts in the preprocessing stage. However, this is still an active area of research as no single existing artifact detection/removal method is complete or universal. This article presents an extensive review of the existing state-of-the-art artifact detection and removal methods from scalp EEG for all potential EEG-based applications and analyses the pros and cons of each method. First, a general overview of the different artifact types that are found in scalp EEG and their effect on particular applications are presented. In addition, the methods are compared based on their ability to remove certain types of artifacts and their suitability in relevant applications (only functional comparison is provided not performance evaluation of methods). Finally, the future direction and expected challenges of current research is discussed. Therefore, this review is expected to be helpful for interested researchers who will develop and/or apply artifact handling algorithm/technique in future for their applications as well as for those willing to improve the existing algorithms or propose a new solution in this particular area of research.",
"title": ""
},
{
"docid": "neg:1840395_11",
"text": "BACKGROUND\nThis longitudinal community study assessed the prevalence and development of psychiatric disorders from age 9 through 16 years and examined homotypic and heterotypic continuity.\n\n\nMETHODS\nA representative population sample of 1420 children aged 9 to 13 years at intake were assessed annually for DSM-IV disorders until age 16 years.\n\n\nRESULTS\nAlthough 3-month prevalence of any disorder averaged 13.3% (95% confidence interval [CI], 11.7%-15.0%), during the study period 36.7% of participants (31% of girls and 42% of boys) had at least 1 psychiatric disorder. Some disorders (social anxiety, panic, depression, and substance abuse) increased in prevalence, whereas others, including separation anxiety disorder and attention-deficit/hyperactivity disorder (ADHD), decreased. Lagged analyses showed that children with a history of psychiatric disorder were 3 times more likely than those with no previous disorder to have a diagnosis at any subsequent wave (odds ratio, 3.7; 95% CI, 2.9-4.9; P<.001). Risk from a previous diagnosis was high among both girls and boys, but it was significantly higher among girls. Continuity of the same disorder (homotypic) was significant for all disorders except specific phobias. Continuity from one diagnosis to another (heterotypic) was significant from depression to anxiety and anxiety to depression, from ADHD to oppositional defiant disorder, and from anxiety and conduct disorder to substance abuse. Almost all the heterotypic continuity was seen in girls.\n\n\nCONCLUSIONS\nThe risk of having at least 1 psychiatric disorder by age 16 years is much higher than point estimates would suggest. Concurrent comorbidity and homotypic and heterotypic continuity are more marked in girls than in boys.",
"title": ""
},
{
"docid": "neg:1840395_12",
"text": "The present paper describes the development of a query focused multi-document automatic summarization. A graph is constructed, where the nodes are sentences of the documents and edge scores reflect the correlation measure between the nodes. The system clusters similar texts having related topical features from the graph using edge scores. Next, query dependent weights for each sentence are added to the edge score of the sentence and accumulated with the corresponding cluster score. Top ranked sentence of each cluster is identified and compressed using a dependency parser. The compressed sentences are included in the output summary. The inter-document cluster is revisited in order until the length of the summary is less than the maximum limit. The summarizer has been tested on the standard TAC 2008 test data sets of the Update Summarization Track. Evaluation of the summarizer yielded accuracy scores of 0.10317 (ROUGE-2) and 0.13998 (ROUGE–SU-4).",
"title": ""
},
{
"docid": "neg:1840395_13",
"text": "Shales of very low metamorphic grade from the 2.78 to 2.45 billion-year-old (Ga) Mount Bruce Supergroup, Pilbara Craton, Western Australia, were analyzed for solvent extractable hydrocarbons. Samples were collected from ten drill cores and two mines in a sampling area centered in the Hamersley Basin near Wittenoom and ranging 200 km to the southeast, 100 km to the southwest and 70 km to the northwest. Almost all analyzed kerogenous sedimentary rocks yielded solvent extractable organic matter. Concentrations of total saturated hydrocarbons were commonly in the range of 1 to 20 ppm ( g/g rock) but reached maximum values of 1000 ppm. The abundance of aromatic hydrocarbons was 1 to 30 ppm. Analysis of the extracts by gas chromatography-mass spectrometry (GC-MS) and GC-MS metastable reaction monitoring (MRM) revealed the presence of n-alkanes, midand end-branched monomethylalkanes, -cyclohexylalkanes, acyclic isoprenoids, diamondoids, trito pentacyclic terpanes, steranes, aromatic steroids and polyaromatic hydrocarbons. Neither plant biomarkers nor hydrocarbon distributions indicative of Phanerozoic contamination were detected. The host kerogens of the hydrocarbons were depleted in C by 2 to 21‰ relative ton-alkanes, a pattern typical of, although more extreme than, other Precambrian samples. Acyclic isoprenoids showed carbon isotopic depletion relative to n-alkanes and concentrations of 2 -methylhopanes were relatively high, features rarely observed in the Phanerozoic but characteristic of many other Precambrian bitumens. Molecular parameters, including sterane and hopane ratios at their apparent thermal maxima, condensate-like alkane profiles, high monoand triaromatic steroid maturity parameters, high methyladamantane and methyldiamantane indices and high methylphenanthrene maturity ratios, indicate thermal maturities in the wet-gas generation zone. Additionally, extracts from shales associated with iron ore deposits at Tom Price and Newman have unusual polyaromatic hydrocarbon patterns indicative of pyrolytic dealkylation. The saturated hydrocarbons and biomarkers in bitumens from the Fortescue and Hamersley Groups are characterized as ‘probably syngenetic with their Archean host rock’ based on their typical Precambrian molecular and isotopic composition, extreme maturities that appear consistent with the thermal history of the host sediments, the absence of biomarkers diagnostic of Phanerozoic age, the absence of younger petroleum source rocks in the basin and the wide geographic distribution of the samples. Aromatic hydrocarbons detected in shales associated with iron ore deposits at Mt Tom Price and Mt Whaleback are characterized as ‘clearly Archean’ based on their hypermature composition and covalent bonding to kerogen. Copyright © 2003 Elsevier Ltd",
"title": ""
},
{
"docid": "neg:1840395_14",
"text": "Wireless sensor networks for environmental monitoring and agricultural applications often face long-range requirements at low bit-rates together with large numbers of nodes. This paper presents the design and test of a novel wireless sensor network that combines a large radio range with very low power consumption and cost. Our asymmetric sensor network uses ultralow-cost 40 MHz transmitters and a sensitive software defined radio receiver with multichannel capability. Experimental radio range measurements in two different outdoor environments demonstrate a single-hop range of up to 1.8 km. A theoretical model for radio propagation at 40 MHz in outdoor environments is proposed and validated with the experimental measurements. The reliability and fidelity of network communication over longer time periods is evaluated with a deployment for distributed temperature measurements. Our results demonstrate the feasibility of the transmit-only low-frequency system design approach for future environmental sensor networks. Although there have been several papers proposing the theoretical benefits of this approach, to the best of our knowledge this is the first paper to provide experimental validation of such claims.",
"title": ""
},
{
"docid": "neg:1840395_15",
"text": "Image descriptors based on activations of Convolutional Neural Networks (CNNs) have become dominant in image retrieval due to their discriminative power, compactness of representation, and search efficiency. Training of CNNs, either from scratch or fine-tuning, requires a large amount of annotated data, where a high quality of annotation is often crucial. In this work, we propose to fine-tune CNNs for image retrieval on a large collection of unordered images in a fully automated manner. Reconstructed 3D models obtained by the state-of-the-art retrieval and structure-from-motion methods guide the selection of the training data. We show that both hard-positive and hard-negative examples, selected by exploiting the geometry and the camera positions available from the 3D models, enhance the performance of particular-object retrieval. CNN descriptor whitening discriminatively learned from the same training data outperforms commonly used PCA whitening. We propose a novel trainable Generalized-Mean (GeM) pooling layer that generalizes max and average pooling and show that it boosts retrieval performance. Applying the proposed method to the VGG network achieves state-of-the-art performance on the standard benchmarks: Oxford Buildings, Paris, and Holidays datasets.",
"title": ""
},
{
"docid": "neg:1840395_16",
"text": "Modeling, simulation and implementation of Voltage Source Inverter (VSI) fed closed loop control of 3-phase induction motor drive is presented in this paper. A mathematical model of the drive system is developed and is used for the simulation study. Simulation is carried out using Scilab/Scicos, which is free and open source software. The above said drive system is implemented in laboratory using a PC and an add-on card. In this study the air gap flux of the machine is kept constant by maintaining Volt/Hertz (v/f) ratio constant. The experimental transient responses of the drive system obtained for change in speed under no load as well as under load conditions are presented.",
"title": ""
},
{
"docid": "neg:1840395_17",
"text": "Introduction. Opinion mining has been receiving increasing attention from a broad range of scientific communities since early 2000s. The present study aims to systematically investigate the intellectual structure of opinion mining research. Method. Using topic search, citation expansion, and patent search, we collected 5,596 bibliographic records of opinion mining research. Then, intellectual landscapes, emerging trends, and recent developments were identified. We also captured domain-level citation trends, subject category assignment, keyword co-occurrence, document co-citation network, and landmark articles. Analysis. Our study was guided by scientometric approaches implemented in CiteSpace, a visual analytic system based on networks of co-cited documents. We also employed a dual-map overlay technique to investigate epistemological characteristics of the domain. Results. We found that the investigation of algorithmic and linguistic aspects of opinion mining has been of the community’s greatest interest to understand, quantify, and apply the sentiment orientation of texts. Recent thematic trends reveal that practical applications of opinion mining such as the prediction of market value and investigation of social aspects of product feedback have received increasing attention from the community. Conclusion. Opinion mining is fast-growing and still developing, exploring the refinements of related techniques and applications in a variety of domains. We plan to apply the proposed analytics to more diverse domains and comprehensive publication materials to gain more generalized understanding of the true structure of a science.",
"title": ""
},
{
"docid": "neg:1840395_18",
"text": "Three studies were conducted to test the hypothesis that high trait aggressive individuals are more affected by violent media than are low trait aggressive individuals. In Study 1, participants read film descriptions and then chose a film to watch. High trait aggressive individuals were more likely to choose a violent film to watch than were low trait aggressive individuals. In Study 2, participants reported their mood before and after the showing of a violet or nonviolent videotape. High trait aggressive individuals felt more angry after viewing the violent videotape than did low trait aggressive individuals. In Study 3, participants first viewed either a violent or a nonviolent videotape and then competed with an \"opponent\" on a reaction time task in which the loser received a blast of unpleasant noise. Videotape violence was more likely to increase aggression in high trait aggressive individuals than in low trait aggressive individuals.",
"title": ""
},
{
"docid": "neg:1840395_19",
"text": "This paper addresses the problem of mapping natural language sentences to lambda–calculus encodings of their meaning. We describe a learning algorithm that takes as input a training set of sentences labeled with expressions in the lambda calculus. The algorithm induces a grammar for the problem, along with a log-linear model that represents a distribution over syntactic and semantic analyses conditioned on the input sentence. We apply the method to the task of learning natural language interfaces to databases and show that the learned parsers outperform previous methods in two benchmark database domains.",
"title": ""
}
] |
1840396 | To BLOB or Not To BLOB: Large Object Storage in a Database or a Filesystem? | [
{
"docid": "pos:1840396_0",
"text": "A reimplementation of the UNIX file system is described. The reimplementation provides substantially higher throughput rates by using more flexible allocation policies that allow better locality of reference and can be adapted to a wide range of peripheral and processor characteristics. The new file system clusters data that is sequentially accessed and provides tw o block sizes to allo w fast access to lar ge files while not wasting large amounts of space for small files. File access rates of up to ten times f aster than the traditional UNIX file system are e xperienced. Longneeded enhancements to the programmers’ interface are discussed. These include a mechanism to place advisory locks on files, extensions of the name space across file systems, the ability to use long file names, and provisions for administrati ve control of resource usage. Revised February 18, 1984 CR",
"title": ""
}
] | [
{
"docid": "neg:1840396_0",
"text": "Advances in tourism economics have enabled us to collect massive amounts of travel tour data. If properly analyzed, this data can be a source of rich intelligence for providing real-time decision making and for the provision of travel tour recommendations. However, tour recommendation is quite different from traditional recommendations, because the tourist's choice is directly affected by the travel cost, which includes the financial cost and the time. To that end, in this paper, we provide a focused study of cost-aware tour recommendation. Along this line, we develop two cost-aware latent factor models to recommend travel packages by considering both the travel cost and the tourist's interests. Specifically, we first design a cPMF model, which models the tourist's cost with a 2-dimensional vector. Also, in this cPMF model, the tourist's interests and the travel cost are learnt by exploring travel tour data. Furthermore, in order to model the uncertainty in the travel cost, we further introduce a Gaussian prior into the cPMF model and develop the GcPMF model, where the Gaussian prior is used to express the uncertainty of the travel cost. Finally, experiments on real-world travel tour data show that the cost-aware recommendation models outperform state-of-the-art latent factor models with a significant margin. Also, the GcPMF model with the Gaussian prior can better capture the impact of the uncertainty of the travel cost, and thus performs better than the cPMF model.",
"title": ""
},
{
"docid": "neg:1840396_1",
"text": "Smart Cities rely on the use of ICTs for a more efficient and intelligent use of resources, whilst improving citizens' quality of life and reducing the environmental footprint. As far as the livability of cities is concerned, traffic is one of the most frequent and complex factors directly affecting citizens. Particularly, drivers in search of a vacant parking spot are a non-negligible source of atmospheric and acoustic pollution. Although some cities have installed sensor-based vacant parking spot detectors in some neighbourhoods, the cost of this approach makes it unfeasible at large scale. As an approach to implement a sustainable solution to the vacant parking spot detection problem in urban environments, this work advocates fusing the information from small-scale sensor-based detectors with that obtained from exploiting the widely-deployed video surveillance camera networks. In particular, this paper focuses on how video analytics can be exploited as a prior step towards Smart City solutions based on data fusion. Through a set of experiments carefully planned to replicate a real-world scenario, the vacant parking spot detection success rate of the proposed system is evaluated through a critical comparison of local and global visual features (either alone or fused at feature level) and different classifier systems applied to the task. Furthermore, the system is tested under setup scenarios of different complexities, and experimental results show that while local features are best when training with small amounts of highly accurate on-site data, they are outperformed by their global counterparts when training with more samples from an external vehicle database.",
"title": ""
},
{
"docid": "neg:1840396_2",
"text": "Assessment of right ventricular afterload in systolic heart failure seems mandatory as it plays an important role in predicting outcome. The purpose of this study is to estimate pulmonary vascular elastance as a reliable surrogate for right ventricular afterload in systolic heart failure. Forty-two patients with systolic heart failure (ejection fraction <35%) were studied by right heart catheterization. Pulmonary arterial elastance was calculated with three methods: Ea(PV) = (end-systolic pulmonary arterial pressure)/stroke volume; Ea*(PV) = (mean pulmonary arterial pressure - pulmonary capillary wedge pressure)/stroke volume; and PPSV = pulmonary arterial pulse pressure (systolic - diastolic)/stroke volume. These measures were compared with pulmonary vascular resistance ([mean pulmonary arterial pressure - pulmonary capillary wedge pressure]/CO). All estimates of pulmonary vascular elastance were significantly correlated with pulmonary vascular resistance (r=0.772, 0.569, and 0.935 for Ea(PV), Ea*(PV), and PPSV, respectively; P <.001). Pulmonary vascular elastance can easily be estimated by routine right heart catheterization in systolic heart failure and seems promising in assessment of right ventricular afterload.",
"title": ""
},
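The three estimates in the preceding passage are simple ratios of routinely measured catheterization values, so they can be written directly. The sketch below assumes pressures in mmHg and stroke volume in mL, approximates end-systolic pulmonary arterial pressure by systolic PAP, and uses invented example numbers.

```python
def pulmonary_elastances(spap: float, dpap: float, mpap: float,
                         pcwp: float, stroke_volume: float) -> dict:
    """Elastance surrogates from the passage: Ea(PV), Ea*(PV) and pulse pressure / SV."""
    return {
        "Ea_PV": spap / stroke_volume,                  # end-systolic PAP (approx. sPAP) / SV
        "Ea_star_PV": (mpap - pcwp) / stroke_volume,    # (mean PAP - wedge pressure) / SV
        "PP_SV": (spap - dpap) / stroke_volume,         # pulmonary pulse pressure / SV
    }

# Hypothetical patient: sPAP 45, dPAP 20, mPAP 30, PCWP 18 mmHg, SV 55 mL.
print(pulmonary_elastances(45, 20, 30, 18, 55))
```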
{
"docid": "neg:1840396_3",
"text": "Placing the DRAM in the same package as a processor enables several times higher memory bandwidth than conventional off-package DRAM. Yet, the latency of in-package DRAM is not appreciably lower than that of off-package DRAM. A promising use of in-package DRAM is as a large cache. Unfortunately, most previous DRAM cache designs optimize mainly for cache hit latency and do not consider bandwidth efficiency as a first-class design constraint. Hence, as we show in this paper, these designs are suboptimal for use with in-package DRAM.\n We propose a new DRAM cache design, Banshee, that optimizes for both in-package and off-package DRAM bandwidth efficiency without degrading access latency. Banshee is based on two key ideas. First, it eliminates the tag lookup overhead by tracking the contents of the DRAM cache using TLBs and page table entries, which is efficiently enabled by a new lightweight TLB coherence protocol we introduce. Second, it reduces unnecessary DRAM cache replacement traffic with a new bandwidth-aware frequency-based replacement policy. Our evaluations show that Banshee significantly improves performance (15% on average) and reduces DRAM traffic (35.8% on average) over the best-previous latency-optimized DRAM cache design.",
"title": ""
},
{
"docid": "neg:1840396_4",
"text": "The automotive industry could be facing a situation of profound change and opportunity in the coming decades. There are a number of influencing factors such as increasing urban and aging populations, self-driving cars, 3D parts printing, energy innovation, and new models of transportation service delivery (Zipcar, Uber). The connected car means that vehicles are now part of the connected world, continuously Internet-connected, generating and transmitting data, which on the one hand can be helpfully integrated into applications, like real-time traffic alerts broadcast to smartwatches, but also raises security and privacy concerns. This paper explores the automotive connected world, and describes five killer QS (Quantified Self)-auto sensor applications that link quantified-self sensors (sensors that measure the personal biometrics of individuals like heart rate) and automotive sensors (sensors that measure driver and passenger biometrics or quantitative automotive performance metrics like speed and braking activity). The applications are fatigue detection, real-time assistance for parking and accidents, anger management and stress reduction, keyless authentication and digital identity verification, and DIY diagnostics. These kinds of applications help to demonstrate the benefit of connected world data streams in the automotive industry and beyond where, more fundamentally for human progress, the automation of both physical and now cognitive tasks is underway.",
"title": ""
},
{
"docid": "neg:1840396_5",
"text": "In-wheel-motor drive electric vehicle (EV) is an innovative configuration, in which each wheel is driven individually by an electric motor. It is possible to use an electronic differential (ED) instead of the heavy mechanical differential because of the fast response time of the motor. A new ED control approach for a two-in-wheel-motor drive EV is devised based on the fuzzy logic control method. The fuzzy logic method employs to estimate the slip rate of each wheel considering the complex and nonlinear of the system. Then, the ED system distributes torque and power to each motor according to requirements. The effectiveness and validation of the proposed control method are evaluated in the Matlab/Simulink environment. Simulation results show that the new ED control system can keep the slip rate within the optimized range, ensuring the stability of the vehicle either in a straight or a curve lane.",
"title": ""
},
{
"docid": "neg:1840396_6",
"text": "With Android being the most widespread mobile platform, protecting it against malicious applications is essential. Android users typically install applications from large remote repositories, which provides ample opportunities for malicious newcomers. In this paper, we propose a simple, and yet highly effective technique for detecting malicious Android applications on a repository level. Our technique performs automatic classification based on tracking system calls while applications are executed in a sandbox environment. We implemented the technique in a tool called MALINE, and performed extensive empirical evaluation on a suite of around 12,000 applications. The evaluation yields an overall detection accuracy of 93% with a 5% benign application classification error, while results are improved to a 96% detection accuracy with up-sampling. This indicates that our technique is viable to be used in practice. Finally, we show that even simplistic feature choices are highly effective, suggesting that more heavyweight approaches should be thoroughly (re)evaluated. Android Malware Detection Based on System Calls Marko Dimjašević, Simone Atzeni, Zvonimir Rakamarić University of Utah, USA {marko,simone,zvonimir}@cs.utah.edu Ivo Ugrina University of Zagreb, Croatia",
"title": ""
},
{
"docid": "neg:1840396_7",
"text": "In this work, we propose Residual Attention Network, a convolutional neural network using attention mechanism which can incorporate with state-of-art feed forward network architecture in an end-to-end training fashion. Our Residual Attention Network is built by stacking Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers going deeper. Inside each Attention Module, bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process. Importantly, we propose attention residual learning to train very deep Residual Attention Networks which can be easily scaled up to hundreds of layers. Extensive analyses are conducted on CIFAR-10 and CIFAR-100 datasets to verify the effectiveness of every module mentioned above. Our Residual Attention Network achieves state-of-the-art object recognition performance on three benchmark datasets including CIFAR-10 (3.90% error), CIFAR-100 (20.45% error) and ImageNet (4.8% single model and single crop, top-5 error). Note that, our method achieves 0.6% top-1 accuracy improvement with 46% trunk depth and 69% forward FLOPs comparing to ResNet-200. The experiment also demonstrates that our network is robust against noisy labels.",
"title": ""
},
{
"docid": "neg:1840396_8",
"text": "BACKGROUND\nAcute exacerbations of chronic obstructive pulmonary disease (COPD) are associated with accelerated decline in lung function, diminished quality of life, and higher mortality. Proactively monitoring patients for early signs of an exacerbation and treating them early could prevent these outcomes. The emergence of affordable wearable technology allows for nearly continuous monitoring of heart rate and physical activity as well as recording of audio which can detect features such as coughing. These signals may be able to be used with predictive analytics to detect early exacerbations. Prior to full development, however, it is important to determine the feasibility of using wearable devices such as smartwatches to intensively monitor patients with COPD.\n\n\nOBJECTIVE\nWe conducted a feasibility study to determine if patients with COPD would wear and maintain a smartwatch consistently and whether they would reliably collect and transmit sensor data.\n\n\nMETHODS\nPatients with COPD were recruited from 3 hospitals and were provided with a smartwatch that recorded audio, heart rate, and accelerations. They were asked to wear and charge it daily for 90 days. They were also asked to complete a daily symptom diary. At the end of the study period, participants were asked what would motivate them to regularly use a wearable for monitoring of their COPD.\n\n\nRESULTS\nOf 28 patients enrolled, 16 participants completed the full 90 days. The average age of participants was 68.5 years, and 36% (10/28) were women. Survey, heart rate, and activity data were available for an average of 64.5, 65.1, and 60.2 days respectively. Technical issues caused heart rate and activity data to be unavailable for approximately 13 and 17 days, respectively. Feedback provided by participants indicated that they wanted to actively engage with the smartwatch and receive feedback about their activity, heart rate, and how to better manage their COPD.\n\n\nCONCLUSIONS\nSome patients with COPD will wear and maintain smartwatches that passively monitor audio, heart rate, and physical activity, and wearables were able to reliably capture near-continuous patient data. Further work is necessary to increase acceptability and improve the patient experience.",
"title": ""
},
{
"docid": "neg:1840396_9",
"text": "We present a new fast algorithm for background modeling and subtraction. Sample background values at each pixel are quantized into codebooks which represent a compressed form of background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time under limited memory. Our method can handle scenes containing moving backgrounds or illumination variations (shadows and highlights), and it achieves robust detection for compressed videos. We compared our method with other multimode modeling techniques.",
"title": ""
},
{
"docid": "neg:1840396_10",
"text": "Today, the concept of brain connectivity plays a central role in the neuroscience. While functional connectivity is defined as the temporal coherence between the activities of different brain areas, the effective connectivity is defined as the simplest brain circuit that would produce the same temporal relationship as observed experimentally between cortical sites. The most used method to estimate effective connectivity in neuroscience is the structural equation modeling (SEM), typically used on data related to the brain hemodynamic behavior. However, the use of hemodynamic measures limits the temporal resolution on which the brain process can be followed. The present research proposes the use of the SEM approach on the cortical waveforms estimated from the high-resolution EEG data, which exhibits a good spatial resolution and a higher temporal resolution than hemodynamic measures. We performed a simulation study, in which different main factors were systematically manipulated in the generation of test signals, and the errors in the estimated connectivity were evaluated by the analysis of variance (ANOVA). Such factors were the signal-to-noise ratio and the duration of the simulated cortical activity. Since SEM technique is based on the use of a model formulated on the basis of anatomical and physiological constraints, different experimental conditions were analyzed, in order to evaluate the effect of errors made in the a priori model formulation on its performances. The feasibility of the proposed approach has been shown in a human study using high-resolution EEG recordings related to finger tapping movements.",
"title": ""
},
{
"docid": "neg:1840396_11",
"text": "Previous phase I-II clinical trials have shown that recombinant human erythropoietin (rHuEpo) can ameliorate anemia in a portion of patients with multiple myeloma (MM) and non-Hodgkin's lymphoma (NHL). Therefore, we performed a randomized controlled multicenter study to define the optimal initial dosage and to identify predictors of response to rHuEpo. A total of 146 patients who had hemoglobin (Hb) levels < or = 11 g/dL and who had no need for transfusion at the time of enrollment entered this trial. Patients were randomized to receive 1,000 U (n = 31), 2,000 U (n = 29), 5,000 U (n = 31), or 10,000 U (n = 26) of rHuEpo daily subcutaneously for 8 weeks or to receive no therapy (n = 29). Of the patients, 84 suffered from MM and 62 from low- to intermediate-grade NHL, including chronic lymphocytic leukemia; 116 of 146 (79%) received chemotherapy during the study. The mean baseline Hb level was 9.4 +/- 1.0 g/dL. The median serum Epo level was 32 mU/mL, and endogenous Epo production was found to be defective in 77% of the patients, as judged by a value for the ratio of observed-to-predicted serum Epo levels (O/P ratio) of < or = 0.9. An intention-to-treat analysis was performed to evaluate treatment efficacy. The median average increase in Hb levels per week was 0.04 g/dL in the control group and -0.04 (P = .57), 0.22 (P = .05), 0.43 (P = .01), and 0.58 (P = .0001) g/dL in the 1,000 U, 2,000 U, 5,000 U, and 10,000 U groups, respectively (P values versus control). The probability of response (delta Hb > or = 2 g/dL) increased steadily and, after 8 weeks, reached 31% (2,000 U), 61% (5,000 U), and 62% (10,000 U), respectively. Regression analysis using Cox's proportional hazard model and classification and regression tree analysis showed that serum Epo levels and the O/P ratio were the most important factors predicting response in patients receiving 5,000 or 10,000 U. Approximately three quarters of patients presenting with Epo levels inappropriately low for the degree of anemia responded to rHuEpo, whereas only one quarter of those with adequate Epo levels did so. Classification and regression tree analysis also showed that doses of 2,000 U daily were effective in patients with an average platelet count greater than 150 x 10(9)/L. About 50% of these patients are expected to respond to rHuEpo. Thus, rHuEpo was safe and effective in ameliorating the anemia of MM and NHL patients who showed defective endogenous Epo production. From a practical point of view, we conclude that the decision to use rHuEpo in an individual anemic patient with MM or NHL should be based on serum Epo levels, whereas the choice of the initial dosage should be based on residual marrow function.",
"title": ""
},
{
"docid": "neg:1840396_12",
"text": "Visible light LEDs, due to their numerous advantages, are expected to become the dominant indoor lighting technology. These lights can also be switched ON/OFF at high frequency, enabling their additional use for wireless communication and indoor positioning. In this article, visible LED light--based indoor positioning systems are surveyed and classified into two broad categories based on the receiver structure. The basic principle and architecture of each design category, along with various position computation algorithms, are discussed and compared. Finally, several new research, implementation, commercialization, and standardization challenges are identified and highlighted for this relatively novel and interesting indoor localization technology.",
"title": ""
},
{
"docid": "neg:1840396_13",
"text": "The present work describes a classification schema for irony detection in Greek political tweets. Our hypothesis states that humorous political tweets could predict actual election results. The irony detection concept is based on subjective perceptions, so only relying on human-annotator driven labor might not be the best route. The proposed approach relies on limited labeled training data, thus a semi-supervised approach is followed, where collective-learning algorithms take both labeled and unlabeled data into consideration. We compare the semi-supervised results with the supervised ones from a previous research of ours. The hypothesis is evaluated via a correlation study between the irony that a party receives on Twitter, its respective actual election results during the Greek parliamentary elections of May 2012, and the difference between these results and the ones of the preceding elections of 2009. & 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840396_14",
"text": "To predict the uses of new technology, we present an approach grounded in science and technology studies (STS) that examines the social uses of current technology. As part of ongoing research on next-generation mobile imaging applications, we conducted an empirical study of the social uses of personal photography. We identify three: memory, creating and maintaining relationships, and self-expression. The roles of orality and materiality in these uses help us explain the observed resistances to intangible digital images and to assigning metadata and annotations. We conclude that this approach is useful for understanding the potential uses of technology and for design.",
"title": ""
},
{
"docid": "neg:1840396_15",
"text": "WORK-LIFE BALANCE means bringing work, whether done on the job or at home, and leisure time into balance to live life to its fullest. It doesn’t mean that you spend half of your life working and half of it playing; instead, it means balancing the two to achieve harmony in physical, emotional, and spiritual health. In today’s economy, can nurses achieve work-life balance? Although doing so may be difficult, the consequences to our health can be enormous if we don’t try. This article describes some of the stresses faced by nurses and tips for attaining a healthy balance of work and leisure.",
"title": ""
},
{
"docid": "neg:1840396_16",
"text": "In this paper we give the details of the numerical solution of a three-dimensional multispecies diffuse interface model of tumor growth, which was derived in (Wise et al., J. Theor. Biol. 253 (2008)) and used to study the development of glioma in (Frieboes et al., NeuroImage 37 (2007) and tumor invasion in (Bearer et al., Cancer Research, 69 (2009)) and (Frieboes et al., J. Theor. Biol. 264 (2010)). The model has a thermodynamic basis, is related to recently developed mixture models, and is capable of providing a detailed description of tumor progression. It utilizes a diffuse interface approach, whereby sharp tumor boundaries are replaced by narrow transition layers that arise due to differential adhesive forces among the cell-species. The model consists of fourth-order nonlinear advection-reaction-diffusion equations (of Cahn-Hilliard-type) for the cell-species coupled with reaction-diffusion equations for the substrate components. Numerical solution of the model is challenging because the equations are coupled, highly nonlinear, and numerically stiff. In this paper we describe a fully adaptive, nonlinear multigrid/finite difference method for efficiently solving the equations. We demonstrate the convergence of the algorithm and we present simulations of tumor growth in 2D and 3D that demonstrate the capabilities of the algorithm in accurately and efficiently simulating the progression of tumors with complex morphologies.",
"title": ""
},
{
"docid": "neg:1840396_17",
"text": "Data visualization and feature selection methods are proposed based on the )oint mutual information and ICA. The visualization methods can find many good 2-D projections for high dimensional data interpretation, which cannot be easily found by the other existing methods. The new variable selection method is found to be better in eliminating redundancy in the inputs than other methods based on simple mutual information. The efficacy of the methods is illustrated on a radar signal analysis problem to find 2-D viewing coordinates for data visualization and to select inputs for a neural network classifier.",
"title": ""
},
{
"docid": "neg:1840396_18",
"text": "The fifth generation wireless 5G development initiative is based upon 4G, which at present is struggling to meet its performance goals. The comparison between 3G and 4G wireless communication systems in relation to its architecture, speed, frequency band, switching design basis and forward error correction is studied, and were discovered that their performances are still unable to solve the unending problems of poor coverage, bad interconnectivity, poor quality of service and flexibility. An ideal 5G model to accommodate the challenges and shortfalls of 3G and 4G deployments is discussed as well as the significant system improvements on the earlier wireless technologies. The radio channel propagation characteristics for 4G and 5G systems is discussed. Major advantages of 5G network in providing myriads of services to end users personalization, terminal and network heterogeneity, intelligence networking and network convergence among other benefits are highlighted.The significance of the study is evaluated for a fast and effective connection and communication of devices like mobile phones and computers, including the capability of supporting and allowing a highly flexible network connectivity.",
"title": ""
},
{
"docid": "neg:1840396_19",
"text": "We have developed a multithreaded implementation of breadth-first search (BFS) of a sparse graph using the Cilk++ extensions to C++. Our PBFS program on a single processor runs as quickly as a standar. C++ breadth-first search implementation. PBFS achieves high work-efficiency by using a novel implementation of a multiset data structure, called a \"bag,\" in place of the FIFO queue usually employed in serial breadth-first search algorithms. For a variety of benchmark input graphs whose diameters are significantly smaller than the number of vertices -- a condition met by many real-world graphs -- PBFS demonstrates good speedup with the number of processing cores.\n Since PBFS employs a nonconstant-time \"reducer\" -- \"hyperobject\" feature of Cilk++ -- the work inherent in a PBFS execution depends nondeterministically on how the underlying work-stealing scheduler load-balances the computation. We provide a general method for analyzing nondeterministic programs that use reducers. PBFS also is nondeterministic in that it contains benign races which affect its performance but not its correctness. Fixing these races with mutual-exclusion locks slows down PBFS empirically, but it makes the algorithm amenable to analysis. In particular, we show that for a graph G=(V,E) with diameter D and bounded out-degree, this data-race-free version of PBFS algorithm runs it time O((V+E)/P + Dlg3(V/D)) on P processors, which means that it attains near-perfect linear speedup if P << (V+E)/Dlg3(V/D).",
"title": ""
}
] |
1840397 | Two axes orthogonal drive transmission for omnidirectional crawler with surface contact | [
{
"docid": "pos:1840397_0",
"text": "Holonomic omnidirectional mobile robots are useful because of their high level of mobility in narrow or crowded areas, and omnidirectional robots equipped with normal tires are desired for their ability to surmount difference in level as well as their vibration suppression and ride comfort. A caster-drive mechanism using normal tires has been developed to realize a holonomic omnidiredctional robot, but some problems has remain. Here we describe effective systems to control the caster-drive wheels of an omnidirectional mobile robot. We propose a Differential-Drive Steering System (DDSS) using differential gearing to improve the operation ratio of motors. The DDSS generates driving and steering torque effectively from two motors. Simulation and experimental results show that the proposed system is effective for holonomic omnidirectional mobile robots.",
"title": ""
},
{
"docid": "pos:1840397_1",
"text": "In this paper, a mobile robot with a tetrahedral shape for its basic structure is presented as a thrown robot for search and rescue robot application. The Tetrahedral Mobile Robot has its body in the center of the whole structure. The driving parts that produce the propelling force are located at each corner. As a driving wheel mechanism, we have developed the \"Omni-Ball\" with one active and two passive rotational axes, which are explained in detail. An actual prototype model has been developed to illustrate the concept and to perform preliminary motion experiments, through which the basic performance of the Tetrahedral Mobile Robot was confirmed",
"title": ""
}
] | [
{
"docid": "neg:1840397_0",
"text": "Purpose – The purpose of this paper is to distinguish and describe knowledge management (KM) technologies according to their support for strategy. Design/methodology/approach – This study employed an ontology development method to describe the relations between technology, KM and strategy, and to categorize available KM technologies according to those relations. Ontologies are formal specifications of concepts in a domain and their inter-relationships, and can be used to facilitate common understanding and knowledge sharing. The study focused particularly on two sub-domains of the KM field: KM strategies and KM technologies. Findings – ’’KM strategy’’ has three meanings in the literature: approach to KM, knowledge strategy, and KM implementation strategy. Also, KM technologies support strategy via KM initiatives based on particular knowledge strategies and approaches to KM. The study distinguishes three types of KM technologies: component technologies, KM applications, and business applications. They all can be described in terms of ’’creation’’ and ’’transfer’’ knowledge strategies, and ’’personalization’’ and ’’codification’’ approaches to KM. Research limitations/implications – The resulting framework suggests that KM technologies can be analyzed better in the context of KM initiatives, instead of the usual approach associating them with knowledge processes. KM initiatives provide the background and contextual elements necessary to explain technology adoption and use. Practical implications – The framework indicates three alternative modes for organizational adoption of KM technologies: custom development of KM systems from available component technologies; purchase of KM-specific applications; or purchase of business-driven applications that embed KM functionality. It also lists adequate technologies and provides criteria for selection in any of the cases. Originality/value – Among the many studies analyzing the role of technology in KM, an association with strategy has been missing. This paper contributes to filling this gap, integrating diverse contributions via a clearer definition of concepts and a visual representation of their relationships. This use of ontologies as a method, instead of an artifact, is also uncommon in the literature.",
"title": ""
},
{
"docid": "neg:1840397_1",
"text": "Low-level saliency cues or priors do not produce good enough saliency detection results especially when the salient object presents in a low-contrast background with confusing visual appearance. This issue raises a serious problem for conventional approaches. In this paper, we tackle this problem by proposing a multi-context deep learning framework for salient object detection. We employ deep Convolutional Neural Networks to model saliency of objects in images. Global context and local context are both taken into account, and are jointly modeled in a unified multi-context deep learning framework. To provide a better initialization for training the deep neural networks, we investigate different pre-training strategies, and a task-specific pre-training scheme is designed to make the multi-context modeling suited for saliency detection. Furthermore, recently proposed contemporary deep models in the ImageNet Image Classification Challenge are tested, and their effectiveness in saliency detection are investigated. Our approach is extensively evaluated on five public datasets, and experimental results show significant and consistent improvements over the state-of-the-art methods.",
"title": ""
},
{
"docid": "neg:1840397_2",
"text": "Pseudomonas aeruginosa thrives in many aqueous environments and is an opportunistic pathogen that can cause both acute and chronic infections. Environmental conditions and host defenses cause differing stresses on the bacteria, and to survive in vastly different environments, P. aeruginosa must be able to adapt to its surroundings. One strategy for bacterial adaptation is to self-encapsulate with matrix material, primarily composed of secreted extracellular polysaccharides. P. aeruginosa has the genetic capacity to produce at least three secreted polysaccharides; alginate, Psl, and Pel. These polysaccharides differ in chemical structure and in their biosynthetic mechanisms. Since alginate is often associated with chronic pulmonary infections, its biosynthetic pathway is the best characterized. However, alginate is only produced by a subset of P. aeruginosa strains. Most environmental and other clinical isolates secrete either Pel or Psl. Little information is available on the biosynthesis of these polysaccharides. Here, we review the literature on the alginate biosynthetic pathway, with emphasis on recent findings describing the structure of alginate biosynthetic proteins. This information combined with the characterization of the domain architecture of proteins encoded on the Psl and Pel operons allowed us to make predictive models for the biosynthesis of these two polysaccharides. The results indicate that alginate and Pel share certain features, including some biosynthetic proteins with structurally or functionally similar properties. In contrast, Psl biosynthesis resembles the EPS/CPS capsular biosynthesis pathway of Escherichia coli, where the Psl pentameric subunits are assembled in association with an isoprenoid lipid carrier. These models and the environmental cues that cause the cells to produce predominantly one polysaccharide over the others are subjects of current investigation.",
"title": ""
},
{
"docid": "neg:1840397_3",
"text": "In this paper, we propose a low-rank representation with symmetric constraint (LRRSC) method for robust subspace clustering. Given a collection of data points approximately drawn from multiple subspaces, the proposed technique can simultaneously recover the dimension and members of each subspace. LRRSC extends the original low-rank representation algorithm by integrating a symmetric constraint into the low-rankness property of high-dimensional data representation. The symmetric low-rank representation, which preserves the subspace structures of high-dimensional data, guarantees weight consistency for each pair of data points so that highly correlated data points of subspaces are represented together. Moreover, it can be efficiently calculated by solving a convex optimization problem. We provide a rigorous proof for minimizing the nuclear-norm regularized least square problem with a symmetric constraint. The affinity matrix for spectral clustering can be obtained by further exploiting the angular information of the principal directions of the symmetric low-rank representation. This is a critical step towards evaluating the memberships between data points. Experimental results on benchmark databases demonstrate the effectiveness and robustness of LRRSC compared with several state-of-the-art subspace clustering algorithms.",
"title": ""
},
{
"docid": "neg:1840397_4",
"text": "A multimodal network encodes relationships between the same set of nodes in multiple settings, and network alignment is a powerful tool for transferring information and insight between a pair of networks. We propose a method for multimodal network alignment that computes a matrix which indicates the alignment, but produces the result as a low-rank factorization directly. We then propose new methods to compute approximate maximum weight matchings of low-rank matrices to produce an alignment. We evaluate our approach by applying it on synthetic networks and use it to de-anonymize a multimodal transportation network.",
"title": ""
},
{
"docid": "neg:1840397_5",
"text": "In metazoans, gamma-tubulin acts within two main complexes, gamma-tubulin small complexes (gamma-TuSCs) and gamma-tubulin ring complexes (gamma-TuRCs). In higher eukaryotes, it is assumed that microtubule nucleation at the centrosome depends on gamma-TuRCs, but the role of gamma-TuRC components remains undefined. For the first time, we analyzed the function of all four gamma-TuRC-specific subunits in Drosophila melanogaster: Dgrip75, Dgrip128, Dgrip163, and Dgp71WD. Grip-motif proteins, but not Dgp71WD, appear to be required for gamma-TuRC assembly. Individual depletion of gamma-TuRC components, in cultured cells and in vivo, induces mitotic delay and abnormal spindles. Surprisingly, gamma-TuSCs are recruited to the centrosomes. These defects are less severe than those resulting from the inhibition of gamma-TuSC components and do not appear critical for viability. Simultaneous cosilencing of all gamma-TuRC proteins leads to stronger phenotypes and partial recruitment of gamma-TuSC. In conclusion, gamma-TuRCs are required for assembly of fully functional spindles, but we suggest that gamma-TuSC could be targeted to the centrosomes, which is where basic microtubule assembly activities are maintained.",
"title": ""
},
{
"docid": "neg:1840397_6",
"text": "The support of medical decisions comes from several sources. These include individual physician experience, pathophysiological constructs, pivotal clinical trials, qualitative reviews of the literature, and, increasingly, meta-analyses. Historically, the first of these four sources of knowledge largely informed medical and dental decision makers. Meta-analysis came on the scene around the 1970s and has received much attention. What is meta-analysis? It is the process of combining the quantitative results of separate (but similar) studies by means of formal statistical methods. Statistically, the purpose is to increase the precision with which the treatment effect of an intervention can be estimated. Stated in another way, one can say that meta-analysis combines the results of several studies with the purpose of addressing a set of related research hypotheses. The underlying studies can come in the form of published literature, raw data from individual clinical studies, or summary statistics in reports or abstracts. More broadly, a meta-analysis arises from a systematic review. There are three major components to a systematic review and meta-analysis. The systematic review starts with the formulation of the research question and hypotheses. Clinical or substantive insight about the particular domain of research often identifies not only the unmet investigative needs, but helps prepare for the systematic review by defining the necessary initial parameters. These include the hypotheses, endpoints, important covariates, and exposures or treatments of interest. Like any basic or clinical research endeavor, a prospectively defined and clear study plan enhances the expected utility and applicability of the final results for ultimately influencing practice or policy. After this foundational preparation, the second component, a systematic review, commences. The systematic review proceeds with an explicit and reproducible protocol to locate and evaluate the available data. The collection, abstraction, and compilation of the data follow a more rigorous and prospectively defined objective process. The definitions, structure, and methodologies of the underlying studies must be critically appraised. Hence, both “the content” and “the infrastructure” of the underlying data are analyzed, evaluated, and systematically recorded. Unlike an informal review of the literature, this systematic disciplined approach is intended to reduce the potential for subjectivity or bias in the subsequent findings. Typically, a literature search of an online database is the starting point for gathering the data. The most common sources are MEDLINE (United States Library of Overview, Strengths, and Limitations of Systematic Reviews and Meta-Analyses",
"title": ""
},
{
"docid": "neg:1840397_7",
"text": "BACKGROUND\nGuava leaf tea (GLT), exhibiting a diversity of medicinal bioactivities, has become a popularly consumed daily beverage. To improve the product quality, a new process was recommended to the Ser-Tou Farmers' Association (SFA), who began field production in 2005. The new process comprised simplified steps: one bud-two leaves were plucked at 3:00-6:00 am, in the early dawn period, followed by withering at ambient temperature (25-28 °C), rolling at 50 °C for 50-70 min, with or without fermentation, then drying at 45-50 °C for 70-90 min, and finally sorted.\n\n\nRESULTS\nThe product manufactured by this new process (named herein GLTSF) exhibited higher contents (in mg g(-1), based on dry ethyl acetate fraction/methanolic extract) of polyphenolics (417.9 ± 12.3) and flavonoids (452.5 ± 32.3) containing a compositional profile much simpler than previously found: total quercetins (190.3 ± 9.1), total myricetin (3.3 ± 0.9), total catechins (36.4 ± 5.3), gallic acid (8.8 ± 0.6), ellagic acid (39.1 ± 6.4) and tannins (2.5 ± 9.1).\n\n\nCONCLUSION\nWe have successfully developed a new process for manufacturing GLTSF with a unique polyphenolic profile. Such characteristic compositional distribution can be ascribed to the right harvesting hour in the early dawn and appropriate treatment process at low temperature, avoiding direct sunlight.",
"title": ""
},
{
"docid": "neg:1840397_8",
"text": "Leader election protocols are a fundamental building block for replicated distributed services. They ease the design of leader-based coordination protocols that tolerate failures. In partially synchronous systems, designing a leader election algorithm, that does not permit multiple leaders while the system is unstable, is a complex task. As a result many production systems use third-party distributed coordination services, such as ZooKeeper and Chubby, to provide a reliable leader election service. However, adding a third-party service such as ZooKeeper to a distributed system incurs additional operational costs and complexity. ZooKeeper instances must be kept running on at least three machines to ensure its high availability. In this paper, we present a novel leader election protocol using NewSQL databases for partially synchronous systems, that ensures at most one leader at any given time. The leader election protocol uses the database as distributed shared memory. Our work enables distributed systems that already use NewSQL databases to save the operational overhead of managing an additional third-party service for leader election. Our main contribution is the design, implementation and validation of a practical leader election algorithm, based on NewSQL databases, that has performance comparable to a leader election implementation using a state-of-the-art distributed coordination service, ZooKeeper.",
"title": ""
},
{
"docid": "neg:1840397_9",
"text": "Languages with rich type systems are beginning to employ a blend of type inference and type checking, so that the type inference engine is guided by programmer-supplied type annotations. In this paper we show, for the first time, how to combine the virtues of two well-established ideas: unification-based inference, and bidi-rectional propagation of type annotations. The result is a type system that conservatively extends Hindley-Milner, and yet supports both higher-rank types and impredicativity.",
"title": ""
},
{
"docid": "neg:1840397_10",
"text": "The increasing popularity of wearable devices that continuously capture video, and the prevalence of third-party applications that utilize these feeds have resulted in a new threat to privacy. In many situations, sensitive objects/regions are maliciously (or accidentally) captured in a video frame by third-party applications. However, current solutions do not allow users to specify and enforce fine grained access control over video feeds.\n In this paper, we describe MarkIt, a computer vision based privacy marker framework, that allows users to specify and enforce fine grained access control over video feeds. We present two example privacy marker systems -- PrivateEye and WaveOff. We conclude with a discussion of the computer vision, privacy and systems challenges in building a comprehensive system for fine grained access control over video feeds.",
"title": ""
},
{
"docid": "neg:1840397_11",
"text": "This paper presents two types of dual band (2.4 and 5.8 GHz) wearable planar dipole antennas, one printed on a conventional substrate and the other on a two-dimensional metamaterial surface (Electromagnetic Bandgap (EBG) structure). The operation of both antennas is investigated and compared under different bending conditions (in E and H-planes) around human arm and leg of different radii. A dual band, Electromagnetic Band Gap (EBG) structure on a wearable substrate is used as a high impedance surface to control the Specific Absorption Rate (SAR) as well as to improve the antenna gain up to 4.45 dBi. The EBG inspired antenna has reduced the SAR effects on human body to a safe level (< 2W/Kg). I.e. the SAR is reduced by 83.3% for lower band and 92.8% for higher band as compared to the conventional antenna. The proposed antenna can be used for wearable applications with least health hazard to human body in Industrial, Scientific and Medical (ISM) band (2.4 GHz, 5.2 GHz) applications. The antennas on human body are simulated and analyzed in CST Microwave Studio (CST MWS).",
"title": ""
},
{
"docid": "neg:1840397_12",
"text": "Linguistic creativity is a marriage of form and content in which each works together to convey our meanings with concision, resonance and wit. Though form clearly influences and shapes our content, the most deft formal trickery cannot compensate for a lack of real insight. Before computers can be truly creative with language, we must first imbue them with the ability to formulate meanings that are worthy of creative expression. This is especially true of computer-generated poetry. If readers are to recognize a poetic turn-of-phrase as more than a superficial manipulation of words, they must perceive and connect with the meanings and the intent behind the words. So it is not enough for a computer to merely generate poem-shaped texts; poems must be driven by conceits that build an affective worldview. This paper describes a conceit-driven approach to computational poetry, in which metaphors and blends are generated for a given topic and affective slant. Subtle inferences drawn from these metaphors and blends can then drive the process of poetry generation. In the same vein, we consider the problem of generating witty insights from the banal truisms of common-sense knowledge bases. Ode to a Keatsian Turn Poetic licence is much more than a licence to frill. Indeed, it is not so much a licence as a contract, one that allows a speaker to subvert the norms of both language and nature in exchange for communicating real insights about some relevant state of affairs. Of course, poetry has norms and conventions of its own, and these lend poems a range of recognizably “poetic” formal characteristics. When used effectively, formal devices such as alliteration, rhyme and cadence can mold our meanings into resonant and incisive forms. However, even the most poetic devices are just empty frills when used only to disguise the absence of real insight. Computer models of poem generation must model more than the frills of poetry, and must instead make these formal devices serve the larger goal of meaning creation. Nonetheless, is often said that we “eat with our eyes”, so that the stylish presentation of food can subtly influence our sense of taste. So it is with poetry: a pleasing form can do more than enhance our recall and comprehension of a meaning – it can also suggest a lasting and profound truth. Experiments by McGlone & Tofighbakhsh (1999, 2000) lend empirical support to this so-called Keats heuristic, the intuitive belief – named for Keats’ memorable line “Beauty is truth, truth beauty” – that a meaning which is rendered in an aesthetically-pleasing form is much more likely to be perceived as truthful than if it is rendered in a less poetic form. McGlone & Tofighbakhsh demonstrated this effect by searching a book of proverbs for uncommon aphorisms with internal rhyme – such as “woes unite foes” – and by using synonym substitution to generate non-rhyming (and thus less poetic) variants such as “troubles unite enemies”. While no significant differences were observed in subjects’ ease of comprehension for rhyming/non-rhyming forms, subjects did show a marked tendency to view the rhyming variants as more truthful expressions of the human condition than the corresponding non-rhyming forms. So a well-polished poetic form can lend even a modestly interesting observation the lustre of a profound insight. An automated approach to poetry generation can exploit this symbiosis of form and content in a number of useful ways. 
It might harvest interesting perspectives on a given topic from a text corpus, or it might search its stores of commonsense knowledge for modest insights to render in immodest poetic forms. We describe here a system that combines both of these approaches for meaningful poetry generation. As shown in the sections to follow, this system – named Stereotrope – uses corpus analysis to generate affective metaphors for a topic on which it is asked to wax poetic. Stereotrope can be asked to view a topic from a particular affective stance (e.g., view love negatively) or to elaborate on a familiar metaphor (e.g. love is a prison). In doing so, Stereotrope takes account of the feelings that different metaphors are likely to engender in an audience. These metaphors are further integrated to yield tight conceptual blends, which may in turn highlight emergent nuances of a viewpoint that are worthy of poetic expression (see Lakoff and Turner, 1989). Stereotrope uses a knowledge-base of conceptual norms to anchor its understanding of these metaphors and blends. While these norms are the stuff of banal clichés and stereotypes, such as that dogs chase cats and cops eat donuts. we also show how Stereotrope finds and exploits corpus evidence to recast these banalities as witty, incisive and poetic insights. Mutual Knowledge: Norms and Stereotypes Samuel Johnson opined that “Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information upon it.” Traditional approaches to the modelling of metaphor and other figurative devices have typically sought to imbue computers with the former (Fass, 1997). More recently, however, the latter kind has gained traction, with the use of the Web and text corpora to source large amounts of shallow knowledge as it is needed (e.g., Veale & Hao 2007a,b; Shutova 2010; Veale & Li, 2011). But the kind of knowledge demanded by knowledgehungry phenomena such as metaphor and blending is very different to the specialist “book” knowledge so beloved of Johnson. These demand knowledge of the quotidian world that we all tacitly share but rarely articulate in words, not even in the thoughtful definitions of Johnson’s dictionary. Similes open a rare window onto our shared expectations of the world. Thus, the as-as-similes “as hot as an oven”, “as dry as sand” and “as tough as leather” illuminate the expected properties of these objects, while the like-similes “crying like a baby”, “singing like an angel” and “swearing like a sailor” reflect intuitons of how these familiar entities are tacitly expected to behave. Veale & Hao (2007a,b) thus harvest large numbers of as-as-similes from the Web to build a rich stereotypical model of familiar ideas and their salient properties, while Özbal & Stock (2012) apply a similar approach on a smaller scale using Google’s query completion service. Fishelov (1992) argues convincingly that poetic and non-poetic similes are crafted from the same words and ideas. Poetic conceits use familiar ideas in non-obvious combinations, often with the aim of creating semantic tension. The simile-based model used here thus harvests almost 10,000 familiar stereotypes (drawing on a range of ~8,000 features) from both as-as and like-similes. Poems construct affective conceits, but as shown in Veale (2012b), the features of a stereotype can be affectively partitioned as needed into distinct pleasant and unpleasant perspectives. 
We are thus confident that a stereotype-based model of common-sense knowledge is equal to the task of generating and elaborating affective conceits for a poem. A stereotype-based model of common-sense knowledge requires both features and relations, with the latter showing how stereotypes relate to each other. It is not enough then to know that cops are tough and gritty, or that donuts are sweet and soft; our stereotypes of each should include the cliché that cops eat donuts, just as dogs chew bones and cats cough up furballs. Following Veale & Li (2011), we acquire inter-stereotype relationships from the Web, not by mining similes but by mining questions. As in Özbal & Stock (2012), we target query completions from a popular search service (Google), which offers a smaller, public proxy for a larger, zealously-guarded search query log. We harvest questions of the form “Why do Xs <relation> Ys”, and assume that since each relationship is presupposed by the question (so “why do bikers wear leathers” presupposes that everyone knows that bikers wear leathers), the triple of subject/relation/object captures a widely-held norm. In this way we harvest over 40,000 such norms from the Web. Generating Metaphors, N-Gram Style! The Google n-grams (Brants & Franz, 2006) is a rich source of popular metaphors of the form Target is Source, such as “politicians are crooks”, “Apple is a cult”, “racism is a disease” and “Steve Jobs is a god”. Let src(T) denote the set of stereotypes that are commonly used to describe a topic T, where commonality is defined as the presence of the corresponding metaphor in the Google n-grams. To find metaphors for proper-named entities, we also analyse n-grams of the form stereotype First [Middle] Last, such as “tyrant Adolf Hitler” and “boss Bill Gates”. Thus, e.g.: src(racism) = {problem, disease, joke, sin, poison, crime, ideology, weapon} src(Hitler) = {monster, criminal, tyrant, idiot, madman, vegetarian, racist, ...} Let typical(T) denote the set of properties and behaviors harvested for T from Web similes (see previous section), and let srcTypical(T) denote the aggregate set of properties and behaviors ascribable to T via the metaphors in src(T): (1) srcTypical(T) = ∪_{M∈src(T)} typical(M) We can generate conceits for a topic T by considering not just obvious metaphors for T, but metaphors of metaphors: (2) conceits(T) = src(T) ∪ (∪_{M∈src(T)} src(M)) The features evoked by the conceit T as M are given by: (3) salient(T,M) = [srcTypical(T) ∪ typical(T)]",
"title": ""
},
{
"docid": "neg:1840397_13",
"text": "In this paper, the dynamic modeling of a doubly-fed induction generator-based wind turbine connected to infinite bus (SMIB) system, is carried out in detail. In most of the analysis, the DFIG stator transients and network transients are neglected. In this paper the interfacing problems while considering stator transients and network transients in the modeling of SMIB system are resolved by connecting a resistor across the DFIG terminals. The effect of simplification of shaft system on the controller gains is also discussed. In addition, case studies are presented to demonstrate the effect of mechanical parameters and controller gains on system stability when accounting the two-mass shaft model for the drive train system.",
"title": ""
},
{
"docid": "neg:1840397_14",
"text": "The study presented in this paper examines the fit of total quality management (TQM) practices in mediating the relationship between organization strategy and organization performance. By examining TQM in relation to organization strategy, the study seeks to advance the understanding of TQM in a broader context. It also resolves some controversies that appear in the literature concerning the relationship between TQM and differentiation and cost leadership strategies as well as quality and innovation performance. The empirical data for this study was drawn from a survey of 194 middle/senior managers from Australian firms. The analysis was conducted using structural equation modeling (SEM) technique by examining two competing models that represent full and partial mediation. The findings indicate that TQM is positively and significantly related to differentiation strategy, and it only partially mediates the relationship between differentiation strategy and three performance measures (product quality, product innovation, and process innovation). The implication is that TQM needs to be complemented by other resources to more effectively realize the strategy in achieving a high level of performance, particularly innovation. 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840397_15",
"text": "This paper presents the novel design of a wideband circularly polarized (CP) Radio Frequency Identification (RFID) reader microstrip patch antenna for worldwide Ultra High Frequency (UHF) band which covers 840–960 MHz. The proposed antenna, which consists of a microstrip patch with truncated corners and a cross slot, is placed on a foam substrate (εr = 1.06) above a ground plane and is fed through vias through ground plane holes that extend from the quadrature 3 dB branch line hybrid coupler placed below the ground plane. This helps to separate feed network radiation, from the patch antenna and keeping the CP purity. The prototype antenna was fabricated with a total size of 225 × 250 × 12.8 mm3 which shows a measured impedance matching band of 840–1150MHz (31.2%) as well as measured rotating linear based circularly polarized radiation patterns. The simulated and measured 3 dB Axial Ratio (AR) bandwidth is better than 23% from 840–1050 MHz meeting and exceeding the target worldwide RFID UHF band.",
"title": ""
},
{
"docid": "neg:1840397_16",
"text": "In this paper, mathematical models for permutation flow shop scheduling and job shop scheduling problems are proposed. The first problem is based on a mixed integer programming model. As the problem is NP-complete, this model can only be used for smaller instances where an optimal solution can be computed. For large instances, another model is proposed which is suitable for solving the problem by stochastic heuristic methods. For the job shop scheduling problem, a mathematical model and its main representation schemes are presented. Keywords—Flow shop, job shop, mixed integer model, representation scheme.",
"title": ""
},
{
"docid": "neg:1840397_17",
"text": "WORK-LIFE BALANCE means bringing work, whether done on the job or at home, and leisure time into balance to live life to its fullest. It doesn’t mean that you spend half of your life working and half of it playing; instead, it means balancing the two to achieve harmony in physical, emotional, and spiritual health. In today’s economy, can nurses achieve work-life balance? Although doing so may be difficult, the consequences to our health can be enormous if we don’t try. This article describes some of the stresses faced by nurses and tips for attaining a healthy balance of work and leisure.",
"title": ""
},
{
"docid": "neg:1840397_18",
"text": "A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for marginal log-likelihood fits observed data and latent codes. When learning with the variational bound, one seeks to minimize the symmetric Kullback-Leibler divergence of joint density functions from (i) and (ii), while simultaneously seeking to maximize the two marginal log-likelihoods. To facilitate learning, a new form of adversarial training is developed. An extensive set of experiments is performed, in which we demonstrate state-of-the-art data reconstruction and generation on several image benchmark datasets.",
"title": ""
},
{
"docid": "neg:1840397_19",
"text": "The number and variety of mobile multicast applications are growing at an unprecedented and unanticipated pace. Mobile network providers are in front of a dramatic increase in multicast traffic load, and this growth is forecasted to continue in fifth-generation (5G) networks. The major challenges come from the fact that multicast traffic not only targets groups of end-user devices; it also involves machine-type communications (MTC) for the Internet of Things (IoT). The increase in the MTC load, predicted for 5G, calls into question the effectiveness of the current multimedia broadcast multicast service (MBMS). The aim of this paper is to provide a survey of 5G challenges in the view of effective management of multicast applications, and to identify how to enhance the mobile network architecture to enable multicast applications in future 5G scenarios. By accounting for the presence of both human and machine-related traffic, strengths and weaknesses of the state-of-the-art achievements in multicasting are critically analyzed to provide guidelines for future research on 5G networks and more conscious design choices.",
"title": ""
}
] |
1840398 | Pedestrian Detection: An Evaluation of the State of the Art | [
{
"docid": "pos:1840398_0",
"text": "We present an approach for learning to detect objects in still gray images, that is based on a sparse, part-based representation of objects. A vocabulary of information-rich object parts is automatically constructed from a set of sample images of the object class of interest. Images are then represented using parts from this vocabulary, along with spatial relations observed among them. Based on this representation, a feature-efficient learning algorithm is used to learn to detect instances of the object class. The framework developed can be applied to any object with distinguishable parts in a relatively fixed spatial configuration. We report experiments on images of side views of cars. Our experiments show that the method achieves high detection accuracy on a difficult test set of real-world images, and is highly robust to partial occlusion and background variation. In addition, we discuss and offer solutions to several methodological issues that are significant for the research community to be able to evaluate object detection",
"title": ""
},
{
"docid": "pos:1840398_1",
"text": "The goal of this work is to accurately detect and localize boundaries in natural scenes using local image measurements. We formulate features that respond to characteristic changes in brightness, color, and texture associated with natural boundaries. In order to combine the information from these features in an optimal way, we train a classifier using human labeled images as ground truth. The output of this classifier provides the posterior probability of a boundary at each image location and orientation. We present precision-recall curves showing that the resulting detector significantly outperforms existing approaches. Our two main results are 1) that cue combination can be performed adequately with a simple linear model and 2) that a proper, explicit treatment of texture is required to detect boundaries in natural images.",
"title": ""
},
{
"docid": "pos:1840398_2",
"text": "Significant research has been devoted to detecting people in images and videos. In this paper we describe a human detection method that augments widely used edge-based features with texture and color information, providing us with a much richer descriptor set. This augmentation results in an extremely high-dimensional feature space (more than 170,000 dimensions). In such high-dimensional spaces, classical machine learning algorithms such as SVMs are nearly intractable with respect to training. Furthermore, the number of training samples is much smaller than the dimensionality of the feature space, by at least an order of magnitude. Finally, the extraction of features from a densely sampled grid structure leads to a high degree of multicollinearity. To circumvent these data characteristics, we employ Partial Least Squares (PLS) analysis, an efficient dimensionality reduction technique, one which preserves significant discriminative information, to project the data onto a much lower dimensional subspace (20 dimensions, reduced from the original 170,000). Our human detection system, employing PLS analysis over the enriched descriptor set, is shown to outperform state-of-the-art techniques on three varied datasets including the popular INRIA pedestrian dataset, the low-resolution gray-scale DaimlerChrysler pedestrian dataset, and the ETHZ pedestrian dataset consisting of full-length videos of crowded scenes.",
"title": ""
},
{
"docid": "pos:1840398_3",
"text": "Both detection and tracking people are challenging problems, especially in complex real world scenes that commonly involve multiple people, complicated occlusions, and cluttered or even moving backgrounds. People detectors have been shown to be able to locate pedestrians even in complex street scenes, but false positives have remained frequent. The identification of particular individuals has remained challenging as well. Tracking methods are able to find a particular individual in image sequences, but are severely challenged by real-world scenarios such as crowded street scenes. In this paper, we combine the advantages of both detection and tracking in a single framework. The approximate articulation of each person is detected in every frame based on local features that model the appearance of individual body parts. Prior knowledge on possible articulations and temporal coherency within a walking cycle are modeled using a hierarchical Gaussian process latent variable model (hGPLVM). We show how the combination of these results improves hypotheses for position and articulation of each person in several subsequent frames. We present experimental results that demonstrate how this allows to detect and track multiple people in cluttered scenes with reoccurring occlusions.",
"title": ""
}
] | [
{
"docid": "neg:1840398_0",
"text": "Is Facebook becoming a place where people mistakenly think they can literally get away with murder? In a 2011 Facebook murder-for-hire case in Philadelphia, PA, a 19-yearold mother offered $1,000 on Facebook to kill her 22-year-old boyfriend, the father of her 2-year-old daughter. The boyfriend was killed while the only two suspects responding to the mother’s post were in custody, so there is speculation that the murder was drug related. The mother pleaded guilty to conspiracy to commit murder, and was immediately paroled on a 3to 23-month sentence. Other ‘‘Facebook murder’’ perpetrators are being brought to justice, one way or another:",
"title": ""
},
{
"docid": "neg:1840398_1",
"text": " Random walks on an association graph using candidate matches as nodes. Rank candidate matches by stationary distribution Personalized jump for enforcing the matching constraints during the random walks process Matching constraints satisfying reweighting vector is calculated iteratively by inflation and bistochastic normalization Due to object motion or viewpoint change, relationships between two nodes are not exactly same Outlier Noise Deformation Noise",
"title": ""
},
{
"docid": "neg:1840398_2",
"text": "Knowledge graph (KG) is known to be helpful for the task of question answering (QA), since it provides well-structured relational information between entities, and allows one to further infer indirect facts. However, it is challenging to build QA systems which can learn to reason over knowledge graphs based on question-answer pairs alone. First, when people ask questions, their expressions are noisy (for example, typos in texts, or variations in pronunciations), which is non-trivial for the QA system to match those mentioned entities to the knowledge graph. Second, many questions require multi-hop logic reasoning over the knowledge graph to retrieve the answers. To address these challenges, we propose a novel and unified deep learning architecture, and an end-to-end variational learning algorithm which can handle noise in questions, and learn multi-hop reasoning simultaneously. Our method achieves state-of-the-art performance on a recent benchmark dataset in the literature. We also derive a series of new benchmark datasets, including questions for multi-hop reasoning, questions paraphrased by neural translation model, and questions in human voice. Our method yields very promising results on all these challenging datasets.",
"title": ""
},
{
"docid": "neg:1840398_3",
"text": "OBJECTIVES\nUnilateral strength training produces an increase in strength of the contralateral homologous muscle group. This process of strength transfer, known as cross education, is generally attributed to neural adaptations. It has been suggested that unilateral strength training of the free limb may assist in maintaining the functional capacity of an immobilised limb via cross education of strength, potentially enhancing recovery outcomes following injury. Therefore, the purpose of this review is to examine the impact of immobilisation, the mechanisms that may contribute to cross education, and possible implications for the application of unilateral training to maintain strength during immobilisation.\n\n\nDESIGN\nCritical review of literature.\n\n\nMETHODS\nSearch of online databases.\n\n\nRESULTS\nImmobilisation is well known for its detrimental effects on muscular function. Early reductions in strength outweigh atrophy, suggesting a neural contribution to strength loss, however direct evidence for the role of the central nervous system in this process is limited. Similarly, the precise neural mechanisms responsible for cross education strength transfer remain somewhat unknown. Two recent studies demonstrated that unilateral training of the free limb successfully maintained strength in the contralateral immobilised limb, although the role of the nervous system in this process was not quantified.\n\n\nCONCLUSIONS\nCross education provides a unique opportunity for enhancing rehabilitation following injury. By gaining an understanding of the neural adaptations occurring during immobilisation and cross education, future research can utilise the application of unilateral training in clinical musculoskeletal injury rehabilitation.",
"title": ""
},
{
"docid": "neg:1840398_4",
"text": "A risk-metric framework that supports Enterprise Risk Management is described. At the heart of the framework is the notion of a risk profile that provides risk measurement for risk elements. By providing a generic template in which metrics can be codified in terms of metric space operators, risk profiles can be used to construct a variety of risk measures for different business contexts. These measures can vary from conventional economic risk calculations to the kinds of metrics that are used by decision support systems, such as those supporting inexact reasoning and which are considered to closely match how humans combine information.",
"title": ""
},
{
"docid": "neg:1840398_5",
"text": "The symmetric travelling salesman problem is a real world combinatorial optimization problem and a well researched domain. When solving combinatorial optimization problems such as the travelling salesman problem a low-level construction heuristic is usually used to create an initial solution, rather than randomly creating a solution, which is further optimized using techniques such as tabu search, simulated annealing and genetic algorithms, amongst others. These heuristics are usually manually derived by humans and this is a time consuming process requiring many man hours. The research presented in this paper forms part of a larger initiative aimed at automating the process of deriving construction heuristics for combinatorial optimization problems.\n The study investigates genetic programming to induce low-level construction heuristics for the symmetric travelling salesman problem. While this has been examined for other combinatorial optimization problems, to the authors' knowledge this is the first attempt at evolving low-level construction heuristics for the travelling salesman problem. In this study a generational genetic programming algorithm randomly creates an initial population of low-level construction heuristics which is iteratively refined over a set number of generations by the processes of fitness evaluation, selection of parents and application of genetic operators.\n The approach is tested on 23 problem instances, of varying problem characteristics, from the TSPLIB and VLSI benchmark sets. The evolved heuristics were found to perform better than the human derived heuristic, namely, the nearest neighbourhood heuristic, generally used to create initial solutions for the travelling salesman problem.",
"title": ""
},
{
"docid": "neg:1840398_6",
"text": "greatest cause of mortality from cardiovascular disease, after myocardial infarction and cerebrovascular stroke. From hospital epidemiological data it has been calculated that the incidence of PE in the USA is 1 per 1,000 annually. The real number is likely to be larger, since the condition goes unrecognised in many patients. Mortality due to PE has been estimated to exceed 15% in the first three months after diagnosis. PE is a dramatic and life-threatening complication of deep venous thrombosis (DVT). For this reason, the prevention, diagnosis and treatment of DVT is of special importance, since symptomatic PE occurs in 30% of those affected. If asymptomatic episodes are also included, it is estimated that 50-60% of DVT patients develop PE. DVT and PE are manifestations of the same entity, namely thromboembolic disease. If we extrapolate the epidemiological data from the USA to Greece, which has a population of about ten million, 20,000 new cases of thromboembolic disease may be expected annually. Of these patients, PE will occur in 10,000, of which 6,000 will have symptoms and 900 will die during the first trimester.",
"title": ""
},
{
"docid": "neg:1840398_7",
"text": "Ahstract- For many people suffering from motor disabilities, assistive devices controlled with only brain activity are the only way to interact with their environment [1]. Natural tasks often require different kinds of interactions, involving different controllers the user should be able to select in a self-paced way. We developed a Brain-Computer Interface (BCI) allowing users to switch between four control modes in a self-paced way in real-time. Since the system is devised to be used in domestic environments in a user-friendly way, we selected non-invasive electroencephalographic (EEG) signals and convolutional neural networks (CNNs), known for their ability to find the optimal features in classification tasks. We tested our system using the Cybathlon BCI computer game, which embodies all the challenges inherent to real-time control. Our preliminary results show that an efficient architecture (SmallNet), with only one convolutional layer, can classify 4 mental activities chosen by the user. The BCI system is run and validated online. It is kept up-to-date through the use of newly collected signals along playing, reaching an online accuracy of 47.6% where most approaches only report results obtained offline. We found that models trained with data collected online better predicted the behaviour of the system in real-time. This suggests that similar (CNN based) offline classifying methods found in the literature might experience a drop in performance when applied online. Compared to our previous decoder of physiological signals relying on blinks, we increased by a factor 2 the amount of states among which the user can transit, bringing the opportunity for finer control of specific subtasks composing natural grasping in a self-paced way. Our results are comparable to those showed at the Cybathlon's BCI Race but further improvements on accuracy are required.",
"title": ""
},
{
"docid": "neg:1840398_8",
"text": "We aim to detect complex events in long Internet videos that may last for hours. A major challenge in this setting is that only a few shots in a long video are relevant to the event of interest while others are irrelevant or even misleading. Instead of indifferently pooling the shots, we first define a novel notion of semantic saliency that assesses the relevance of each shot with the event of interest. We then prioritize the shots according to their saliency scores since shots that are semantically more salient are expected to contribute more to the final event detector. Next, we propose a new isotonic regularizer that is able to exploit the semantic ordering information. The resulting nearly-isotonic SVM classifier exhibits higher discriminative power. Computationally, we develop an efficient implementation using the proximal gradient algorithm, and we prove new, closed-form proximal steps. We conduct extensive experiments on three real-world video datasets and confirm the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "neg:1840398_9",
"text": "Submitted: 1 December 2015 Accepted: 6 April 2016 doi:10.1111/zsc.12190 Sotka, E.E., Bell, T., Hughes, L.E., Lowry, J.K. & Poore, A.G.B. (2016). A molecular phylogeny of marine amphipods in the herbivorous family Ampithoidae. —Zoologica Scripta, 00, 000–000. Ampithoid amphipods dominate invertebrate assemblages associated with shallow-water macroalgae and seagrasses worldwide and represent the most species-rich family of herbivorous amphipod known. To generate the first molecular phylogeny of this family, we sequenced 35 species from 10 genera at two mitochondrial genes [the cytochrome c oxidase subunit I (COI) and the large subunit of 16 s (LSU)] and two nuclear loci [sodium–potassium ATPase (NAK) and elongation factor 1-alpha (EF1)], for a total of 1453 base pairs. All 10 genera are embedded within an apparently monophyletic Ampithoidae (Amphitholina, Ampithoe, Biancolina, Cymadusa, Exampithoe, Paragrubia, Peramphithoe, Pleonexes, Plumithoe, Pseudoamphithoides and Sunamphitoe). Biancolina was previously placed within its own superfamily in another suborder. Within the family, single-locus trees were generally poor at resolving relationships among genera. Combined-locus trees were better at resolving deeper nodes, but complete resolution will require greater taxon sampling of ampithoids and closely related outgroup species, and more molecular characters. Despite these difficulties, our data generally support the monophyly of Ampithoidae, novel evolutionary relationships among genera, several currently accepted genera that will require revisions via alpha taxonomy and the presence of cryptic species. Corresponding author: Erik Sotka, Department of Biology and the College of Charleston Marine Laboratory, 205 Fort Johnson Road, Charleston, SC 29412, USA. E-mail: SotkaE@cofc.edu Erik E. Sotka, and Tina Bell, Department of Biology and Grice Marine Laboratory, College of Charleston, 205 Fort Johnson Road, Charleston, SC 29412, USA. E-mails: SotkaE@cofc.edu, tinamariebell@gmail.com Lauren E. Hughes, and James K. Lowry, Australian Museum Research Institute, 6 College Street, Sydney, NSW 2010, Australia. E-mails: megaluropus@gmail.com, stephonyx@gmail.com Alistair G. B. Poore, Evolution & Ecology Research Centre, School of Biological, Earth and Environmental Sciences, University of New South Wales, Sydney, NSW 2052, Australia. E-mail: a.poore@unsw.edu.au",
"title": ""
},
{
"docid": "neg:1840398_10",
"text": "INTRODUCTION Oxygen support therapy should be given to the patients with acute hypoxic respiratory insufficiency in order to provide oxygenation of the tissues until the underlying pathology improves. The inspiratory flow rate requirement of patients with respiratory insufficiency varies between 30 and 120 L/min. Low flow and high flow conventional oxygen support systems produce a maximum flow rate of 15 L/min, and FiO2 changes depending on the patient’s peak inspiratory flow rate, respiratory pattern, the mask that is used, or the characteristics of the cannula. The inability to provide adequate airflow leads to discomfort in tachypneic patients. With high-flow nasal oxygen (HFNO) cannulas, warmed and humidified air matching the body temperature can be regulated at flow rates of 5–60 L/min, and oxygen delivery varies between 21% and 100%. When HFNO, first used in infants, was reported to increase the risk of infection, its long-term use was stopped. This problem was later eliminated with the use of sterile water, and its use has become a current issue in critical adult patients as well. Studies show that HFNO treatment improves physiological parameters when compared to conventional oxygen systems. Although there are studies indicating successful applications in different patient groups, there are also studies indicating that it does not create any difference in clinical parameters, but patient comfort is better in HFNO when compared with standard oxygen therapy and noninvasive mechanical ventilation (NIMV) (1-6). In this compilation, the physiological effect mechanisms of HFNO treatment and its use in various clinical situations are discussed in the light of current studies.",
"title": ""
},
{
"docid": "neg:1840398_11",
"text": "Direct current (DC) motors are controlled easily and have very high performance. The speed of the motors could be adjusted within a wide range. Today, classical control techniques (such as Proportional Integral Differential PID) are very commonly used for speed control purposes. However, it is observed that the classical control techniques do not have an adequate performance in the case of nonlinear systems. Thus, instead, a modern technique is preferred: fuzzy logic. In this paper the control system is modelled using MATLAB/Simulink. Using both PID controller and fuzzy logic techniques, the results are compared for different speed values.",
"title": ""
},
{
"docid": "neg:1840398_12",
"text": "Human aesthetic preference in the visual domain is reviewed from definitional, methodological, empirical, and theoretical perspectives. Aesthetic science is distinguished from the perception of art and from philosophical treatments of aesthetics. The strengths and weaknesses of important behavioral techniques are presented and discussed, including two-alternative forced-choice, rank order, subjective rating, production/adjustment, indirect, and other tasks. Major findings are reviewed about preferences for colors (single colors, color combinations, and color harmony), spatial structure (low-level spatial properties, shape properties, and spatial composition within a frame), and individual differences in both color and spatial structure. Major theoretical accounts of aesthetic response are outlined and evaluated, including explanations in terms of mere exposure effects, arousal dynamics, categorical prototypes, ecological factors, perceptual and conceptual fluency, and the interaction of multiple components. The results of the review support the conclusion that aesthetic response can be studied rigorously and meaningfully within the framework of scientific psychology.",
"title": ""
},
{
"docid": "neg:1840398_13",
"text": "Functional genomics studies have led to the discovery of a large amount of non-coding RNAs from the human genome; among them are long non-coding RNAs (lncRNAs). Emerging evidence indicates that lncRNAs could have a critical role in the regulation of cellular processes such as cell growth and apoptosis as well as cancer progression and metastasis. As master gene regulators, lncRNAs are capable of forming lncRNA–protein (ribonucleoprotein) complexes to regulate a large number of genes. For example, lincRNA-RoR suppresses p53 in response to DNA damage through interaction with heterogeneous nuclear ribonucleoprotein I (hnRNP I). The present study demonstrates that hnRNP I can also form a functional ribonucleoprotein complex with lncRNA urothelial carcinoma-associated 1 (UCA1) and increase the UCA1 stability. Of interest, the phosphorylated form of hnRNP I, predominantly in the cytoplasm, is responsible for the interaction with UCA1. Moreover, although hnRNP I enhances the translation of p27 (Kip1) through interaction with the 5′-untranslated region (5′-UTR) of p27 mRNAs, the interaction of UCA1 with hnRNP I suppresses the p27 protein level by competitive inhibition. In support of this finding, UCA1 has an oncogenic role in breast cancer both in vitro and in vivo. Finally, we show a negative correlation between p27 and UCA in the breast tumor cancer tissue microarray. Together, our results suggest an important role of UCA1 in breast cancer.",
"title": ""
},
{
"docid": "neg:1840398_14",
"text": "Vertebral angioma is a common bone tumor. We report a case of L1 vertebral angioma revealed by type A3.2 traumatic pathological fracture of the same vertebra. Management comprised emergency percutaneous osteosynthesis and, after stabilization of the multiple trauma, arterial embolization and percutaneous kyphoplasty.",
"title": ""
},
{
"docid": "neg:1840398_15",
"text": "Sex differences are prominent in mood and anxiety disorders and may provide a window into mechanisms of onset and maintenance of affective disturbances in both men and women. With the plethora of sex differences in brain structure, function, and stress responsivity, as well as differences in exposure to reproductive hormones, social expectations and experiences, the challenge is to understand which sex differences are relevant to affective illness. This review will focus on clinical aspects of sex differences in affective disorders including the emergence of sex differences across developmental stages and the impact of reproductive events. Biological, cultural, and experiential factors that may underlie sex differences in the phenomenology of mood and anxiety disorders are discussed.",
"title": ""
},
{
"docid": "neg:1840398_16",
"text": "In an end-to-end dialog system, the aim of dialog state tracking is to accurately estimate a compact representation of the current dialog status from a sequence of noisy observations produced by the speech recognition and the natural language understanding modules. A state tracking module is primarily meant to act as support for a dialog policy but it can also be used as support for dialog corpus summarization and other kinds of information extraction from transcription of dialogs. From a probabilistic view, this is achieved by maintaining a posterior distribution over hidden dialog states composed, in the simplest case, of a set of context dependent variables. Once a dialog policy is defined, deterministic or learnt, it is in charge of selecting an optimal dialog act given the estimated dialog state and a defined reward function. This paper introduces a novel method of dialog state tracking based on the general paradigm of machine reading and proposes to solve it using a memory-enhanced neural network architecture. We evaluate the proposed approach on the second Dialog State Tracking Challenge (DSTC-2) dataset that has been converted for the occasion in order to fit the relaxed assumption of a machine reading formulation where the true state is only provided at the very end of each dialog instead of providing the state updates at the utterance level. We show that the proposed tracker gives encouraging results. Finally, we propose to extend the DSTC-2 dataset with specific reasoning capabilities requirement like counting, list maintenance, yes-no question answering and indefinite knowledge management.",
"title": ""
},
{
"docid": "neg:1840398_17",
"text": "The recently introduced method, which was called ldquostretching,rdquo is extended to timed Petri nets which may have both controllable and uncontrollable transitions. Using this method, a new Petri net, called ldquostretched Petri net,rdquo which has only unit firing durations, is obtained to represent a timed-transition Petri net. Using this net, the state of the original timed Petri net can be represented easily. This representation also makes it easy to design a supervisory controller for a timed Petri net for any purpose. In this paper, supervisory controller design to avoid deadlock is considered in particular. Using this method, a controller is first designed for the stretched Petri net. Then, using this controller, a controller for the original timed Petri net is obtained. Algorithms to construct the reachability sets of the stretched and original timed Petri nets, as well as algorithms to obtain the controller for the original timed Petri net are presented. These algorithms are implemented using Matlab. Examples are also presented to illustrate the introduced approach.",
"title": ""
},
{
"docid": "neg:1840398_18",
"text": "Current Web applications are very complex and high sophisticated software products, whose usability can heavily determine their success or failure. Defining methods for ensuring usability is one of the current goals of the Web Engineering research. Also, much attention on usability is currently paid by Industry, which is recognizing the importance of adopting methods for usability evaluation before and after the application deployment. This chapter introduces principles and evaluation methods to be adopted during the whole application lifecycle for promoting usability. For each evaluation method, the main features, as well as the emerging advantages and drawbacks are illustrated, so as to support the choice of an evaluation plan that best fits the goals to be pursued and the available resources. The design and evaluation of a real application is also described for exemplifying the introduced concepts and methods.",
"title": ""
},
{
"docid": "neg:1840398_19",
"text": "The Factored Language Model (FLM) is a flexible framework for incorporating various information sources, such as morphology and part-of-speech, into language modeling. FLMs have so far been successfully applied to tasks such as speech recognition and machine translation; it has the potential to be used in a wide variety of problems in estimating probability tables from sparse data. This tutorial serves as a comprehensive description of FLMs and related algorithms. We document the FLM functionalities as implemented in the SRI Language Modeling toolkit and provide an introductory walk-through using FLMs on an actual dataset. Our goal is to provide an easy-to-understand tutorial and reference for researchers interested in applying FLMs to their problems. Overview of the Tutorial We first describe the factored language model (Section 1) and generalized backoff (Section 2), two complementary techniques that attempt to improve statistical estimation (i.e., reduce parameter variance) in language models, and that also attempt to better describe the way in which language (and sequences of words) might be produced. Researchers familar with the algorithms behind FLMs may skip to Section 3, which describes the FLM programs and file formats in the publicly-available SRI Language Modeling (SRILM) toolkit.1 Section 4 is a step-by-step walkthrough with several FLM examples on a real language modeling dataset. This may be useful for beginning users of the FLMs. Finally, Section 5 discusses the problem of automatically tuning FLM parameters on real datasets and refers to existing software. This may be of interest to advanced users of FLMs.",
"title": ""
}
] |
1840399 | "How Old Do You Think I Am?" A Study of Language and Age in Twitter | [
{
"docid": "pos:1840399_0",
"text": "We present TweetMotif, an exploratory search application for Twitter. Unlike traditional approaches to information retrieval, which present a simple list of messages, TweetMotif groups messages by frequent significant terms — a result set’s subtopics — which facilitate navigation and drilldown through a faceted search interface. The topic extraction system is based on syntactic filtering, language modeling, near-duplicate detection, and set cover heuristics. We have used TweetMotif to deflate rumors, uncover scams, summarize sentiment, and track political protests in real-time. A demo of TweetMotif, plus its source code, is available at http://tweetmotif.com. Introduction and Description On the microblogging service Twitter, users post millions of very short messages every day. Organizing and searching through this large corpus is an exciting research problem. Since messages are so small, we believe microblog search requires summarization across many messages at once. Our system, TweetMotif, responds to user queries, first retrieving several hundred recent matching messages from a simple index; we use the Twitter Search API. Instead of simply showing this result set as a list, TweetMotif extracts a set of themes (topics) to group and summarize these messages. A topic is simultaneously characterized by (1) a 1to 3-word textual label, and (2) a set of messages, whose texts must all contain the label. TweetMotif’s user interface is inspired by faceted search, which has been shown to aid Web search tasks (Hearst et al. 2002). The main screen is a two-column layout. The left column is a list of themes that are related to the current search term, while the right column presents actual tweets, grouped by theme. As themes are selected on the left column, a sample of tweets for that theme appears at the top of the right column, pushing down (but not removing) tweet results for any previously selected related themes. This allows users to explore and compare multiple related themes at once. The set of topics is chosen to try to satisfy several criteria, which often conflict: Copyright c © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Screenshot of TweetMotif. 1. Frequency contrast: Topic label phrases should be frequent in the query subcorpus, but infrequent among general Twitter messages. This ensures relevance to the query while eliminating overly generic terms. 2. Topic diversity: Topics should be chosen such that their messages and label phrases minimally overlap. Overlapping topics repetitively fill the same information niche; only one should be used. 3. Topic size: A topic that includes too few messages is bad; it is overly specific. 4. Small number of topics: Screen real-estate and concomitant user cognitive load are limited resources. The goal is to provide the user a concise summary of themes and variation in the query subcorpus, then allow the user to navigate to individual topics to see their associated messages, and allow recursive drilldown. The approach is related to document clustering (though a message can belong to multiple topics) and text summarization (topic labels are a high-relevance subset of text across messages). We heuristically proceed through several stages of analysis.",
"title": ""
}
] | [
{
"docid": "neg:1840399_0",
"text": "A promising approach to learn to play board games is to use reinforcement learning algorithms that can learn a game position evaluation function. In this paper we examine and compare three different methods for generating training games: (1) Learning by self-play, (2) Learning by playing against an expert program, and (3) Learning from viewing experts play against themselves. Although the third possibility generates highquality games from the start compared to initial random games generated by self-play, the drawback is that the learning program is never allowed to test moves which it prefers. We compared these three methods using temporal difference methods to learn the game of backgammon. For particular games such as draughts and chess, learning from a large database containing games played by human experts has as a large advantage that during the generation of (useful) training games, no expensive lookahead planning is necessary for move selection. Experimental results in this paper show how useful this method is for learning to play chess and draughts.",
"title": ""
},
{
"docid": "neg:1840399_1",
"text": "Rabbani, A, Kargarfard, M, and Twist, C. Reliability and validity of a submaximal warm-up test for monitoring training status in professional soccer players. J Strength Cond Res 32(2): 326-333, 2018-Two studies were conducted to assess the reliability and validity of a submaximal warm-up test (SWT) in professional soccer players. For the reliability study, 12 male players performed an SWT over 3 trials, with 1 week between trials. For the validity study, 14 players of the same team performed an SWT and a 30-15 intermittent fitness test (30-15IFT) 7 days apart. Week-to-week reliability in selected heart rate (HR) responses (exercise heart rate [HRex], heart rate recovery [HRR] expressed as the number of beats recovered within 1 minute [HRR60s], and HRR expressed as the mean HR during 1 minute [HRpost1]) was determined using the intraclass correlation coefficient (ICC) and typical error of measurement expressed as coefficient of variation (CV). The relationships between HR measures derived from the SWT and the maximal speed reached at the 30-15IFT (VIFT) were used to assess validity. The range for ICC and CV values was 0.83-0.95 and 1.4-7.0% in all HR measures, respectively, with the HRex as the most reliable HR measure of the SWT. Inverse large (r = -0.50 and 90% confidence limits [CLs] [-0.78 to -0.06]) and very large (r = -0.76 and CL, -0.90 to -0.45) relationships were observed between HRex and HRpost1 with VIFT in relative (expressed as the % of maximal HR) measures, respectively. The SWT is a reliable and valid submaximal test to monitor high-intensity intermittent running fitness in professional soccer players. In addition, the test's short duration (5 minutes) and simplicity mean that it can be used regularly to assess training status in high-level soccer players.",
"title": ""
},
{
"docid": "neg:1840399_2",
"text": "The increase in high-precision, high-sample-rate telemetry timeseries poses a problem for existing timeseries databases which can neither cope with the throughput demands of these streams nor provide the necessary primitives for effective analysis of them. We present a novel abstraction for telemetry timeseries data and a data structure for providing this abstraction: a timepartitioning version-annotated copy-on-write tree. An implementation in Go is shown to outperform existing solutions, demonstrating a throughput of 53 million inserted values per second and 119 million queried values per second on a four-node cluster. The system achieves a 2.9x compression ratio and satisfies statistical queries spanning a year of data in under 200ms, as demonstrated on a year-long production deployment storing 2.1 trillion data points. The principles and design of this database are generally applicable to a large variety of timeseries types and represent a significant advance in the development of technology for the Internet of Things.",
"title": ""
},
{
"docid": "neg:1840399_3",
"text": "Mobile computing is a revolutionary technology, born as a result of remarkable advances in computer hardware and wireless communication. Mobile applications have become increasingly popular in recent years. Today, it is not uncommon to see people playing games or reading mails on handphones. With the rapid advances in mobile computing technology, there is an increasing demand for processing realtime transactions in a mobile environment. Hence there is a strong need for efficient transaction management, data access modes and data management, consistency control and other mobile data management issues. This survey paper will cover issues related to concurrency control in mobile database. This paper studies concurrency control problem in mobile database systems, we analyze the features of mobile database and concurrency control techniques. With the increasing number of mobile hosts there are many new solutions and algorithms for concurrency control being proposed and implemented. We wish that our paper has served as a survey of the important solutions in the fields of concurrency control in mobile database. Keywords-component; Distributed Real-time Databases, Mobile Real-time Databases, Concurrency Control, Data Similarity, and Transaction Scheduling.",
"title": ""
},
{
"docid": "neg:1840399_4",
"text": "We study the Shannon capacity of adaptive transmission techniques in conjunction with diversity combining. This capacity provides an upper bound on spectral efficiency using these techniques. We obtain closed-form solutions for the Rayleigh fading channel capacity under three adaptive policies: optimal power and rate adaptation, constant power with optimal rate adaptation, and channel inversion with fixed rate. Optimal power and rate adaptation yields a small increase in capacity over just rate adaptation, and this increase diminishes as the average received carrier-to-noise ratio (CNR) or the number of diversity branches increases. Channel inversion suffers the largest capacity penalty relative to the optimal technique, however, the penalty diminishes with increased diversity. Although diversity yields large capacity gains for all the techniques, the gain is most pronounced with channel inversion. For example, the capacity using channel inversion with two-branch diversity exceeds that of a single-branch system using optimal rate and power adaptation. Since channel inversion is the least complex scheme to implement, there is a tradeoff between complexity and capacity for the various adaptation methods and diversity-combining techniques.",
"title": ""
},
{
"docid": "neg:1840399_5",
"text": "This chapter provides information on commonly used equipment in industrial mammalian cell culture, with an emphasis on bioreactors. The actual equipment used in the cell culture process can vary from one company to another, but the main steps remain the same. The process involves expansion of cells in seed train and inoculation train processes followed by cultivation of cells in a production bioreactor. Process and equipment options for each stage of the cell culture process are introduced and examples are provided. Finally, the use of disposables during seed train and cell culture production is discussed.",
"title": ""
},
{
"docid": "neg:1840399_6",
"text": "The human hand is a masterpiece of mechanical complexity, able to perform fine motor manipulations and powerful work alike. Designing an animatable human hand model that features the abilities of the archetype created by Nature requires a great deal of anatomical detail to be modeled. In this paper, we present a human hand model with underlying anatomical structure. Animation of the hand model is controlled by muscle contraction values. We employ a physically based hybrid muscle model to convert these contraction values into movement of skin and bones. Pseudo muscles directly control the rotation of bones based on anatomical data and mechanical laws, while geometric muscles deform the skin tissue using a mass-spring system. Thus, resulting animations automatically exhibit anatomically and physically correct finger movements and skin deformations. In addition, we present a deformation technique to create individual hand models from photographs. A radial basis warping function is set up from the correspondence of feature points and applied to the complete structure of the reference hand model, making the deformed hand model instantly animatable.",
"title": ""
},
{
"docid": "neg:1840399_7",
"text": "With the fast development pace of deep submicron technology, the size and density of semiconductor memory grows rapidly. However, keeping a high level of yield and reliability for memory products is more and more difficult. Both the redundancy repair and ECC techniques have been widely used for enhancing the yield and reliability of memory chips. Specifically, the redundancy repair and ECC techniques are conventionally used to repair or correct the hard faults and soft errors, respectively. In this paper, we propose an integrated ECC and redundancy repair scheme for memory reliability enhancement. Our approach can identify the hard faults and soft errors during the memory normal operation mode, and repair the hard faults during the memory idle time as long as there are unused redundant elements. We also develop a method for evaluating the memory reliability. Experimental results show that the proposed approach is effective, e.g., the MTTF of a 32K /spl times/ 64 memory is improved by 1.412 hours (7.1%) with our integrated ECC and repair scheme.",
"title": ""
},
{
"docid": "neg:1840399_8",
"text": "As P4 and its associated compilers move beyond relative immaturity, there is a need for common evaluation criteria. In this paper, we propose Whippersnapper, a set of benchmarks for P4. Rather than simply selecting a set of representative data-plane programs, the benchmark is designed from first principles, identifying and exploring key features and metrics. We believe the benchmark will not only provide a vehicle for comparing implementations and designs, but will also generate discussion within the larger community about the requirements for data-plane languages.",
"title": ""
},
{
"docid": "neg:1840399_9",
"text": "The diffusion decision model allows detailed explanations of behavior in two-choice discrimination tasks. In this article, the model is reviewed to show how it translates behavioral dataaccuracy, mean response times, and response time distributionsinto components of cognitive processing. Three experiments are used to illustrate experimental manipulations of three components: stimulus difficulty affects the quality of information on which a decision is based; instructions emphasizing either speed or accuracy affect the criterial amounts of information that a subject requires before initiating a response; and the relative proportions of the two stimuli affect biases in drift rate and starting point. The experiments also illustrate the strong constraints that ensure the model is empirically testable and potentially falsifiable. The broad range of applications of the model is also reviewed, including research in the domains of aging and neurophysiology.",
"title": ""
},
{
"docid": "neg:1840399_10",
"text": "In this paper, the dynamic modeling of a doubly-fed induction generator-based wind turbine connected to infinite bus (SMIB) system, is carried out in detail. In most of the analysis, the DFIG stator transients and network transients are neglected. In this paper the interfacing problems while considering stator transients and network transients in the modeling of SMIB system are resolved by connecting a resistor across the DFIG terminals. The effect of simplification of shaft system on the controller gains is also discussed. In addition, case studies are presented to demonstrate the effect of mechanical parameters and controller gains on system stability when accounting the two-mass shaft model for the drive train system.",
"title": ""
},
{
"docid": "neg:1840399_11",
"text": "Nowadays people work on computers for hours and hours they don’t have time to take care of themselves. Due to hectic schedules and consumption of junk food it affects the health of people and mainly heart. So to we are implementing an heart disease prediction system using data mining technique Naïve Bayes and k-means clustering algorithm. It is the combination of both the algorithms. This paper gives an overview for the same. It helps in predicting the heart disease using various attributes and it predicts the output as in the prediction form. For grouping of various attributes it uses k-means algorithm and for predicting it uses naïve bayes algorithm. Index Terms —Data mining, Comma separated files, naïve bayes, k-means algorithm, heart disease.",
"title": ""
},
{
"docid": "neg:1840399_12",
"text": "There exists a big demand for innovative secure electronic communications while the expertise level of attackers increases rapidly and that causes even bigger demands and needs for an extreme secure connection. An ideal security protocol should always be protecting the security of connections in many aspects, and leaves no trapdoor for the attackers. Nowadays, one of the popular cryptography protocols is hybrid cryptosystem that uses private and public key cryptography to change secret message. In available cryptography protocol attackers are always aware of transmission of sensitive data. Even non-interested attackers can get interested to break the ciphertext out of curiosity and challenge, when suddenly catches some scrambled data over the network. First of all, we try to explain the roles of innovative approaches in cryptography. After that we discuss about the disadvantages of public key cryptography to exchange secret key. Furthermore, DNA steganography is explained as an innovative paradigm to diminish the usage of public cryptography to exchange session key. In this protocol, session key between a sender and receiver is hidden by novel DNA data hiding technique. Consequently, the attackers are not aware of transmission of session key through unsecure channel. Finally, the strength point of the DNA steganography is discussed.",
"title": ""
},
{
"docid": "neg:1840399_13",
"text": "Boosting takes on various forms with different programs using different loss functions, different base models, and different optimization schemes. The gbm package takes the approach described in [3] and [4]. Some of the terminology differs, mostly due to an effort to cast boosting terms into more standard statistical terminology (e.g. deviance). In addition, the gbm package implements boosting for models commonly used in statistics but not commonly associated with boosting. The Cox proportional hazard model, for example, is an incredibly useful model and the boosting framework applies quite readily with only slight modification [7]. Also some algorithms implemented in the gbm package differ from the standard implementation. The AdaBoost algorithm [2] has a particular loss function and a particular optimization algorithm associated with it. The gbm implementation of AdaBoost adopts AdaBoost’s exponential loss function (its bound on misclassification rate) but uses Friedman’s gradient descent algorithm rather than the original one proposed. So the main purposes of this document is to spell out in detail what the gbm package implements.",
"title": ""
},
{
"docid": "neg:1840399_14",
"text": "In this paper, we use a blog corpus to demonstrate that we can often identify the author of an anonymous text even where there are many thousands of candidate authors. Our approach combines standard information retrieval methods with a text categorization meta-learning scheme that determines when to even venture a guess.",
"title": ""
},
{
"docid": "neg:1840399_15",
"text": "UNLABELLED\nThe evolution in adhesive dentistry has broadened the indication of esthetic restorative procedures especially with the use of resin composite material. Depending on the clinical situation, some restorative techniques are best indicated. As an example, indirect adhesive restorations offer many advantages over direct techniques in extended cavities. In general, the indirect technique requires two appointments and a laboratory involvement, or it can be prepared chairside in a single visit either conventionally or by the use of computer-aided design/computer-aided manufacturing systems. In both cases, there will be an extra cost as well as the need of specific materials. This paper describes the clinical procedures for the chairside semidirect technique for composite onlay fabrication without the use of special equipments. The use of this technique combines the advantages of the direct and the indirect restoration.\n\n\nCLINICAL SIGNIFICANCE\nThe semidirect technique for composite onlays offers the advantages of an indirect restoration and low cost, and can be the ideal treatment option for extended cavities in case of financial limitations.",
"title": ""
},
{
"docid": "neg:1840399_16",
"text": "Canada has been the world’s leader in e-Government maturity for the last five years. The global average for government website usage by citizens is about 30%. In Canada, this statistic is over 51%. The vast majority of Canadians visit government websites to obtain information, rather than interacting or transacting with the government. It seems that the rate of adoption of e-Government has globally fallen below expectations although some countries are doing better than others. Clearly, a better understanding of why and how citizens use government websites, and their general dispositions towards e-Government is an important research issue. This paper initiates discussion of this issue by proposing a conceptual model of e-Government adoption that places users as the focal point for e-Government adoption strategy.",
"title": ""
},
{
"docid": "neg:1840399_17",
"text": "We explore the concept of co-design in the context of neural network verification. Specifically, we aim to train deep neural networks that not only are robust to adversarial perturbations but also whose robustness can be verified more easily. To this end, we identify two properties of network models – weight sparsity and so-called ReLU stability – that turn out to significantly impact the complexity of the corresponding verification task. We demonstrate that improving weight sparsity alone already enables us to turn computationally intractable verification problems into tractable ones. Then, improving ReLU stability leads to an additional 4–13x speedup in verification times. An important feature of our methodology is its “universality,” in the sense that it can be used with a broad range of training procedures and verification approaches.",
"title": ""
},
{
"docid": "neg:1840399_18",
"text": "Recently, IT trends such as big data, cloud computing, internet of things (IoT), 3D visualization, network, and so on demand terabyte/s bandwidth computer performance in a graphics card. In order to meet these performance, terabyte/s bandwidth graphics module using 2.5D-IC with high bandwidth memory (HBM) technology has been emerged. Due to the difference in scale of interconnect pitch between GPU or HBM and package substrate, the HBM interposer is certainly required for terabyte/s bandwidth graphics module. In this paper, the electrical performance of the HBM interposer channel in consideration of the manufacturing capabilities is analyzed by simulation both the frequency- and time-domain. Furthermore, although the silicon substrate is most widely employed for the HBM interposer fabrication, the organic and glass substrate are also proposed to replace the high cost and high loss silicon substrate. Therefore, comparison and analysis of the electrical performance of the HBM interposer channel using silicon, organic, and glass substrate are conducted.",
"title": ""
},
{
"docid": "neg:1840399_19",
"text": "This article presents the main outcome findings from two inter-related randomized trials conducted at four sites to evaluate the effectiveness and cost-effectiveness of five short-term outpatient interventions for adolescents with cannabis use disorders. Trial 1 compared five sessions of Motivational Enhancement Therapy plus Cognitive Behavioral Therapy (MET/CBT) with a 12-session regimen of MET and CBT (MET/CBT12) and another that included family education and therapy components (Family Support Network [FSN]). Trial II compared the five-session MET/CBT with the Adolescent Community Reinforcement Approach (ACRA) and Multidimensional Family Therapy (MDFT). The 600 cannabis users were predominately white males, aged 15-16. All five CYT interventions demonstrated significant pre-post treatment during the 12 months after random assignment to a treatment intervention in the two main outcomes: days of abstinence and the percent of adolescents in recovery (no use or abuse/dependence problems and living in the community). Overall, the clinical outcomes were very similar across sites and conditions; however, after controlling for initial severity, the most cost-effective interventions were MET/CBT5 and MET/CBT12 in Trial 1 and ACRA and MET/CBT5 in Trial 2. It is possible that the similar results occurred because outcomes were driven more by general factors beyond the treatment approaches tested in this study; or because of shared, general helping factors across therapies that help these teens attend to and decrease their connection to cannabis and alcohol.",
"title": ""
}
] |
1840400 | Right Answer for the Wrong Reason: Discovery and Mitigation | [
{
"docid": "pos:1840400_0",
"text": "Natural language sentence matching is a fundamental technology for a variety of tasks. Previous approaches either match sentences from a single direction or only apply single granular (wordby-word or sentence-by-sentence) matching. In this work, we propose a bilateral multi-perspective matching (BiMPM) model. Given two sentences P and Q, our model first encodes them with a BiLSTM encoder. Next, we match the two encoded sentences in two directions P against Q and Q against P . In each matching direction, each time step of one sentence is matched against all timesteps of the other sentence from multiple perspectives. Then, another BiLSTM layer is utilized to aggregate the matching results into a fixed-length matching vector. Finally, based on the matching vector, a decision is made through a fully connected layer. We evaluate our model on three tasks: paraphrase identification, natural language inference and answer sentence selection. Experimental results on standard benchmark datasets show that our model achieves the state-of-the-art performance on all tasks.",
"title": ""
},
{
"docid": "pos:1840400_1",
"text": "This paper presents a summary of the first Workshop on Building Linguistically Generalizable Natural Language Processing Systems, and the associated Build It Break It, The Language Edition shared task. The goal of this workshop was to bring together researchers in NLP and linguistics with a shared task aimed at testing the generalizability of NLP systems beyond the distributions of their training data. We describe the motivation, setup, and participation of the shared task, provide discussion of some highlighted results, and discuss lessons learned.",
"title": ""
},
{
"docid": "pos:1840400_2",
"text": "Character-based neural machine translation (NMT) models alleviate out-ofvocabulary issues, learn morphology, and move us closer to completely end-toend translation systems. Unfortunately, they are also very brittle and easily falter when presented with noisy data. In this paper, we confront NMT models with synthetic and natural sources of noise. We find that state-of-the-art models fail to translate even moderately noisy texts that humans have no trouble comprehending. We explore two approaches to increase model robustness: structure-invariant word representations and robust training on noisy texts. We find that a model based on a character convolutional neural network is able to simultaneously learn representations robust to multiple kinds of noise.",
"title": ""
},
{
"docid": "pos:1840400_3",
"text": "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one.\n In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.",
"title": ""
}
] | [
{
"docid": "neg:1840400_0",
"text": "A Gram-stain-negative, rod-shaped, aerobic, straw yellow, motile strain, designated KNDSW-TSA6T, belonging to the genus Acidovorax, was isolated from a water sample of the river Ganges, downstream of the city of Kanpur, Uttar Pradesh, India. Cells were aerobic, non-endospore-forming and motile with single polar flagella. It differed from its phylogenetically related strains by phenotypic characteristics such as hydrolysis of urea, gelatin, casein and DNA, and the catalase reaction. The major fatty acids were C16 : 1ω7c/C16 : 1ω6c, C16 : 0 and C18 : 1ω7c/C18 : 1ω6c. Phylogenetic analysis based on 16S rRNA and housekeeping genes (gyrb, recA and rpoB gene sequences), confirmed its placement within the genus Acidovorax as a novel species. Strain KNDSW-TSA6T showed highest 16S rRNA sequence similarity to Acidovorax soli BL21T (98.9 %), Acidovorax delafieldii ATCC 17505T (98.8 %), Acidovorax temperans CCUG 11779T (98.2 %), Acidovorax caeni R-24608T (97.9 %) and Acidovorax radicis N35T (97.6 %). The digital DNA-DNA hybridization and average nucleotide identity values calculated from whole genome sequences between strain KNDSW-TSA6T and the two most closely related strains A. soli BL21T and A. delafieldii ATCC 17505T were below the threshold values of 70 and 95 % respectively. Thus, the data from the polyphasic taxonomic analysis clearly indicates that strain KNDSW-TSA6T represents a novel species, for which the name Acidovorax kalamii sp. nov. is proposed. The type strain is Acidovorax kalamii (=MTCC 12652T=KCTC 52819T=VTCC-B-910010T).",
"title": ""
},
{
"docid": "neg:1840400_1",
"text": "This study focuses on the task of multipassage reading comprehension (RC) where an answer is provided in natural language. Current mainstream approaches treat RC by extracting the answer span from the provided passages and cannot generate an abstractive summary from the given question and passages. Moreover, they cannot utilize and control different styles of answers, such as concise phrases and well-formed sentences, within a model. In this study, we propose a style-controllable Multi-source Abstractive Summarization model for QUEstion answering, called Masque. The model is an end-toend deep neural network that can generate answers conditioned on a given style. Experiments with MS MARCO 2.1 show that our model achieved state-of-the-art performance on two tasks with different answer styles.",
"title": ""
},
{
"docid": "neg:1840400_2",
"text": "We recently proposed a structural model for the Si!331\"-!12!1\" surface reconstruction containing silicon pentamers and adatoms as elementary structural building blocks. Using first-principles density functional theory we here investigate the stability of a variety of adatom configurations and determine the lowest-energy configuration. We also present a detailed comparison of the energetics between our model for Si!331\"-!12 !1\" and the adatom-tetramer-interstitial model for Si!110\"-!16!2\", which shares the same structural building blocks.",
"title": ""
},
{
"docid": "neg:1840400_3",
"text": "and is made available as an electronic reprint (preprint) with permission of SPIE. One print or electronic copy may be made for personal use only. Systematic or multiple reproduction, distribution to multiple locations via electronic or other means, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited. ABSTRACT We describe the architecture and design of a through-the-wall radar. The radar is applied for the detection and localization of people hidden behind obstacles. It implements a new adaptive processing technique for people detection, which is introduced in this article. This processing technique is based on exponential averaging with adopted weighting coefficients. Through-the-wall detection and localization of a moving person is demonstrated by a measurement example. The localization relies on the time-of-flight approach.",
"title": ""
},
{
"docid": "neg:1840400_4",
"text": "This paper presents a versatile solution in an effort of improving the accuracy in monitoring the environmental conditions and reducing manpower for industrial households shrimp farming. A ZigBee-based wireless sensor network (WSN) was used to monitor the critical environmental conditions and all the control processes are done with the help of a series of low-power embedded MSP430 microcontrollers from Texas Instruments. This system is capable of collecting, analyzing and presenting data on a Graphical User Interface (GUI), programmed with LabVIEW. It also allows the user to get the updated sensor information online based on Google Spreadsheets application, via Internet connectivity, or at any time through the SMS gateway service and sends alert message promptly enabling user interventions when needed. Thereby the system minimizes the effects of environmental fluctuations caused by sudden changes and reduces the expended labor power of farms. Because of that, the proposed system saves the cost of hiring labor as well as the electricity usage. The design promotes a versatile, low-cost, and commercial version which will function best for small to medium sized farming operations as it does not require any refitting or reconstruction of the pond.",
"title": ""
},
{
"docid": "neg:1840400_5",
"text": "Unattended ground sensors (UGS) are widely used to monitor human activities, such as pedestrian motion and detection of intruders in a secure region. Efficacy of UGS systems is often limited by high false alarm rates, possibly due to inadequacies of the underlying algorithms and limitations of onboard computation. In this regard, this paper presents a wavelet-based method for target detection and classification. The proposed method has been validated on data sets of seismic and passive infrared sensors for target detection and classification, as well as for payload and movement type identification of the targets. The proposed method has the advantages of fast execution time and low memory requirements and is potentially well-suited for real-time implementation with onboard UGS systems.",
"title": ""
},
{
"docid": "neg:1840400_6",
"text": "1Department of Computer Science, Faculty of Science and Technology, Universidade Nova de Lisboa, Lisboa, Portugal 2Center for Biomedical Technology, Universidad Politécnica de Madrid, 28223 Pozuelo de Alarcón, Madrid, Spain 3Data, Networks and Cybersecurity Research Institute, Univ. Rey Juan Carlos, 28028 Madrid, Spain 4Department of Applied Mathematics, Universidad Rey Juan Carlos, 28933 Móstoles, Madrid, Spain 5Center for Computational Simulation, 28223 Pozuelo de Alarcón, Madrid, Spain 6Cyber Security & Digital Trust, BBVA Group, 28050 Madrid, Spain",
"title": ""
},
{
"docid": "neg:1840400_7",
"text": "Previous studies and surgeon interviews have shown that most surgeons prefer quality standard de nition (SD)TV 2D scopes to rst generation 3D endoscopes. The use of a telesurgical system has eased many of the design constraints on traditional endoscopes, enabling the design of a high quality SDTV 3D endoscope and an HDTV endoscopic system with outstanding resolution. The purpose of this study was to examine surgeon performance and preference given the choice between these. The study involved two perceptual tasks and four visual-motor tasks using a telesurgical system using the 2D HDTV endoscope and the SDTV endoscope in both 2D and 3D mode. The use of a telesurgical system enabled recording of all the subjects motions for later analysis. Contrary to experience with early 3D scopes and SDTV 2D scopes, this study showed that despite the superior resolution of the HDTV system surgeons performed better with and preferred the SDTV 3D scope.",
"title": ""
},
{
"docid": "neg:1840400_8",
"text": "We propose a new computational approach for tracking and detecting statistically significant linguistic shifts in the meaning and usage of words. Such linguistic shifts are especially prevalent on the Internet, where the rapid exchange of ideas can quickly change a word's meaning. Our meta-analysis approach constructs property time series of word usage, and then uses statistically sound change point detection algorithms to identify significant linguistic shifts. We consider and analyze three approaches of increasing complexity to generate such linguistic property time series, the culmination of which uses distributional characteristics inferred from word co-occurrences. Using recently proposed deep neural language models, we first train vector representations of words for each time period. Second, we warp the vector spaces into one unified coordinate system. Finally, we construct a distance-based distributional time series for each word to track its linguistic displacement over time.\n We demonstrate that our approach is scalable by tracking linguistic change across years of micro-blogging using Twitter, a decade of product reviews using a corpus of movie reviews from Amazon, and a century of written books using the Google Book Ngrams. Our analysis reveals interesting patterns of language usage change commensurate with each medium.",
"title": ""
},
{
"docid": "neg:1840400_9",
"text": "Social networking sites, especially Facebook, are an integral part of the lifestyle of contemporary youth. The facilities are increasingly being used by older persons as well. Usage is mainly for social purposes, but the groupand discussion facilities of Facebook hold potential for focused academic use. This paper describes and discusses a venture in which postgraduate distancelearning students joined an optional group for the purpose of discussions on academic, contentrelated topics, largely initiated by the students themselves. Learning and insight were enhanced by these discussions and the students, in their environment of distance learning, are benefiting by contact with fellow students.",
"title": ""
},
{
"docid": "neg:1840400_10",
"text": "Recurrent neural networks that are <italic>trained</italic> to behave like deterministic finite-state automata (DFAs) can show deteriorating performance when tested on long strings. This deteriorating performance can be attributed to the instability of the internal representation of the learned DFA states. The use of a sigmoidel discriminant function together with the recurrent structure contribute to this instability. We prove that a simple algorithm can <italic>construct</italic> second-order recurrent neural networks with a sparse interconnection topology and sigmoidal discriminant function such that the internal DFA state representations are stable, that is, the constructed network correctly classifies strings of <italic>arbitrary length</italic>. The algorithm is based on encoding strengths of weights directly into the neural network. We derive a relationship between the weight strength and the number of DFA states for robust string classification. For a DFA with <italic>n</italic> state and <italic>m</italic>input alphabet symbols, the constructive algorithm generates a “programmed” neural network with <italic>O</italic>(<italic>n</italic>) neurons and <italic>O</italic>(<italic>mn</italic>) weights. We compare our algorithm to other methods proposed in the literature.",
"title": ""
},
{
"docid": "neg:1840400_11",
"text": "Robust brain magnetic resonance (MR) segmentation algorithms are critical to analyze tissues and diagnose tumor and edema in a quantitative way. In this study, we present a new tissue segmentation algorithm that segments brain MR images into tumor, edema, white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). The detection of the healthy tissues is performed simultaneously with the diseased tissues because examining the change caused by the spread of tumor and edema on healthy tissues is very important for treatment planning. We used T1, T2, and FLAIR MR images of 20 subjects suffering from glial tumor. We developed an algorithm for stripping the skull before the segmentation process. The segmentation is performed using self-organizing map (SOM) that is trained with unsupervised learning algorithm and fine-tuned with learning vector quantization (LVQ). Unlike other studies, we developed an algorithm for clustering the SOM instead of using an additional network. Input feature vector is constructed with the features obtained from stationary wavelet transform (SWT) coefficients. The results showed that average dice similarity indexes are 91% for WM, 87% for GM, 96% for CSF, 61% for tumor, and 77% for edema.",
"title": ""
},
{
"docid": "neg:1840400_12",
"text": "A main aspect of the Android platform is Inter-Application Communication (IAC), which enables reuse of functionality across apps and app components via message passing. While a powerful feature, IAC also constitutes a serious attack surface. A malicious app can embed a payload into an IAC message, thereby driving the recipient app into a potentially vulnerable behavior if the message is processed without its fields first being sanitized or validated. We present what to our knowledge is the first comprehensive testing algorithm for Android IAC vulnerabilities. Toward this end, we first describe a catalog, stemming from our field experience, of 8 concrete vulnerability types that can potentially arise due to unsafe handling of incoming IAC messages. We then explain the main challenges that automated discovery of Android IAC vulnerabilities entails, including in particular path coverage and custom data fields, and present simple yet surprisingly effective solutions to these challenges. We have realized our testing approach as the IntentDroid system, which is available as a commercial cloud service. IntentDroid utilizes lightweight platform-level instrumentation, implemented via debug breakpoints (to run atop any Android device without any setup or customization), to recover IAC-relevant app-level behaviors. Evaluation of IntentDroid over a set of 80 top-popular apps has revealed a total 150 IAC vulnerabilities — some already fixed by the developers following our report — with a recall rate of 92% w.r.t. a ground truth established via manual auditing by a security expert.",
"title": ""
},
{
"docid": "neg:1840400_13",
"text": "We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensure robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps. First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high quality proposals with sufficiently discriminative power. Finally, we determine the fore-and background classification by solving for the maximum a posteriori of a fully connected conditional random field, defined using our novel energy function. Experimental results on a well established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.",
"title": ""
},
{
"docid": "neg:1840400_14",
"text": "Feature selection is often an essential data processing step prior to applying a learning algorithm The re moval of irrelevant and redundant information often improves the performance of machine learning algo rithms There are two common approaches a wrapper uses the intended learning algorithm itself to evaluate the usefulness of features while a lter evaluates fea tures according to heuristics based on general charac teristics of the data The wrapper approach is generally considered to produce better feature subsets but runs much more slowly than a lter This paper describes a new lter approach to feature selection that uses a correlation based heuristic to evaluate the worth of fea ture subsets When applied as a data preprocessing step for two common machine learning algorithms the new method compares favourably with the wrapper but re quires much less computation",
"title": ""
},
{
"docid": "neg:1840400_15",
"text": "The massive technological advancements around the world have created significant challenging competition among companies where each of the companies tries to attract the customers using different techniques. One of the recent techniques is Augmented Reality (AR). The AR is a new technology which is capable of presenting possibilities that are difficult for other technologies to offer and meet. Nowadays, numerous augmented reality applications have been used in the industry of different kinds and disseminated all over the world. AR will really alter the way individuals view the world. The AR is yet in its initial phases of research and development at different colleges and high-tech institutes. Throughout the last years, AR apps became transportable and generally available on various devices. Besides, AR begins to occupy its place in our audio-visual media and to be used in various fields in our life in tangible and exciting ways such as news, sports and is used in many domains in our life such as electronic commerce, promotion, design, and business. In addition, AR is used to facilitate the learning whereas it enables students to access location-specific information provided through various sources. Such growth and spread of AR applications pushes organizations to compete one another, and every one of them exerts its best to gain the customers. This paper provides a comprehensive study of AR including its history, architecture, applications, current challenges and future trends.",
"title": ""
},
{
"docid": "neg:1840400_16",
"text": "The trend towards shorter delivery lead-times reduces operational efficiency and increases transportation costs for internet retailers. Mobile technology, however, creates new opportunities to organize the last-mile. In this paper, we study the concept of crowdsourced delivery that aims to use excess capacity on journeys that already take place to make deliveries. We consider a peer-to-peer platform that automatically creates matches between parcel delivery tasks and ad-hoc drivers. The platform also operates a fleet of backup vehicles to serve the tasks that cannot be served by the ad-hoc drivers. The matching of tasks, drivers and backup vehicles gives rise to a new variant of the dynamic pick-up and delivery problem. We propose a rolling horizon framework and develop an exact solution approach to solve the various subproblems. In order to investigate the potential benefit of crowdsourced delivery, we conduct a wide range of computational experiments. The experiments provide insights into the viability of crowdsourced delivery under various assumptions about the environment and the behavior of the ad-hoc drivers. The results suggest that the use of ad-hoc drivers has the potential to make the last-mile more cost-efficient and can reduce the system-wide vehicle-miles.",
"title": ""
},
{
"docid": "neg:1840400_17",
"text": "We present a robot localization system using biologically inspired vision. Our system models two extensively studied human visual capabilities: (1) extracting the ldquogistrdquo of a scene to produce a coarse localization hypothesis and (2) refining it by locating salient landmark points in the scene. Gist is computed here as a holistic statistical signature of the image, thereby yielding abstract scene classification and layout. Saliency is computed as a measure of interest at every image location, which efficiently directs the time-consuming landmark-identification process toward the most likely candidate locations in the image. The gist features and salient regions are then further processed using a Monte Carlo localization algorithm to allow the robot to generate its position. We test the system in three different outdoor environments-building complex (38.4 m times 54.86 m area, 13 966 testing images), vegetation-filled park (82.3 m times 109.73 m area, 26 397 testing images), and open-field park (137.16 m times 178.31 m area, 34 711 testing images)-each with its own challenges. The system is able to localize, on average, within 0.98, 2.63, and 3.46 m, respectively, even with multiple kidnapped-robot instances.",
"title": ""
},
{
"docid": "neg:1840400_18",
"text": "In order to enhance the scanning range of planar phased arrays, a planar substrate integrated waveguide slot (SIWslot) antenna with wide beamwidth is proposed in this letter. The proposed antenna is fabricated on a single-layer substrate, which is fully covered with a metal ground on the back. The SIW works like a dielectric-filled rectangular waveguide working in the TE10 mode. There are four inclined slots etched on the top metal layer following the rules of rectangular waveguide slot antenna. The electric fields in the slots work as equivalent magnetic currents. As opposed to normal microstrip antennas, the equivalent magnetic currents from the slots over a larger metal ground can radiate with a wide beamwidth. Its operating bandwidth is from 5.4 to 6.45 GHz with a relative bandwidth of 17.7%. Meanwhile, the 3-dB beamwidth in the xz-plane is between 130° and 148° in the whole operating band. Furthermore, the SIW-slot element is employed in a 1 × 8 planar phased array. The measured results show that the main lobe of phased array can obtain a wide-angle scanning from -71° to 73° in the whole operating band.",
"title": ""
},
{
"docid": "neg:1840400_19",
"text": "A widespread folklore for explaining the success of Convolutional Neural Networks (CNNs) is that CNNs use a more compact representation than the Fullyconnected Neural Network (FNN) and thus require fewer training samples to accurately estimate their parameters. We initiate the study of rigorously characterizing the sample complexity of estimating CNNs. We show that for an m-dimensional convolutional filter with linear activation acting on a d-dimensional input, the sample complexity of achieving population prediction error of is r Opm{ q 2, whereas the sample-complexity for its FNN counterpart is lower bounded by Ωpd{ q samples. Since, in typical settings m ! d, this result demonstrates the advantage of using a CNN. We further consider the sample complexity of estimating a onehidden-layer CNN with linear activation where both the m-dimensional convolutional filter and the r-dimensional output weights are unknown. For this model, we show that the sample complexity is r O ` pm` rq{ 2 ̆ when the ratio between the stride size and the filter size is a constant. For both models, we also present lower bounds showing our sample complexities are tight up to logarithmic factors. Our main tools for deriving these results are a localized empirical process analysis and a new lemma characterizing the convolutional structure. We believe that these tools may inspire further developments in understanding CNNs.",
"title": ""
}
] |
1840401 | Wide Pulse Combined With Narrow-Pulse Generator for Food Sterilization | [
{
"docid": "pos:1840401_0",
"text": "Apoptosis — the regulated destruction of a cell — is a complicated process. The decision to die cannot be taken lightly, and the activity of many genes influence a cell's likelihood of activating its self-destruction programme. Once the decision is taken, proper execution of the apoptotic programme requires the coordinated activation and execution of multiple subprogrammes. Here I review the basic components of the death machinery, describe how they interact to regulate apoptosis in a coordinated manner, and discuss the main pathways that are used to activate cell death.",
"title": ""
}
] | [
{
"docid": "neg:1840401_0",
"text": "Fire accidents can cause numerous casualties and heavy property losses, especially, in petrochemical industry, such accidents are likely to cause secondary disasters. However, common fire drill training would cause loss of resources and pollution. We designed a multi-dimensional interactive somatosensory (MDIS) cloth system based on virtual reality technology to simulate fire accidents in petrochemical industry. It provides a vivid visual and somatosensory experience. A thermal radiation model is built in a virtual environment, and it could predict the destruction radius of a fire. The participant position changes are got from Kinect, and shown in virtual environment synchronously. The somatosensory cloth, which could both heat and refrigerant, provides temperature feedback based on thermal radiation results and actual distance. In this paper, we demonstrate the details of the design, and then verified its basic function. Heating deviation from model target is lower than 3.3 °C and refrigerant efficiency is approximately two times faster than heating efficiency.",
"title": ""
},
{
"docid": "neg:1840401_1",
"text": "Given the significance of placement in IC physical design, extensive research studies performed over the last 50 years addressed numerous aspects of global and detailed placement. The objectives and the constraints dominant in placement have been revised many times over, and continue to evolve. Additionally, the increasing scale of placement instances affects the algorithms of choice for high-performance tools. We survey the history of placement research, the progress achieved up to now, and outstanding challenges.",
"title": ""
},
{
"docid": "neg:1840401_2",
"text": "As neurobiological evidence points to the neocortex as the brain region mainly involved in high-level cognitive functions, an innovative model of neocortical information processing has been recently proposed. Based on a simplified model of a neocortical neuron, and inspired by experimental evidence of neocortical organisation, the Hierarchical Temporal Memory (HTM) model attempts at understanding intelligence, but also at building learning machines. This paper focuses on analysing HTM's ability for online, adaptive learning of sequences. In particular, we seek to determine whether the approach is robust to noise in its inputs, and to compare and contrast its performance and attributes to an alternative Hidden Markov Model (HMM) approach. We reproduce a version of a HTM network and apply it to a visual pattern recognition task under various learning conditions. Our first set of experiments explore the HTM network's capability to learn repetitive patterns and sequences of patterns within random data streams. Further experimentation involves assessing the network's learning performance in terms of inference and prediction under different noise conditions. HTM results are compared with those of a HMM trained at the same tasks. Online learning performance results demonstrate the HTM's capacity to make use of context in order to generate stronger predictions, whereas results on robustness to noise reveal an ability to deal with noisy environments. Our comparisons also, however, emphasise a manner in which HTM differs significantly from HMM, which is that HTM generates predicted observations rather than hidden states, and each observation is a sparse distributed representation.",
"title": ""
},
{
"docid": "neg:1840401_3",
"text": "Infrared sensors are used in Photoplethysmography measurements (PPG) to get blood flow parameters in the vascular system. It is a simple, low-cost non-invasive optical technique that is commonly placed on a finger or toe, to detect blood volume changes in the micro-vascular bed of tissue. The sensor use an infrared source and a photo detector to detect the infrared wave which is not absorbed. The recorded infrared waveform at the detector side is called the PPG signal. This paper reviews the various blood flow parameters that can be extracted from this PPG signal including the existence of an endothelial disfunction as an early detection tool of vascular diseases.",
"title": ""
},
{
"docid": "neg:1840401_4",
"text": "Metamorphism is a technique that mutates the binary code using different obfuscations and never keeps the same sequence of opcodes in the memory. This stealth technique provides the capability to a malware for evading detection by simple signature-based (such as instruction sequences, byte sequences and string signatures) anti-malware programs. In this paper, we present a new scheme named Annotated Control Flow Graph (ACFG) to efficiently detect such kinds of malware. ACFG is built by annotating CFG of a binary program and is used for graph and pattern matching to analyse and detect metamorphic malware. We also optimize the runtime of malware detection through parallelization and ACFG reduction, maintaining the same accuracy (without ACFG reduction) for malware detection. ACFG proposed in this paper: (i) captures the control flow semantics of a program; (ii) provides a faster matching of ACFGs and can handle malware with smaller CFGs, compared with other such techniques, without compromising the accuracy; (iii) contains more information and hence provides more accuracy than a CFG. Experimental evaluation of the proposed scheme using an existing dataset yields malware detection rate of 98.9% and false positive rate of 4.5%.",
"title": ""
},
{
"docid": "neg:1840401_5",
"text": "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground/background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-art models.",
"title": ""
},
{
"docid": "neg:1840401_6",
"text": "There has been great interest in developing methodologies that are capable of dealing with imprecision and uncertainty. The large amount of research currently being carried out in fuzzy and rough sets is representative of this. Many deep relationships have been established, and recent studies have concluded as to the complementary nature of the two methodologies. Therefore, it is desirable to extend and hybridize the underlying concepts to deal with additional aspects of data imperfection. Such developments offer a high degree of flexibility and provide robust solutions and advanced tools for data analysis. Fuzzy-rough set-based feature (FS) selection has been shown to be highly useful at reducing data dimensionality but possesses several problems that render it ineffective for large datasets. This paper proposes three new approaches to fuzzy-rough FS-based on fuzzy similarity relations. In particular, a fuzzy extension to crisp discernibility matrices is proposed and utilized. Initial experimentation shows that the methods greatly reduce dimensionality while preserving classification accuracy.",
"title": ""
},
{
"docid": "neg:1840401_7",
"text": "Among the different existing cryptographic file systems, EncFS has a unique feature that makes it attractive for backup setups involving untrusted (cloud) storage. It is a file-based overlay file system in normal operation (i.e., it maintains a directory hierarchy by storing encrypted representations of files and folders in a specific source folder), but its reverse mode allows to reverse this process: Users can mount deterministic, encrypted views of their local, unencrypted files on the fly, allowing synchronization to untrusted storage using standard tools like rsync without having to store encrypted representations on the local hard drive. So far, EncFS is a single-user solution: All files of a folder are encrypted using the same, static key; file access rights are passed through to the encrypted representation, but not otherwise considered. In this paper, we work out how multi-user support can be integrated into EncFS and its reverse mode in particular. We present an extension that a) stores individual files' owner/group information and permissions in a confidential and authenticated manner, and b) cryptographically enforces thereby specified read rights. For this, we introduce user-specific keys and an appropriate, automatic key management. Given a user's key and a complete encrypted source directory, the extension allows access to exactly those files the user is authorized for according to the corresponding owner/group/permissions information. Just like EncFS, our extension depends only on symmetric cryptographic primitives.",
"title": ""
},
{
"docid": "neg:1840401_8",
"text": "We present the results of an investigation into the nature of information needs of software developers who work in projects that are part of larger ecosystems. This work is based on a quantitative survey of 75 professional software developers. We corroborate the results identified in the survey with needs and motivations proposed in a previous survey and discover that tool support for developers working in an ecosystem context is even more meager than we thought: mailing lists and internet search are the most popular tools developers use to satisfy their ecosystem-related information needs.",
"title": ""
},
{
"docid": "neg:1840401_9",
"text": "Shallow trench isolation(STI) is the mainstream CMOS isolation technology for advanced integrated circuits. While STI process gives the isolation benefits due to its scalable characteristics, exploiting the compressive stress exerted by STI wells on device active regions to improve performance of devices has been one of the major industry focuses. However, in the present research of VLSI physical design, there has no yet a global optimization methodology on the whole chip layout to control the size of the STI wells, which affects the stress magnitude along with the size of active region of transistors. In this paper, we present a novel methodology that is capable of determining globally the optimal STI well width following the chip placement stage. The methodology is based on the observation that both of the terms in charge of chip width minimization and transistor channel mobility optimization in the objective function can be modeled as posynomials of the design variables, that is, the width of STI wells. Then, this stress aware placement optimization problem could be solved efficiently as a convex geometric programming (GP) problem. Finally, by a MOSEK GP problem solver, we do our STI width aware placement optimization on the given placements of some GSRC and IBM-PLACE benchmarks. Experiment results demonstrated that our methodology can obtain decent results with an acceptable runtime when satisfy the necessary location constraints from DRC specifications.",
"title": ""
},
{
"docid": "neg:1840401_10",
"text": "Pulmonary administration of drugs presents several advantages in the treatment of many diseases. Considering local and systemic delivery, drug inhalation enables a rapid and predictable onset of action and induces fewer side effects than other routes of administration. Three main inhalation systems have been developed for the aerosolization of drugs; namely, nebulizers, pressurized metered-dose inhalers (MDIs) and dry powder inhalers (DPIs). The latter are currently the most convenient alternative as they are breath-actuated and do not require the use of any propellants. The deposition site in the respiratory tract and the efficiency of inhaled aerosols are critically influenced by the aerodynamic diameter, size distribution, shape and density of particles. In the case of DPIs, since micronized particles are generally very cohesive and exhibit poor flow properties, drug particles are usually blended with coarse and fine carrier particles. This increases particle aerodynamic behavior and flow properties of the drugs and ensures accurate dosage of active ingredients. At present, particles with controlled properties are obtained by milling, spray drying or supercritical fluid techniques. Several excipients such as sugars, lipids, amino acids, surfactants, polymers and absorption enhancers have been tested for their efficacy in improving drug pulmonary administration. The purpose of this article is to describe various observations that have been made in the field of inhalation product development, especially for the dry powder inhalation formulation, and to review the use of various additives, their effectiveness and their potential toxicity for pulmonary administration.",
"title": ""
},
{
"docid": "neg:1840401_11",
"text": "Memory access bugs, including buffer overflows and uses of freed heap memory, remain a serious problem for programming languages like C and C++. Many memory error detectors exist, but most of them are either slow or detect a limited set of bugs, or both. This paper presents AddressSanitizer, a new memory error detector. Our tool finds out-of-bounds accesses to heap, stack, and global objects, as well as use-after-free bugs. It employs a specialized memory allocator and code instrumentation that is simple enough to be implemented in any compiler, binary translation system, or even in hardware. AddressSanitizer achieves efficiency without sacrificing comprehensiveness. Its average slowdown is just 73% yet it accurately detects bugs at the point of occurrence. It has found over 300 previously unknown bugs in the Chromium browser and many bugs in other software.",
"title": ""
},
{
"docid": "neg:1840401_12",
"text": "In this report, we provide a comparative analysis of different techniques for user intent classification towards the task of app recommendation. We analyse the performance of different models and architectures for multi-label classification over a dataset with a relative large number of classes and only a handful examples of each class. We focus, in particular, on memory network architectures, and compare how well the different versions perform under the task constraints. Since the classifier is meant to serve as a module in a practical dialog system, it needs to be able to work with limited training data and incorporate new data on the fly. We devise a 1-shot learning task to test the models under the above constraint. We conclude that relatively simple versions of memory networks perform better than other approaches. Although, for tasks with very limited data, simple non-parametric methods perform comparably, without needing the extra training data.",
"title": ""
},
{
"docid": "neg:1840401_13",
"text": "Dengue is the second most common mosquito-borne disease affecting human beings. In 2009, WHO endorsed new guidelines that, for the first time, consider neurological manifestations in the clinical case classification for severe dengue. Dengue can manifest with a wide range of neurological features, which have been noted--depending on the clinical setting--in 0·5-21% of patients with dengue admitted to hospital. Furthermore, dengue was identified in 4-47% of admissions with encephalitis-like illness in endemic areas. Neurological complications can be categorised into dengue encephalopathy (eg, caused by hepatic failure or metabolic disorders), encephalitis (caused by direct virus invasion), neuromuscular complications (eg, Guillain-Barré syndrome or transient muscle dysfunctions), and neuro-ophthalmic involvement. However, overlap of these categories is possible. In endemic countries and after travel to these regions, dengue should be considered in patients presenting with fever and acute neurological manifestations.",
"title": ""
},
{
"docid": "neg:1840401_14",
"text": "Plants endure a variety of abiotic and biotic stresses, all of which cause major limitations to production. Among abiotic stressors, heavy metal contamination represents a global environmental problem endangering humans, animals, and plants. Exposure to heavy metals has been documented to induce changes in the expression of plant proteins. Proteins are macromolecules directly responsible for most biological processes in a living cell, while protein function is directly influenced by posttranslational modifications, which cannot be identified through genome studies. Therefore, it is necessary to conduct proteomic studies, which enable the elucidation of the presence and role of proteins under specific environmental conditions. This review attempts to present current knowledge on proteomic techniques developed with an aim to detect the response of plant to heavy metal stress. Significant contributions to a better understanding of the complex mechanisms of plant acclimation to metal stress are also discussed.",
"title": ""
},
{
"docid": "neg:1840401_15",
"text": "The fundamental problem of finding a suitable representation of the orientation of 3D surfaces is considered. A representation is regarded suitable if it meets three basic requirements: Uniqueness, Uniformity and Polar separability. A suitable tensor representation is given. At the heart of the problem lies the fact that orientation can only be defined mod 180◦ , i.e the fact that a 180◦ rotation of a line or a plane amounts to no change at all. For this reason representing a plane using its normal vector leads to ambiguity and such a representation is consequently not suitable. The ambiguity can be eliminated by establishing a mapping between R3 and a higherdimensional tensor space. The uniqueness requirement implies a mapping that map all pairs of 3D vectors x and -x onto the same tensor T. Uniformity implies that the mapping implicitly carries a definition of distance between 3D planes (and lines) that is rotation invariant and monotone with the angle between the planes. Polar separability means that the norm of the representing tensor T is rotation invariant. One way to describe the mapping is that it maps a 3D sphere into 6D in such a way that the surface is uniformly stretched and all pairs of antipodal points maps onto the same tensor. It is demonstrated that the above mapping can be realized by sampling the 3D space using a specified class of symmetrically distributed quadrature filters. It is shown that 6 quadrature filters are necessary to realize the desired mapping, the orientations of the filters given by lines trough the vertices of an icosahedron. The desired tensor representation can be obtained by simply performing a weighted summation of the quadrature filter outputs. This situation is indeed satisfying as it implies a simple implementation of the theory and that requirements on computational capacity can be kept within reasonable limits. Noisy neigborhoods and/or linear combinations of tensors produced by the mapping will in general result in a tensor that has no direct counterpart in R3. In an adaptive hierarchical signal processing system, where information is flowing both up (increasing the level of abstraction) and down (for adaptivity and guidance), it is necessary that a meaningful inverse exists for each levelaltering operation. It is shown that the point in R3 that corresponds to the best approximation of a given tensor is given by the largest eigenvalue times the corresponding eigenvector of the tensor.",
"title": ""
},
{
"docid": "neg:1840401_16",
"text": "The supernodal method for sparse Cholesky factorization represents the factor L as a set of supernodes, each consisting of a contiguous set of columns of L with identical nonzero pattern. A conventional supernode is stored as a dense submatrix. While this is suitable for sparse Cholesky factorization where the nonzero pattern of L does not change, it is not suitable for methods that modify a sparse Cholesky factorization after a low-rank change to A (an update/downdate, Ā = A ± WWT). Supernodes merge and split apart during an update/downdate. Dynamic supernodes are introduced which allow a sparse Cholesky update/downdate to obtain performance competitive with conventional supernodal methods. A dynamic supernodal solver is shown to exceed the performance of the conventional (BLAS-based) supernodal method for solving triangular systems. These methods are incorporated into CHOLMOD, a sparse Cholesky factorization and update/downdate package which forms the basis of x = A\\b MATLAB when A is sparse and symmetric positive definite.",
"title": ""
},
{
"docid": "neg:1840401_17",
"text": "Intelligent fault diagnosis techniques have replaced time-consuming and unreliable human analysis, increasing the efficiency of fault diagnosis. Deep learning models can improve the accuracy of intelligent fault diagnosis with the help of their multilayer nonlinear mapping ability. This paper proposes a novel method named Deep Convolutional Neural Networks with Wide First-layer Kernels (WDCNN). The proposed method uses raw vibration signals as input (data augmentation is used to generate more inputs), and uses the wide kernels in the first convolutional layer for extracting features and suppressing high frequency noise. Small convolutional kernels in the preceding layers are used for multilayer nonlinear mapping. AdaBN is implemented to improve the domain adaptation ability of the model. The proposed model addresses the problem that currently, the accuracy of CNN applied to fault diagnosis is not very high. WDCNN can not only achieve 100% classification accuracy on normal signals, but also outperform the state-of-the-art DNN model which is based on frequency features under different working load and noisy environment conditions.",
"title": ""
},
{
"docid": "neg:1840401_18",
"text": "Users with anomalous behaviors in online communication systems (e.g. email and social medial platforms) are potential threats to society. Automated anomaly detection based on advanced machine learning techniques has been developed to combat this issue; challenges remain, though, due to the difficulty of obtaining proper ground truth for model training and evaluation. Therefore, substantial human judgment on the automated analysis results is often required to better adjust the performance of anomaly detection. Unfortunately, techniques that allow users to understand the analysis results more efficiently, to make a confident judgment about anomalies, and to explore data in their context, are still lacking. In this paper, we propose a novel visual analysis system, TargetVue, which detects anomalous users via an unsupervised learning model and visualizes the behaviors of suspicious users in behavior-rich context through novel visualization designs and multiple coordinated contextual views. Particularly, TargetVue incorporates three new ego-centric glyphs to visually summarize a user's behaviors which effectively present the user's communication activities, features, and social interactions. An efficient layout method is proposed to place these glyphs on a triangle grid, which captures similarities among users and facilitates comparisons of behaviors of different users. We demonstrate the power of TargetVue through its application in a social bot detection challenge using Twitter data, a case study based on email records, and an interview with expert users. Our evaluation shows that TargetVue is beneficial to the detection of users with anomalous communication behaviors.",
"title": ""
},
{
"docid": "neg:1840401_19",
"text": "Due to filmmakers focusing on violence, traumatic events, and hallucinations when depicting characters with schizophrenia, critics have scrutinized the representation of mental disorders in contemporary films for years. This study compared previous research on schizophrenia with the fictional representation of the disease in contemporary films. Through content analysis, this study examined 10 films featuring a schizophrenic protagonist, tallying moments of violence and charting if they fell into four common stereotypes. Results showed a high frequency of violent behavior in films depicting schizophrenic characters, implying that those individuals are overwhelmingly dangerous and to be feared.",
"title": ""
}
] |
1840402 | Towards Generalization and Simplicity in Continuous Control | [
{
"docid": "pos:1840402_0",
"text": "We explore learning-based approaches for feedback control of a dexterous five-finger hand performing non-prehensile manipulation. First, we learn local controllers that are able to perform the task starting at a predefined initial state. These controllers are constructed using trajectory optimization with respect to locally-linear time-varying models learned directly from sensor data. In some cases, we initialize the optimizer with human demonstrations collected via teleoperation in a virtual environment. We demonstrate that such controllers can perform the task robustly, both in simulation and on the physical platform, for a limited range of initial conditions around the trained starting state. We then consider two interpolation methods for generalizing to a wider range of initial conditions: deep learning, and nearest neighbors. We find that nearest neighbors achieve higher performance under full observability, while a neural network proves advantages under partial observability: it uses only tactile and proprioceptive feedback but no feedback about the object (i.e. it performs the task blind) and learns a time-invariant policy. In contrast, the nearest neighbors method switches between time-varying local controllers based on the proximity of initial object states sensed via motion capture. While both generalization methods leave room for improvement, our work shows that (i) local trajectory-based controllers for complex non-prehensile manipulation tasks can be constructed from surprisingly small amounts of training data, and (ii) collections of such controllers can be interpolated to form more global controllers. Results are summarized in the supplementary video: https://youtu.be/E0wmO6deqjo",
"title": ""
}
] | [
{
"docid": "neg:1840402_0",
"text": "Visual attention plays an important role to understand images and demonstrates its effectiveness in generating natural language descriptions of images. On the other hand, recent studies show that language associated with an image can steer visual attention in the scene during our cognitive process. Inspired by this, we introduce a text-guided attention model for image captioning, which learns to drive visual attention using associated captions. For this model, we propose an exemplarbased learning approach that retrieves from training data associated captions with each image, and use them to learn attention on visual features. Our attention model enables to describe a detailed state of scenes by distinguishing small or confusable objects effectively. We validate our model on MSCOCO Captioning benchmark and achieve the state-of-theart performance in standard metrics.",
"title": ""
},
{
"docid": "neg:1840402_1",
"text": "Metastasis is a characteristic trait of most tumour types and the cause for the majority of cancer deaths. Many tumour types, including melanoma and breast and prostate cancers, first metastasize via lymphatic vessels to their regional lymph nodes. Although the connection between lymph node metastases and shorter survival times of patients was made decades ago, the active involvement of the lymphatic system in cancer, metastasis has been unravelled only recently, after molecular markers of lymphatic vessels were identified. A growing body of evidence indicates that tumour-induced lymphangiogenesis is a predictive indicator of metastasis to lymph nodes and might also be a target for prevention of metastasis. This article reviews the current understanding of lymphangiogenesis in cancer anti-lymphangiogenic strategies for prevention and therapy of metastatic disease, quantification of lymphangiogenesis for the prognosis and diagnosis of metastasis and in vivo imaging technologies for the assessment of lymphatic vessels, drainage and lymph nodes.",
"title": ""
},
{
"docid": "neg:1840402_2",
"text": "In recent years, transition-based parsers have shown promise in terms of efficiency and accuracy. Though these parsers have been extensively explored for multiple Indian languages, there is still considerable scope for improvement by properly incorporating syntactically relevant information. In this article, we enhance transition-based parsing of Hindi and Urdu by redefining the features and feature extraction procedures that have been previously proposed in the parsing literature of Indian languages. We propose and empirically show that properly incorporating syntactically relevant information like case marking, complex predication and grammatical agreement in an arc-eager parsing model can significantly improve parsing accuracy. Our experiments show an absolute improvement of ∼2% LAS for parsing of both Hindi and Urdu over a competitive baseline which uses rich features like part-of-speech (POS) tags, chunk tags, cluster ids and lemmas. We also propose some heuristics to identify ezafe constructions in Urdu texts which show promising results in parsing these constructions.",
"title": ""
},
{
"docid": "neg:1840402_3",
"text": "Received: 12 June 2006 Revised: 10 May 2007 Accepted: 22 July 2007 Abstract Although there is widespread agreement that leadership has important effects on information technology (IT) acceptance and use, relatively little empirical research to date has explored this phenomenon in detail. This paper integrates the unified theory of acceptance and use of technology (UTAUT) with charismatic leadership theory, and examines the role of project champions influencing user adoption. PLS analysis of survey data collected from 209 employees in seven organizations that had engaged in a large-scale IT implementation revealed that project champion charisma was positively associated with increased performance expectancy, effort expectancy, social influence and facilitating condition perceptions of users. Theoretical and managerial implications are discussed, and suggestions for future research in this area are provided. European Journal of Information Systems (2007) 16, 494–510. doi:10.1057/palgrave.ejis.3000682",
"title": ""
},
{
"docid": "neg:1840402_4",
"text": "In this paper, we compare the differences between traditional Kelly Criterion and Vince's optimal f through backtesting actual financial transaction data. We apply a momentum trading strategy to the Taiwan Weighted Index Futures, and analyze its profit-and-loss vectors of Kelly Criterion and Vince's optimal f, respectively. Our numerical experiments demonstrate that there is nearly 90% chance that the difference gap between the bet ratio recommended by Kelly criterion and and Vince's optimal f lies within 2%. Therefore, in the actual transaction, the values from Kelly Criterion could be taken directly as the optimal bet ratio for funds control.",
"title": ""
},
{
"docid": "neg:1840402_5",
"text": "Generating a reasonable ending for a given story context, i.e., story ending generation, is a strong indication of story comprehension. This task requires not only to understand the context clues which play an important role in planning the plot, but also to handle implicit knowledge to make a reasonable, coherent story. In this paper, we devise a novel model for story ending generation. The model adopts an incremental encoding scheme to represent context clues which are spanning in the story context. In addition, commonsense knowledge is applied through multi-source attention to facilitate story comprehension, and thus to help generate coherent and reasonable endings. Through building context clues and using implicit knowledge, the model is able to produce reasonable story endings. Automatic and manual evaluation shows that our model can generate more reasonable story endings than state-of-the-art baselines. 1",
"title": ""
},
{
"docid": "neg:1840402_6",
"text": "Magnetic resonance (MR) is the best way to assess the new anatomy of the pelvis after male to female (MtF) sex reassignment surgery. The aim of the study was to evaluate the radiological appearance of the small pelvis after MtF surgery and to compare it with the normal women's anatomy. Fifteen patients who underwent MtF surgery were subjected to pelvic MR at least 6 months after surgery. The anthropometric parameters of the small pelvis were measured and compared with those of ten healthy women (control group). Our personal technique (creation of the mons Veneris under the pubic skin) was performed in all patients. In patients who underwent MtF surgery, the mean neovaginal depth was slightly superior than in women (P=0.009). The length of the inferior pelvic aperture and of the inlet of pelvis was higher in the control group (P<0.005). The inclination between the axis of the neovagina and the inferior pelvis aperture, the thickness of the mons Veneris and the thickness of the rectovaginal septum were comparable between the two study groups. MR consents a detailed assessment of the new pelvic anatomy after MtF surgery. The anthropometric parameters measured in our patients were comparable with those of women.",
"title": ""
},
{
"docid": "neg:1840402_7",
"text": "The objective of this study is to present an offline control of highly non-linear inverted pendulum system moving on a plane inclined at an angle of 10° from horizontal. The stabilisation was achieved using three different soft-computing control techniques i.e. Proportional-integral-derivative (PID), Fuzzy logic and Adaptive neuro fuzzy inference system (ANFIS). A Matlab-Simulink model of the proposed system was initially developed which was further simulated using PID controllers based on trial and error method. The ANFIS controller were trained using data sets generated from simulation results of PID controller. The ANFIS controllers were designed using only three membership functions. A fuzzy logic control of the proposed system is also shown using nine membership functions. The study compares the three techniques in terms of settling time, maximum overshoot and steady state error. The simulation results are shown with the help of graphs and tables which validates the effectiveness of proposed techniques.",
"title": ""
},
{
"docid": "neg:1840402_8",
"text": "In 2001, JPL commissioned four industry teams to make a fresh examination of Mars Sample Return (MSR) mission architectures. As new fiscal realities of a cost-capped Mars Exploration Program unfolded, it was evident that the converged-upon MSR concept did not fit reasonably within a balanced program. Therefore, along with a new MSR Science Steering Group, JPL asked the industry teams plus JPL's Team-X to explore ways to reduce the cost. A paper presented at last year's conference described the emergence of a new, affordable \"Groundbreaking-MSR\" concept (Mattingly et al., 2003). This work addresses the continued evolution of the Groundbreaking MSR concept over the last year. One of the tenets of the low-cost approach is to use substantial heritage from an earlier mission, Mars Science Laboratory (MSL). Recently, the MSL project developed and switched its baseline to a revolutionary landing approach, coined \"skycrane\" where the MSL, which is a rover, would be lowered gently to the Martian surface from a hovering vehicle. MSR has adopted this approach in its mission studies, again continuing to capitalize on the heritage for a significant portion of the new lander. In parallel, a MSR Technology Board was formed to reexamine MSR technology needs and participate in a continuing refinement of architectural trades. While the focused technology program continues to be definitized through the remainder of this year, the current assessment of what technology development is required, is discussed in this paper. In addition, the results of new trade studies and considerations will be discussed. Adopting these changes, the Groundbreaking MSR concept has shifted to that presented in this paper. It remains a project that is affordable and meets the basic science needs defined by the MSR Science Steering Group in 2002.",
"title": ""
},
{
"docid": "neg:1840402_9",
"text": "Scene-agnostic visual inpainting remains very challenging despite progress in patch-based methods. Recently, Pathak et al. [26] have introduced convolutional \"context encoders'' (CEs) for unsupervised feature learning through image completion tasks. With the additional help of adversarial training, CEs turned out to be a promising tool to complete complex structures in real inpainting problems. In the present paper we propose to push further this key ability by relying on perceptual reconstruction losses at training time. We show on a wide variety of visual scenes the merit of the approach forstructural inpainting, and confirm it through a user study. Combined with the optimization-based refinement of [32] with neural patches, our context encoder opens up new opportunities for prior-free visual inpainting.",
"title": ""
},
{
"docid": "neg:1840402_10",
"text": "We present a method to generate a robot control strategy that maximizes the probability to accomplish a task. The task is given as a Linear Temporal Logic (LTL) formula over a set of properties that can be satisfied at the regions of a partitioned environment. We assume that the probabilities with which the properties are satisfied at the regions are known, and the robot can determine the truth value of a proposition only at the current region. Motivated by several results on partitioned-based abstractions, we assume that the motion is performed on a graph. To account for noisy sensors and actuators, we assume that a control action enables several transitions with known probabilities. We show that this problem can be reduced to the problem of generating a control policy for a Markov Decision Process (MDP) such that the probability of satisfying an LTL formula over its states is maximized. We provide a complete solution for the latter problem that builds on existing results from probabilistic model checking. We include an illustrative case study.",
"title": ""
},
{
"docid": "neg:1840402_11",
"text": "Many organizations aspire to adopt agile processes to take advantage of the numerous benefits that they offer to an organization. Those benefits include, but are not limited to, quicker return on investment, better software quality, and higher customer satisfaction. To date, however, there is no structured process (at least that is published in the public domain) that guides organizations in adopting agile practices. To address this situation, we present the agile adoption framework and the innovative approach we have used to implement it. The framework consists of two components: an agile measurement index, and a four-stage process, that together guide and assist the agile adoption efforts of organizations. More specifically, the Sidky Agile Measurement Index (SAMI) encompasses five agile levels that are used to identify the agile potential of projects and organizations. The four-stage process, on the other hand, helps determine (a) whether or not organizations are ready for agile adoption, and (b) guided by their potential, what set of agile practices can and should be introduced. To help substantiate the “goodness” of the Agile Adoption Framework, we presented it to various members of the agile community, and elicited responses through questionnaires. The results of that substantiation effort are encouraging, and are also presented in this paper.",
"title": ""
},
{
"docid": "neg:1840402_12",
"text": "OBJECTIVE\nSelf-stigma is highly prevalent in schizophrenia and can be seen as an important factor leading to low self-esteem. It is however unclear how psychological factors and actual adverse events contribute to self-stigma. This study empirically examines how symptom severity and the experience of being victimized affect both self-stigma and self-esteem.\n\n\nMETHODS\nPersons with a schizophrenia spectrum disorder (N = 102) were assessed with a battery of self-rating questionnaires and interviews. Structural equation modelling (SEM) was subsequently applied to test the fit of three models: a model with symptoms and victimization as direct predictors of self-stigma and negative self-esteem, a model with an indirect effect for symptoms mediated by victimization and a third model with a direct effect for negative symptoms and an indirect effect for positive symptoms mediated by victimization.\n\n\nRESULTS\nResults showed good model fit for the direct effects of both symptoms and victimization: both lead to an increase of self-stigma and subsequent negative self-esteem. Negative symptoms had a direct association with self-stigma, while the relationship between positive symptoms and self-stigma was mediated by victimization.\n\n\nCONCLUSIONS\nOur findings suggest that symptoms and victimization may contribute to self-stigma, leading to negative self-esteem in individuals with a schizophrenia spectrum disorder. Especially for patients with positive symptoms victimization seems to be an important factor in developing self-stigma. Given the burden of self-stigma on patients and the constraining effects on societal participation and service use, interventions targeting victimization as well as self-stigma are needed.",
"title": ""
},
{
"docid": "neg:1840402_13",
"text": "Nairovirus, one of five bunyaviral genera, includes seven species. Genomic sequence information is limited for members of the Dera Ghazi Khan, Hughes, Qalyub, Sakhalin, and Thiafora nairovirus species. We used next-generation sequencing and historical virus-culture samples to determine 14 complete and nine coding-complete nairoviral genome sequences to further characterize these species. Previously unsequenced viruses include Abu Mina, Clo Mor, Great Saltee, Hughes, Raza, Sakhalin, Soldado, and Tillamook viruses. In addition, we present genomic sequence information on additional isolates of previously sequenced Avalon, Dugbe, Sapphire II, and Zirqa viruses. Finally, we identify Tunis virus, previously thought to be a phlebovirus, as an isolate of Abu Hammad virus. Phylogenetic analyses indicate the need for reassignment of Sapphire II virus to Dera Ghazi Khan nairovirus and reassignment of Hazara, Tofla, and Nairobi sheep disease viruses to novel species. We also propose new species for the Kasokero group (Kasokero, Leopards Hill, Yogue viruses), the Ketarah group (Gossas, Issyk-kul, Keterah/soft tick viruses) and the Burana group (Wēnzhōu tick virus, Huángpí tick virus 1, Tǎchéng tick virus 1). Our analyses emphasize the sister relationship of nairoviruses and arenaviruses, and indicate that several nairo-like viruses (Shāyáng spider virus 1, Xīnzhōu spider virus, Sānxiá water strider virus 1, South Bay virus, Wǔhàn millipede virus 2) require establishment of novel genera in a larger nairovirus-arenavirus supergroup.",
"title": ""
},
{
"docid": "neg:1840402_14",
"text": "Relational database management systems (RDBMSs) are powerful because they are able to optimize and answer queries against any relational database. A natural language interface (NLI) for a database, on the other hand, is tailored to support that specific database. In this work, we introduce a general purpose transfer-learnable NLI with the goal of learning one model that can be used as NLI for any relational database. We adopt the data management principle of separating data and its schema, but with the additional support for the idiosyncrasy and complexity of natural languages. Specifically, we introduce an automatic annotation mechanism that separates the schema and the data, where the schema also covers knowledge about natural language. Furthermore, we propose a customized sequence model that translates annotated natural language queries to SQL statements. We show in experiments that our approach outperforms previous NLI methods on the WikiSQL dataset and the model we learned can be applied to another benchmark dataset OVERNIGHT without retraining.",
"title": ""
},
{
"docid": "neg:1840402_15",
"text": "A patient with upper limb dimelia including a double scapula, humerus, radius, and ulna, 11 metacarpals and digits (5 on the superior side, 6 on the inferior side) was treated with a simple amputation of the inferior limb resulting in cosmetic improvement and maintenance of range of motion in the preserved limb. During the amputation, the 2 limbs were found to be anatomically separate except for the ulnar nerve, which, in the superior limb, bifurcated into the sensory branch of radial nerve in the inferior limb, and the brachial artery, which bifurcated into the radial artery. Each case of this rare anomaly requires its own individually carefully planned surgical procedure.",
"title": ""
},
{
"docid": "neg:1840402_16",
"text": "It’s useful to automatically transform an image from its original form to some synthetic form (style, partial contents, etc.), while keeping the original structure or semantics. We define this requirement as the ”image-to-image translation” problem, and propose a general approach to achieve it, based on deep convolutional and conditional generative adversarial networks (GANs), which has gained a phenomenal success to learn mapping images from noise input since 2014. In this work, we develop a two step (unsupervised) learning method to translate images between different domains by using unlabeled images without specifying any correspondence between them, so that to avoid the cost of acquiring labeled data. Compared with prior works, we demonstrated the capacity of generality in our model, by which variance of translations can be conduct by a single type of model. Such capability is desirable in applications like bidirectional translation",
"title": ""
},
{
"docid": "neg:1840402_17",
"text": "In the last two decades, the Lattice Boltzmann method (LBM) has emerged as a promising tool for modelling the Navier-Stokes equations and simulating complex fluid flows. LBM is based on microscopic models and mesoscopic kinetic equations. In some perspective, it can be viewed as a finite difference method for solving the Boltzmann transport equation. Moreover the Navier-Stokes equations can be recovered by LBM with a proper choice of the collision operator. In Section 2 and 3, we first introduce this method and describe some commonly used boundary conditions. In Section 4, the validity of this method is confirmed by comparing the numerical solution to the exact solution of the steady plane Poiseuille flow and convergence of solution is established. Some interesting numerical simulations, including the lid-driven cavity flow, flow past a circular cylinder and the Rayleigh-Bénard convection for a range of Reynolds numbers, are carried out in Section 5, 6 and 7. In Section 8, we briefly highlight the procedure of recovering the Navier-Stokes equations from LBM. A summary is provided in Section 9.",
"title": ""
},
{
"docid": "neg:1840402_18",
"text": "Autoencoders learn data representations (codes) in such a way that the input is reproduced at the output of the network. However, it is not always clear what kind of properties of the input data need to be captured by the codes. Kernel machines have experienced great success by operating via inner-products in a theoretically well-defined reproducing kernel Hilbert space, hence capturing topological properties of input data. In this paper, we enhance the autoencoder’s ability to learn effective data representations by aligning inner products between codes with respect to a kernel matrix. By doing so, the proposed kernelized autoencoder allows learning similarity-preserving embeddings of input data, where the notion of similarity is explicitly controlled by the user and encoded in a positive semi-definite kernel matrix. Experiments are performed for evaluating both reconstruction and kernel alignment performance in classification tasks and visualization of high-dimensional data. Additionally, we show that our method is capable to emulate kernel principal component analysis on a denoising task, obtaining competitive results at a much lower computational cost.",
"title": ""
}
] |
1840403 | Warning traffic sign recognition using a HOG-based K-d tree | [
{
"docid": "pos:1840403_0",
"text": "This paper describes a computer vision based system for real-time robust traffic sign detection, tracking, and recognition. Such a framework is of major interest for driver assistance in an intelligent automotive cockpit environment. The proposed approach consists of two components. First, signs are detected using a set of Haar wavelet features obtained from AdaBoost training. Compared to previously published approaches, our solution offers a generic, joint modeling of color and shape information without the need of tuning free parameters. Once detected, objects are efficiently tracked within a temporal information propagation framework. Second, classification is performed using Bayesian generative modeling. Making use of the tracking information, hypotheses are fused over multiple frames. Experiments show high detection and recognition accuracy and a frame rate of approximately 10 frames per second on a standard PC.",
"title": ""
}
] | [
{
"docid": "neg:1840403_0",
"text": "There is growing evidence that client firms expect outsourcing suppliers to transform their business. Indeed, most outsourcing suppliers have delivered IT operational and business process innovation to client firms; however, achieving strategic innovation through outsourcing has been perceived to be far more challenging. Building on the growing interest in the IS outsourcing literature, this paper seeks to advance our understanding of the role that relational and contractual governance plays in achieving strategic innovation through outsourcing. We hypothesized and tested empirically the relationship between the quality of client-supplier relationships and the likelihood of achieving strategic innovation, and the interaction effect of different contract types, such as fixed-price, time and materials, partnership and their combinations. Results from a pan-European survey of 248 large firms suggest that high-quality relationships between clients and suppliers may indeed help achieve strategic innovation through outsourcing. However, within the spectrum of various outsourcing contracts, only the partnership contract, when included in the client contract portfolio alongside either fixed-price, time and materials or their combination, presents a significant positive effect on relational governance and is likely to strengthen the positive effect of the quality of client-supplier relationships on strategic innovation.",
"title": ""
},
{
"docid": "neg:1840403_1",
"text": "Social media comprises interactive applications and platforms for creating, sharing and exchange of user-generated contents. The past ten years have brought huge growth in social media, especially online social networking services, and it is changing our ways to organize and communicate. It aggregates opinions and feelings of diverse groups of people at low cost. Mining the attributes and contents of social media gives us an opportunity to discover social structure characteristics, analyze action patterns qualitatively and quantitatively, and sometimes the ability to predict future human related events. In this paper, we firstly discuss the realms which can be predicted with current social media, then overview available predictors and techniques of prediction, and finally discuss challenges and possible future directions.",
"title": ""
},
{
"docid": "neg:1840403_2",
"text": "The primary goals in use of half-bridge LLC series-resonant converter (LLC-SRC) are high efficiency, low noise, and wide-range regulation. A voltage-clamped drive circuit for simultaneously driving both primary and secondary switches is proposed to achieve synchronous rectification (SR) at switching frequency higher than the dominant resonant frequency. No high/low-side driver circuit for half-bridge switches of LLC-SRC is required and less circuit complexity is achieved. The SR mode LLC-SRC developed for reducing output rectification losses is described along with steady-state analysis, gate drive strategy, and its experiments. Design consideration is described thoroughly so as to build up a reference for design and realization. A design example of 240W SR LLC-SRC is examined and an average efficiency as high as 95% at full load is achieved. All performances verified by simulation and experiment are close to the theoretical predictions.",
"title": ""
},
{
"docid": "neg:1840403_3",
"text": "Human research biobanks have rapidly expanded in the past 20 years, in terms of both their complexity and utility. To date there exists no agreement upon classification schema for these biobanks. This is an important issue to address for several reasons: to ensure that the diversity of biobanks is appreciated, to assist researchers in understanding what type of biobank they need access to, and to help institutions/funding bodies appreciate the varying level of support required for different types of biobanks. To capture the degree of complexity, specialization, and diversity that exists among human research biobanks, we propose here a new classification schema achieved using a conceptual classification approach. This schema is based on 4 functional biobank \"elements\" (donor/participant, design, biospecimens, and brand), which we feel are most important to the major stakeholder groups (public/participants, members of the biobank community, health care professionals/researcher users, sponsors/funders, and oversight bodies), and multiple intrinsic features or \"subelements\" (eg, the element \"biospecimens\" could be further classified based on preservation method into fixed, frozen, fresh, live, and desiccated). We further propose that the subelements relating to design (scale, accrual, data format, and data content) and brand (user, leadership, and sponsor) should be specifically recognized by individual biobanks and included in their communications to the broad stakeholder audience.",
"title": ""
},
{
"docid": "neg:1840403_4",
"text": "The idea of applying IOT technologies to smart home system is introduced. An original architecture of the integrated system is analyzed with its detailed introduction. This architecture has great scalability. Based on this proposed architecture many applications can be integrated into the system through uniform interface. Agents are proposed to communicate with appliances through RFID tags. Key issues to be solved to promote the development of smart home system are also discussed.",
"title": ""
},
{
"docid": "neg:1840403_5",
"text": "Weather forecasting provides numerous societal benefits, from extreme weather warnings to agricultural planning. In recent decades, advances in forecasting have been rapid, arising from improved observations and models, and better integration of these through data assimilation and related techniques. Further improvements are not yet constrained by limits on predictability. Better forecasting, in turn, can contribute to a wide range of environmental forecasting, from forest-fire smoke to bird migrations.",
"title": ""
},
{
"docid": "neg:1840403_6",
"text": "In this work we present a technique for using natural language to help reinforcement learning generalize to unseen environments using neural machine translation techniques. These techniques are then integrated into policy shaping to make it more effective at learning in unseen environments. We evaluate this technique using the popular arcade game, Frogger, and show that our modified policy shaping algorithm improves over a Q-learning agent as well as a baseline version of policy shaping.",
"title": ""
},
{
"docid": "neg:1840403_7",
"text": "Inspired by the recent development of deep network-based methods in semantic image segmentation, we introduce an end-to-end trainable model for face mask extraction in video sequence. Comparing to landmark-based sparse face shape representation, our method can produce the segmentation masks of individual facial components, which can better reflect their detailed shape variations. By integrating convolutional LSTM (ConvLSTM) algorithm with fully convolutional networks (FCN), our new ConvLSTM-FCN model works on a per-sequence basis and takes advantage of the temporal correlation in video clips. In addition, we also propose a novel loss function, called segmentation loss, to directly optimise the intersection over union (IoU) performances. In practice, to further increase segmentation accuracy, one primary model and two additional models were trained to focus on the face, eyes, and mouth regions, respectively. Our experiment shows the proposed method has achieved a 16.99% relative improvement (from 54.50 to 63.76% mean IoU) over the baseline FCN model on the 300 Videos in the Wild (300VW) dataset.",
"title": ""
},
{
"docid": "neg:1840403_8",
"text": "Human papillomavirus (HPV) is the most important etiological factor for cervical cancer. A recent study demonstrated that more than 20 HPV types were thought to be oncogenic for uterine cervical cancer. Notably, more than one-half of women show cervical HPV infections soon after their sexual debut, and about 90 % of such infections are cleared within 3 years. Immunity against HPV might be important for elimination of the virus. The innate immune responses involving macrophages, natural killer cells, and natural killer T cells may play a role in the first line of defense against HPV infection. In the second line of defense, adaptive immunity via cytotoxic T lymphocytes (CTLs) targeting HPV16 E2 and E6 proteins appears to eliminate cells infected with HPV16. However, HPV can evade host immune responses. First, HPV does not kill host cells during viral replication and therefore neither presents viral antigen nor induces inflammation. HPV16 E6 and E7 proteins downregulate the expression of type-1 interferons (IFNs) in host cells. The lack of co-stimulatory signals by inflammatory cytokines including IFNs during antigen recognition may induce immune tolerance rather than the appropriate responses. Moreover, HPV16 E5 protein downregulates the expression of HLA-class 1, and it facilitates evasion of CTL attack. These mechanisms of immune evasion may eventually support the establishment of persistent HPV infection, leading to the induction of cervical cancer. Considering such immunological events, prophylactic HPV16 and 18 vaccine appears to be the best way to prevent cervical cancer in women who are immunized in adolescence.",
"title": ""
},
{
"docid": "neg:1840403_9",
"text": "Social media sites (e.g., Flickr, YouTube, and Facebook) are a popular distribution outlet for users looking to share their experiences and interests on the Web. These sites host substantial amounts of user-contributed materials (e.g., photographs, videos, and textual content) for a wide variety of real-world events of different type and scale. By automatically identifying these events and their associated user-contributed social media documents, which is the focus of this paper, we can enable event browsing and search in state-of-the-art search engines. To address this problem, we exploit the rich \"context\" associated with social media content, including user-provided annotations (e.g., title, tags) and automatically generated information (e.g., content creation time). Using this rich context, which includes both textual and non-textual features, we can define appropriate document similarity metrics to enable online clustering of media to events. As a key contribution of this paper, we explore a variety of techniques for learning multi-feature similarity metrics for social media documents in a principled manner. We evaluate our techniques on large-scale, real-world datasets of event images from Flickr. Our evaluation results suggest that our approach identifies events, and their associated social media documents, more effectively than the state-of-the-art strategies on which we build.",
"title": ""
},
{
"docid": "neg:1840403_10",
"text": "In this paper, we investigate the Chinese calligraphy synthesis problem: synthesizing Chinese calligraphy images with specified style from standard font(eg. Hei font) images (Fig. 1(a)). Recent works mostly follow the stroke extraction and assemble pipeline which is complex in the process and limited by the effect of stroke extraction. In this work we treat the calligraphy synthesis problem as an image-to-image translation problem and propose a deep neural network based model which can generate calligraphy images from standard font images directly. Besides, we also construct a large scale benchmark that contains various styles for Chinese calligraphy synthesis. We evaluate our method as well as some baseline methods on the proposed dataset, and the experimental results demonstrate the effectiveness of our proposed model.",
"title": ""
},
{
"docid": "neg:1840403_11",
"text": "Purpose – The purpose of this paper is to evaluate the influence of psychological hardiness, social judgment, and “Big Five” personality dimensions on leader performance in U.S. military academy cadets at West Point. Design/methodology/approach – Army Cadets were studied in two different organizational contexts: (a)summer field training, and (b)during academic semesters. Leader performance was measured with leadership grades (supervisor ratings) aggregated over four years at West Point. Findings After controlling for general intellectual abilities, hierarchical regression results showed leader performance in the summer field training environment is predicted by Big Five Extraversion, and Hardiness, and a trend for Social Judgment. During the academic period context, leader performance is predicted by mental abilities, Big Five Conscientiousness, and Hardiness, with a trend for Social Judgment. Research limitations/implications Results confirm the importance of psychological hardiness, extraversion, and conscientiousness as factors influencing leader effectiveness, and suggest that social judgment aspects of emotional intelligence can also be important. These results also show that different Big Five personality factors may influence leadership in different organizational",
"title": ""
},
{
"docid": "neg:1840403_12",
"text": "One of the most popular user activities on the Web is watching videos. Services like YouTube, Vimeo, and Hulu host and stream millions of videos, providing content that is on par with TV. While some of this content is popular all over the globe, some videos might be only watched in a confined, local region.\n In this work we study the relationship between popularity and locality of online YouTube videos. We investigate whether YouTube videos exhibit geographic locality of interest, with views arising from a confined spatial area rather than from a global one. Our analysis is done on a corpus of more than 20 millions YouTube videos, uploaded over one year from different regions. We find that about 50% of the videos have more than 70% of their views in a single region. By relating locality to viralness we show that social sharing generally widens the geographic reach of a video. If, however, a video cannot carry its social impulse over to other means of discovery, it gets stuck in a more confined geographic region. Finally, we analyze how the geographic properties of a video's views evolve on a daily basis during its lifetime, providing new insights on how the geographic reach of a video changes as its popularity peaks and then fades away.\n Our results demonstrate how, despite the global nature of the Web, online video consumption appears constrained by geographic locality of interest: this has a potential impact on a wide range of systems and applications, spanning from delivery networks to recommendation and discovery engines, providing new directions for future research.",
"title": ""
},
{
"docid": "neg:1840403_13",
"text": "Rhinophyma is a subtype of rosacea characterized by nodular thickening of the skin, sebaceous gland hyperplasia, dilated pores, and in its late stage, fibrosis. Phymatous changes in rosacea are most common on the nose but can also occur on the chin (gnatophyma), ears (otophyma), and eyelids (blepharophyma). In severe cases, phymatous changes result in the loss of normal facial contours, significant disfigurement, and social isolation. Additionally, patients with profound rhinophyma can experience nare obstruction and difficulty breathing due to the weight and bulk of their nose. Treatment options for severe advanced rhinophyma include cryosurgery, partial-thickness decortication with subsequent secondary repithelialization, carbon dioxide (CO2) or erbium-doped yttrium aluminum garnet (Er:YAG) laser ablation, full-thickness resection with graft or flap reconstruction, excision by electrocautery or radio frequency, and sculpting resection using a heated Shaw scalpel. We report a severe case of rhinophyma resulting in marked facial disfigurement and nasal obstruction treated successfully using the Shaw scalpel. Rhinophymectomy using the Shaw scalpel allows for efficient and efficacious treatment of rhinophyma without the need for multiple procedures or general anesthesia and thus should be considered in patients with nare obstruction who require intervention.",
"title": ""
},
{
"docid": "neg:1840403_14",
"text": "With the increase of an ageing population and chronic diseases, society becomes more health conscious and patients become “health consumers” looking for better health management. People’s perception is shifting towards patient-centered, rather than the classical, hospital–centered health services which has been propelling the evolution of telemedicine research from the classic e-Health to m-Health and now is to ubiquitous healthcare (u-Health). It is expected that mobile & ubiquitous Telemedicine, integrated with Wireless Body Area Network (WBAN), have a great potential in fostering the provision of next-generation u-Health. Despite the recent efforts and achievements, current u-Health proposed solutions still suffer from shortcomings hampering their adoption today. This paper presents a comprehensive review of up-to-date requirements in hardware, communication, and computing for next-generation u-Health systems. It compares new technological and technical trends and discusses how they address expected u-Health requirements. A thorough survey on various worldwide recent system implementations is presented in an attempt to identify shortcomings in state-of-the art solutions. In particular, challenges in WBAN and ubiquitous computing were emphasized. The purpose of this survey is not only to help beginners with a holistic approach toward understanding u-Health systems but also present to researchers new technological trends and design challenges they have to cope with, while designing such systems.",
"title": ""
},
{
"docid": "neg:1840403_15",
"text": "Design of a low-energy power-ON reset (POR) circuit is proposed to reduce the energy consumed by the stable supply of the dual supply static random access memory (SRAM), as the other supply is ramping up. The proposed POR circuit, when embedded inside dual supply SRAM, removes its ramp-up constraints related to voltage sequencing and pin states. The circuit consumes negligible energy during ramp-up, does not consume dynamic power during operations, and includes hysteresis to improve noise immunity against voltage fluctuations on the power supply. The POR circuit, designed in the 40-nm CMOS technology within 10.6-μm2 area, enabled 27× reduction in the energy consumed by the SRAM array supply during periphery power-up in typical conditions.",
"title": ""
},
{
"docid": "neg:1840403_16",
"text": "It's been over a decade now. We've forgotten how slow the adoption of consumer Internet commerce has been compared to other Internet growth metrics. And we're surprised when security scares like spyware and phishing result in lurches in consumer use.This paper re-visits an old theme, and finds that consumer marketing is still characterised by aggression and dominance, not sensitivity to customer needs. This conclusion is based on an examination of terms and privacy policy statements, which shows that businesses are confronting the people who buy from them with fixed, unyielding interfaces. Instead of generating trust, marketers prefer to wield power.These hard-headed approaches can work in a number of circumstances. Compelling content is one, but not everyone sells sex, gambling services, short-shelf-life news, and even shorter-shelf-life fashion goods. And, after decades of mass-media-conditioned consumer psychology research and experimentation, it's far from clear that advertising can convert everyone into salivating consumers who 'just have to have' products and services brand-linked to every new trend, especially if what you sell is groceries or handyman supplies.The thesis of this paper is that the one-dimensional, aggressive concept of B2C has long passed its use-by date. Trading is two-way -- consumers' attention, money and loyalty, in return for marketers' products and services, and vice versa.So B2C is conceptually wrong, and needs to be replaced by some buzzphrase that better conveys 'B-with-C' rather than 'to-C' and 'at-C'. Implementations of 'customised' services through 'portals' have to mature beyond data-mining-based manipulation to support two-sided relationships, and customer-managed profiles.It's all been said before, but now it's time to listen.",
"title": ""
},
{
"docid": "neg:1840403_17",
"text": "Blur is a key determinant in the perception of image quality. Generally, blur causes spread of edges, which leads to shape changes in images. Discrete orthogonal moments have been widely studied as effective shape descriptors. Intuitively, blur can be represented using discrete moments since noticeable blur affects the magnitudes of moments of an image. With this consideration, this paper presents a blind image blur evaluation algorithm based on discrete Tchebichef moments. The gradient of a blurred image is first computed to account for the shape, which is more effective for blur representation. Then the gradient image is divided into equal-size blocks and the Tchebichef moments are calculated to characterize image shape. The energy of a block is computed as the sum of squared non-DC moment values. Finally, the proposed image blur score is defined as the variance-normalized moment energy, which is computed with the guidance of a visual saliency model to adapt to the characteristic of human visual system. The performance of the proposed method is evaluated on four public image quality databases. The experimental results demonstrate that our method can produce blur scores highly consistent with subjective evaluations. It also outperforms the state-of-the-art image blur metrics and several general-purpose no-reference quality metrics.",
"title": ""
},
{
"docid": "neg:1840403_18",
"text": "Knowledge management (KM) has emerged as a tool that allows the creation, use, distribution and transfer of knowledge in organizations. There are different frameworks that propose KM in the scientific literature. The majority of these frameworks are structured based on a strong theoretical background. This study describes a guide for the implementation of KM in a higher education institution (HEI) based on a framework with a clear description on the practical implementation. This framework is based on a technological infrastructure that includes enterprise architecture, business intelligence and educational data mining. Furthermore, a case study which describes the experience of the implementation in a HEI is presented. As a conclusion, the pros and cons on the use of the framework are analyzed.",
"title": ""
},
{
"docid": "neg:1840403_19",
"text": "BACKGROUND\nHyperhomocysteinemia arising from impaired methionine metabolism, probably usually due to a deficiency of cystathionine beta-synthase, is associated with premature cerebral, peripheral, and possibly coronary vascular disease. Both the strength of this association and its independence of other risk factors for cardiovascular disease are uncertain. We studied the extent to which the association could be explained by heterozygous cystathionine beta-synthase deficiency.\n\n\nMETHODS\nWe first established a diagnostic criterion for hyperhomocysteinemia by comparing peak serum levels of homocysteine after a standard methionine-loading test in 25 obligate heterozygotes with respect to cystathionine beta-synthase deficiency (whose children were known to be homozygous for homocystinuria due to this enzyme defect) with the levels in 27 unrelated age- and sex-matched normal subjects. A level of 24.0 mumol per liter or more was 92 percent sensitive and 100 percent specific in distinguishing the two groups. The peak serum homocysteine levels in these normal subjects were then compared with those in 123 patients whose vascular disease had been diagnosed before they were 55 years of age.\n\n\nRESULTS\nHyperhomocysteinemia was detected in 16 of 38 patients with cerebrovascular disease (42 percent), 7 of 25 with peripheral vascular disease (28 percent), and 18 of 60 with coronary vascular disease (30 percent), but in none of the 27 normal subjects. After adjustment for the effects of conventional risk factors, the lower 95 percent confidence limit for the odds ratio for vascular disease among the patients with hyperhomocysteinemia, as compared with the normal subjects, was 3.2. The geometric-mean peak serum homocysteine level was 1.33 times higher in the patients with vascular disease than in the normal subjects (P = 0.002). The presence of cystathionine beta-synthase deficiency was confirmed in 18 of 23 patients with vascular disease who had hyperhomocysteinemia.\n\n\nCONCLUSIONS\nHyperhomocysteinemia is an independent risk factor for vascular disease, including coronary disease, and in most instances is probably due to cystathionine beta-synthase deficiency.",
"title": ""
}
] |
1840404 | Mid-Curve Recommendation System: a Stacking Approach Through Neural Networks | [
{
"docid": "pos:1840404_0",
"text": "Can we train the computer to beat experienced traders for financial assert trading? In this paper, we try to address this challenge by introducing a recurrent deep neural network (NN) for real-time financial signal representation and trading. Our model is inspired by two biological-related learning concepts of deep learning (DL) and reinforcement learning (RL). In the framework, the DL part automatically senses the dynamic market condition for informative feature learning. Then, the RL module interacts with deep representations and makes trading decisions to accumulate the ultimate rewards in an unknown environment. The learning system is implemented in a complex NN that exhibits both the deep and recurrent structures. Hence, we propose a task-aware backpropagation through time method to cope with the gradient vanishing issue in deep training. The robustness of the neural system is verified on both the stock and the commodity future markets under broad testing conditions.",
"title": ""
},
{
"docid": "pos:1840404_1",
"text": "Sometimes you just have to clench your teeth and go for the differential matrix algebra. And the central limit theorems. Together with the maximum likelihood techniques. And the static mean variance portfolio theory. Not forgetting the dynamic asset pricing models. And these are just the tools you need before you can start making empirical inferences in financial economics.” So wrote Ruben Lee, playfully, in a review of The Econometrics of Financial Markets, winner of TIAA-CREF’s Paul A. Samuelson Award. In economist Harry M. Markowitz, who in won the Nobel Prize in Economics, published his landmark thesis “Portfolio Selection” as an article in the Journal of Finance, and financial economics was born. Over the subsequent decades, this young and burgeoning field saw many advances in theory but few in econometric technique or empirical results. Then, nearly four decades later, Campbell, Lo, and MacKinlay’s The Econometrics of Financial Markets made a bold leap forward by integrating theory and empirical work. The three economists combined their own pathbreaking research with a generation of foundational work in modern financial theory and research. The book includes treatment of topics from the predictability of asset returns to the capital asset pricing model and arbitrage pricing theory, from statistical fractals to chaos theory. Read widely in academe as well as in the business world, The Econometrics of Financial Markets has become a new landmark in financial economics, extending and enhancing the Nobel Prize– winning work established by the early trailblazers in this important field.",
"title": ""
}
] | [
{
"docid": "neg:1840404_0",
"text": "Based on data from a survey (n = 3291) and 14 qualitative interviews among Danish older adults, this study investigated the use of, and attitudes toward, information communications technology (ICT) and the digital delivery of public services. While age, gender, and socioeconomic status were associated with use of ICT, these determinants lost their explanatory power when we controlled for attitudes and experiences. We identified three segments that differed in their use of ICT and attitudes toward digital service delivery. As nonuse of ICT often results from the lack of willingness to use it rather than from material or cognitive deficiencies, policy measures for bridging the digital divide should focus on skills and confidence rather than on access or ability.",
"title": ""
},
{
"docid": "neg:1840404_1",
"text": "The Tic tac toe is very popular game having a 3 × 3 grid board and 2 players. A Special Symbol (X or O) is assigned to each player to indicate the slot is covered by the respective player. The winner of the game is the player who first cover a horizontal, vertical and diagonal row of the board having only player's own symbols. This paper presents the design model of Tic tac toe Game using Multi-Tape Turing Machine in which both player choose input randomly and result of the game is declared. The computational Model of Tic tac toe is used to describe it in a formal manner.",
"title": ""
},
{
"docid": "neg:1840404_2",
"text": "There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important to ensure algorithmic fairness, identify potential bias/problems in the training data, and to ensure that the algorithms perform as expected. However, explanations produced by these systems is neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we describe foundational concepts of explainability and show how they can be used to classify existing literature. We discuss why current approaches to explanatory methods especially for deep neural networks are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.",
"title": ""
},
{
"docid": "neg:1840404_3",
"text": "Cyber-crime has reached unprecedented proportions in this day and age. In addition, the internet has created a world with seemingly no barriers while making a countless number of tools available to the cyber-criminal. In light of this, Computer Forensic Specialists employ state-of-the-art tools and methodologies in the extraction and analysis of data from storage devices used at the digital crime scene. The focus of this paper is to conduct an investigation into some of these Forensic tools eg.Encase®. This investigation will address commonalities across the Forensic tools, their essential differences and ultimately point out what features need to be improved in these tools to allow for effective autopsies of storage devices.",
"title": ""
},
{
"docid": "neg:1840404_4",
"text": "Back-translation has become a commonly employed heuristic for semi-supervised neural machine translation. The technique is both straightforward to apply and has led to stateof-the-art results. In this work, we offer a principled interpretation of back-translation as approximate inference in a generative model of bitext and show how the standard implementation of back-translation corresponds to a single iteration of the wake-sleep algorithm in our proposed model. Moreover, this interpretation suggests a natural iterative generalization, which we demonstrate leads to further improvement of up to 1.6 BLEU.",
"title": ""
},
{
"docid": "neg:1840404_5",
"text": "This paper presents a methodology for extracting road edge and lane information for smart and intelligent navigation of vehicles. The range information provided by a fast laser range-measuring device is processed by an extended Kalman filter to extract the road edge or curb information. The resultant road edge information is used to aid in the extraction of the lane boundary from a CCD camera image. Hough Transform (HT) is used to extract the candidate lane boundary edges, and the most probable lane boundary is determined using an Active Line Model based on minimizing an appropriate Energy function. Experimental results are presented to demonstrate the effectiveness of the combined Laser and Vision strategy for road-edge and lane boundary detection.",
"title": ""
},
{
"docid": "neg:1840404_6",
"text": "F/OSS software has been described by many as a puzzle. In the past five years, it has stimulated the curiosity of scholars in a variety of fields, including economics, law, psychology, anthropology and computer science, so that the number of contributions on the subject has increased exponentially. The purpose of this paper is to provide a sufficiently comprehensive account of these contributions in order to draw some general conclusions on the state of our understanding of the phenomenon and identify directions for future research. The exercise suggests that what is puzzling about F/OSS is not so much the fact that people freely contribute to a good they make available to all, but rather the complexity of its institutional structure and its ability to organizationally evolve over time. JEL Classification: K11, L22, L23, L86, O31, O34.",
"title": ""
},
{
"docid": "neg:1840404_7",
"text": "This paper is a preliminary report on the efficiency of two strategies of data reduction in a data preprocessing stage. In the first experiment, we apply the Count-Min sketching algorithm, while in the second experiment we discretize our data prior to applying the Count-Min algorithm. By conducting a discretization before sketching, the need for the increased number of buckets in sketching is reduced. This preliminary attempt of combining two methods with the same purpose has shown potential. In our experiments, we use sensor data collected to study the environmental fluctuation and its impact on the quality of fresh peaches and nectarines in cold chain.",
"title": ""
},
{
"docid": "neg:1840404_8",
"text": "An AC chopper controller with symmetrical Pulse-Width Modulation (PWM) is proposed to achieve better performance for a single-phase induction motor compared to phase-angle control line-commutated voltage controllers and integral-cycle control of thyristors. Forced commutated device IGBT controlled by a microcontroller was used in the AC chopper which has the advantages of simplicity, ability to control large amounts of power and low waveform distortion. In this paper the simulation and hardware models of a simple single phase IGBT An AC controller has been developed which showed good results.",
"title": ""
},
{
"docid": "neg:1840404_9",
"text": "A compact multiple-input-multiple-output (MIMO) antenna is presented for ultrawideband (UWB) applications with band-notched function. The proposed antenna is composed of two offset microstrip-fed antenna elements with UWB performance. To achieve high isolation and polarization diversity, the antenna elements are placed perpendicular to each other. A parasitic T-shaped strip between the radiating elements is employed as a decoupling structure to further suppress the mutual coupling. In addition, the notched band at 5.5 GHz is realized by etching a pair of L-shaped slits on the ground. The antenna prototype with a compact size of 38.5 × 38.5 mm2 has been fabricated and measured. Experimental results show that the antenna has an impedance bandwidth of 3.08-11.8 GHz with reflection coefficient less than -10 dB, except the rejection band of 5.03-5.97 GHz. Besides, port isolation, envelope correlation coefficient and radiation characteristics are also investigated. The results indicate that the MIMO antenna is suitable for band-notched UWB applications.",
"title": ""
},
{
"docid": "neg:1840404_10",
"text": "In this paper a simulation-based scheduling system is discussed which was developed for a semiconductor Backend facility. Apart from the usual dispatching rules it uses heuristic search strategies for the optimization of the operating sequences. In practice hereby multiple objectives have to be considered, e. g. concurrent minimization of mean cycle time, maximization of throughput and due date compliance. Because the simulation model is very complex and simulation time itself is not negligible, we emphasize to increase the convergence of heuristic optimization methods, consequentially reducing the number of necessary iterations. Several realized strategies are presented.",
"title": ""
},
{
"docid": "neg:1840404_11",
"text": "A fundamental aspect of controlling humanoid robots lies in the capability to exploit the whole body to perform tasks. This work introduces a novel whole body control library called OpenSoT. OpenSoT is combined with joint impedance control to create a framework that can effectively generate complex whole body motion behaviors for humanoids according to the needs of the interaction level of the tasks. OpenSoT gives an easy way to implement tasks, constraints, bounds and solvers by providing common interfaces. We present the mathematical foundation of the library and validate it on the compliant humanoid robot COMAN to execute multiple motion tasks under a number of constraints. The framework is able to solve hierarchies of tasks of arbitrary complexity in a robust and reliable way.",
"title": ""
},
{
"docid": "neg:1840404_12",
"text": "With the introduction of fully convolutional neural networks, deep learning has raised the benchmark for medical image segmentation on both speed and accuracy, and different networks have been proposed for 2D and 3D segmentation with promising results. Nevertheless, most networks only handle relatively small numbers of labels (<10), and there are very limited works on handling highly unbalanced object sizes especially in 3D segmentation. In this paper, we propose a network architecture and the corresponding loss function which improve segmentation of very small structures. By combining skip connections and deep supervision with respect to the computational feasibility of 3D segmentation, we propose a fast converging and computationally efficient network architecture for accurate segmentation. Furthermore, inspired by the concept of focal loss, we propose an exponential logarithmic loss which balances the labels not only by their relative sizes but also by their segmentation difficulties. We achieve an average Dice coefficient of 82% on brain segmentation with 20 labels, with the ratio of the smallest to largest object sizes as 0.14%. Less than 100 epochs are required to reach such accuracy, and segmenting a 128×128×128 volume only takes around 0.4 s.",
"title": ""
},
{
"docid": "neg:1840404_13",
"text": "Face images that are captured by surveillance cameras usually have a very low resolution, which significantly limits the performance of face recognition systems. In the past, super-resolution techniques have been proposed to increase the resolution by combining information from multiple images. These techniques use super-resolution as a preprocessing step to obtain a high-resolution image that is later passed to a face recognition system. Considering that most state-of-the-art face recognition systems use an initial dimensionality reduction method, we propose to transfer the super-resolution reconstruction from pixel domain to a lower dimensional face space. Such an approach has the advantage of a significant decrease in the computational complexity of the super-resolution reconstruction. The reconstruction algorithm no longer tries to obtain a visually improved high-quality image, but instead constructs the information required by the recognition system directly in the low dimensional domain without any unnecessary overhead. In addition, we show that face-space super-resolution is more robust to registration errors and noise than pixel-domain super-resolution because of the addition of model-based constraints.",
"title": ""
},
{
"docid": "neg:1840404_14",
"text": "Many of the recent Trajectory Optimization algorithms alternate between local approximation of the dynamics and conservative policy update. However, linearly approximating the dynamics in order to derive the new policy can bias the update and prevent convergence to the optimal policy. In this article, we propose a new model-free algorithm that backpropagates a local quadratic time-dependent Q-Function, allowing the derivation of the policy update in closed form. Our policy update ensures exact KL-constraint satisfaction without simplifying assumptions on the system dynamics demonstrating improved performance in comparison to related Trajectory Optimization algorithms linearizing the dynamics.",
"title": ""
},
{
"docid": "neg:1840404_15",
"text": "The excitation and vibration triggered by the long-term operation of railway vehicles inevitably result in defective states of catenary support devices. With the massive construction of high-speed electrified railways, automatic defect detection of diverse and plentiful fasteners on the catenary support device is of great significance for operation safety and cost reduction. Nowadays, the catenary support devices are periodically captured by the cameras mounted on the inspection vehicles during the night, but the inspection still mostly relies on human visual interpretation. To reduce the human involvement, this paper proposes a novel vision-based method that applies the deep convolutional neural networks (DCNNs) in the defect detection of the fasteners. Our system cascades three DCNN-based detection stages in a coarse-to-fine manner, including two detectors to sequentially localize the cantilever joints and their fasteners and a classifier to diagnose the fasteners’ defects. Extensive experiments and comparisons of the defect detection of catenary support devices along the Wuhan–Guangzhou high-speed railway line indicate that the system can achieve a high detection rate with good adaptation and robustness in complex environments.",
"title": ""
},
{
"docid": "neg:1840404_16",
"text": "This paper presents a study on SIFT (Scale Invariant Feature transform) which is a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. There are various applications of SIFT that includes object recognition, robotic mapping and navigation, image stitching, 3D modeling, gesture recognition, video tracking, individual identification of wildlife and match moving.",
"title": ""
},
{
"docid": "neg:1840404_17",
"text": "Mobile cloud computing (MCC) as an emerging and prospective computing paradigm, can significantly enhance computation capability and save energy of smart mobile devices (SMDs) by offloading computation-intensive tasks from resource-constrained SMDs onto the resource-rich cloud. However, how to achieve energy-efficient computation offloading under the hard constraint for application completion time remains a challenge issue. To address such a challenge, in this paper, we provide an energy-efficient dynamic offloading and resource scheduling (eDors) policy to reduce energy consumption and shorten application completion time. We first formulate the eDors problem into the energy-efficiency cost (EEC) minimization problem while satisfying the task-dependency requirements and the completion time deadline constraint. To solve the optimization problem, we then propose a distributed eDors algorithm consisting of three subalgorithms of computation offloading selection, clock frequency control and transmission power allocation. More importantly, we find that the computation offloading selection depends on not only the computing workload of a task, but also the maximum completion time of its immediate predecessors and the clock frequency and transmission power of the mobile device. Finally, our experimental results in a real testbed demonstrate that the eDors algorithm can effectively reduce the EEC by optimally adjusting the CPU clock frequency of SMDs based on the dynamic voltage and frequency scaling (DVFS) technique in local computing, and adapting the transmission power for the wireless channel conditions in cloud computing.",
"title": ""
},
{
"docid": "neg:1840404_18",
"text": "We describe an efficient neural network method to automatically learn sentiment lexicons without relying on any manual resources. The method takes inspiration from the NRC method, which gives the best results in SemEval13 by leveraging emoticons in large tweets, using the PMI between words and tweet sentiments to define the sentiment attributes of words. We show that better lexicons can be learned by using them to predict the tweet sentiment labels. By using a very simple neural network, our method is fast and can take advantage of the same data volume as the NRC method. Experiments show that our lexicons give significantly better accuracies on multiple languages compared to the current best methods.",
"title": ""
},
{
"docid": "neg:1840404_19",
"text": "This study examines print and online daily newspaper journalists’ perceptions of the credibility of Internet news information, as well as the influence of several factors— most notably, professional role conceptions—on those perceptions. Credibility was measured as a multidimensional construct. The results of a survey of U.S. journalists (N = 655) show that Internet news information was viewed as moderately credible overall and that online newspaper journalists rated Internet news information as significantly more credible than did print newspaper journalists. Hierarchical regression analyses reveal that Internet reliance was a strong positive predictor of credibility. Two professional role conceptions also emerged as significant predictors. The populist mobilizer role conception was a significant positive predictor of online news credibility, while the adversarial role conception was a significant negative predictor. Demographic characteristics of print and online daily newspaper journalists did not influence their perceptions of online news credibility.",
"title": ""
}
] |
1840405 | E-Counterfeit: A Mobile-Server Platform for Document Counterfeit Detection | [
{
"docid": "pos:1840405_0",
"text": "We study the recognition of surfaces made from different materials such as concrete, rug, marble, or leather on the basis of their textural appearance. Such natural textures arise from spatial variation of two surface attributes: (1) reflectance and (2) surface normal. In this paper, we provide a unified model to address both these aspects of natural texture. The main idea is to construct a vocabulary of prototype tiny surface patches with associated local geometric and photometric properties. We call these 3D textons. Examples might be ridges, grooves, spots or stripes or combinations thereof. Associated with each texton is an appearance vector, which characterizes the local irradiance distribution, represented as a set of linear Gaussian derivative filter outputs, under different lighting and viewing conditions. Given a large collection of images of different materials, a clustering approach is used to acquire a small (on the order of 100) 3D texton vocabulary. Given a few (1 to 4) images of any material, it can be characterized using these textons. We demonstrate the application of this representation for recognition of the material viewed under novel lighting and viewing conditions. We also illustrate how the 3D texton model can be used to predict the appearance of materials under novel conditions.",
"title": ""
},
{
"docid": "pos:1840405_1",
"text": "Combinatorial graph cut algorithms have been successfully applied to a wide range of problems in vision and graphics. This paper focusses on possibly the simplest application of graph-cuts: segmentation of objects in image data. Despite its simplicity, this application epitomizes the best features of combinatorial graph cuts methods in vision: global optima, practical efficiency, numerical robustness, ability to fuse a wide range of visual cues and constraints, unrestricted topological properties of segments, and applicability to N-D problems. Graph cuts based approaches to object extraction have also been shown to have interesting connections with earlier segmentation methods such as snakes, geodesic active contours, and level-sets. The segmentation energies optimized by graph cuts combine boundary regularization with region-based properties in the same fashion as Mumford-Shah style functionals. We present motivation and detailed technical description of the basic combinatorial optimization framework for image segmentation via s/t graph cuts. After the general concept of using binary graph cut algorithms for object segmentation was first proposed and tested in Boykov and Jolly (2001), this idea was widely studied in computer vision and graphics communities. We provide links to a large number of known extensions based on iterative parameter re-estimation and learning, multi-scale or hierarchical approaches, narrow bands, and other techniques for demanding photo, video, and medical applications.",
"title": ""
}
] | [
{
"docid": "neg:1840405_0",
"text": "Predicting ad click-through rates (CTR) is a massive-scale learning problem that is central to the multi-billion dollar online advertising industry. We present a selection of case studies and topics drawn from recent experiments in the setting of a deployed CTR prediction system. These include improvements in the context of traditional supervised learning based on an FTRL-Proximal online learning algorithm (which has excellent sparsity and convergence properties) and the use of per-coordinate learning rates.\n We also explore some of the challenges that arise in a real-world system that may appear at first to be outside the domain of traditional machine learning research. These include useful tricks for memory savings, methods for assessing and visualizing performance, practical methods for providing confidence estimates for predicted probabilities, calibration methods, and methods for automated management of features. Finally, we also detail several directions that did not turn out to be beneficial for us, despite promising results elsewhere in the literature. The goal of this paper is to highlight the close relationship between theoretical advances and practical engineering in this industrial setting, and to show the depth of challenges that appear when applying traditional machine learning methods in a complex dynamic system.",
"title": ""
},
{
"docid": "neg:1840405_1",
"text": "ID: 2423 Y. M. S. Al-Wesabi, Avishek Choudhury, Daehan Won Binghamton University, USA",
"title": ""
},
{
"docid": "neg:1840405_2",
"text": "Although motorcycle safety helmets are known for preventing head injuries, in many countries, the use of motorcycle helmets is low due to the lack of police power to enforcing helmet laws. This paper presents a system which automatically detect motorcycle riders and determine that they are wearing safety helmets or not. The system extracts moving objects and classifies them as a motorcycle or other moving objects based on features extracted from their region properties using K-Nearest Neighbor (KNN) classifier. The heads of the riders on the recognized motorcycle are then counted and segmented based on projection profiling. The system classifies the head as wearing a helmet or not using KNN based on features derived from 4 sections of segmented head region. Experiment results show an average correct detection rate for near lane, far lane, and both lanes as 84%, 68%, and 74%, respectively.",
"title": ""
},
{
"docid": "neg:1840405_3",
"text": "A systematic method to improve the quality ( ) factor of RF integrated inductors is presented in this paper. The proposed method is based on the layout optimization to minimize the series resistance of the inductor coil, taking into account both ohmic losses, due to conduction currents, and magnetically induced losses, due to Eddy currents. The technique is particularly useful when applied to inductors in which the fabrication process includes integration substrate removal. However, it is also applicable to inductors on low-loss substrates. The method optimizes the width of the metal strip for each turn of the inductor coil, leading to a variable strip-width layout. The optimization procedure has been successfully applied to the design of square spiral inductors in a silicon-based multichip-module technology, complemented with silicon micromachining postprocessing. The obtained experimental results corroborate the validity of the proposed method. A factor of about 17 have been obtained for a 35-nH inductor at 1.5 GHz, with values higher than 40 predicted for a 20-nH inductor working at 3.5 GHz. The latter is up to a 60% better than the best results for a single strip-width inductor working at the same frequency.",
"title": ""
},
{
"docid": "neg:1840405_4",
"text": "We present a new video-assisted minimally invasive technique for the treatment of pilonidal disease (E.P.Si.T: endoscopic pilonidal sinus treatment). Between March and November 2012, we operated on 11 patients suffering from pilonidal disease. Surgery is performed under local or spinal anesthesia using the Meinero fistuloscope. The external opening is excised and the fistuloscope is introduced through the small hole. Anatomy is identified, hair and debris are removed and the entire area is ablated under direct vision. There were no significant complications recorded in the patient cohort. The pain experienced during the postoperative period was minimal. At 1 month postoperatively, the external opening(s) were closed in all patients and there were no cases of recurrence at a median follow-up of 6 months. All patients were admitted and discharged on the same day as surgery and commenced work again after a mean time period of 4 days. Aesthetic results were excellent. The key feature of the E.P.Si.T. technique is direct vision, allowing a good definition of the involved area, removal of debris and cauterization of the inflamed tissue.",
"title": ""
},
{
"docid": "neg:1840405_5",
"text": "Knowledge bases extracted automatically from the Web present new opportunities for data mining and exploration. Given a large, heterogeneous set of extracted relations, new tools are needed for searching the knowledge and uncovering relationships of interest. We present WikiTables, a Web application that enables users to interactively explore tabular knowledge extracted from Wikipedia.\n In experiments, we show that WikiTables substantially outperforms baselines on the novel task of automatically joining together disparate tables to uncover \"interesting\" relationships between table columns. We find that a \"Semantic Relatedness\" measure that leverages the Wikipedia link structure accounts for a majority of this improvement. Further, on the task of keyword search for tables, we show that WikiTables performs comparably to Google Fusion Tables despite using an order of magnitude fewer tables. Our work also includes the release of a number of public resources, including over 15 million tuples of extracted tabular data, manually annotated evaluation sets, and public APIs.",
"title": ""
},
{
"docid": "neg:1840405_6",
"text": "Motion segmentation is currently an active area of research in computer Vision. The task of comparing different methods of motion segmentation is complicated by the fact that researchers may use subtly different definitions of the problem. Questions such as ”Which objects are moving?”, ”What is background?”, and ”How can we use motion of the camera to segment objects, whether they are static or moving?” are clearly related to each other, but lead to different algorithms, and imply different versions of the ground truth. This report has two goals. The first is to offer a precise definition of motion segmentation so that the intent of an algorithm is as welldefined as possible. The second is to report on new versions of three previously existing data sets that are compatible with this definition. We hope that this more detailed definition, and the three data sets that go with it, will allow more meaningful comparisons of certain motion segmentation methods.",
"title": ""
},
{
"docid": "neg:1840405_7",
"text": "The purpose of the study is to explore the factors influencing customer buying decision through Intern et shopping. Several factors such as information quali ty, firm’s reputation, perceived ease of payment, s ites design, benefit of online shopping, and trust that influence customer decision to purchase from e-comm erce sites were analyzed. Factors such as those mention d above, which are commonly considered influencing purhasing decision through online shopping in other countries were hypothesized to be true in the case of Indonesia. A random sample comprised of 171 Indone sia people who have been buying goods/services through e-commerce sites at least once, were collec ted via online questionnaires. To test the hypothes is, the data were examined using Structural Equations Model ing (SEM) which is basically a combination of Confirmatory Factor Analysis (CFA), and linear Regr ession. The results suggest that information qualit y, perceived ease of payment, benefits of online shopp ing, and trust affect online purchase decision significantly. Close attention need to be placed on these factors to increase online sales. The most significant influence comes from trust. Indonesian people still lack of trust toward online commerce, so it is very important to gain customer trust to increase s al s. E-commerce’s business owners are encouraged t o develop sites that can meet the expectation of pote ntial customer, provides ease of payment system, pr ovide detailed and actual information and responsible for customer personal information and transaction reco rds. This paper outlined the key factors influencing onl ine shopping intention in Indonesia and pioneered t he building of an integrated research framework to und erstand how consumers make purchase decision toward online shopping; a relatively new way of shopping i the country.",
"title": ""
},
{
"docid": "neg:1840405_8",
"text": "The Janus kinase (JAK)-signal transducer of activators of transcription (STAT) pathway is now recognized as an evolutionarily conserved signaling pathway employed by diverse cytokines, interferons, growth factors, and related molecules. This pathway provides an elegant and remarkably straightforward mechanism whereby extracellular factors control gene expression. It thus serves as a fundamental paradigm for how cells sense environmental cues and interpret these signals to regulate cell growth and differentiation. Genetic mutations and polymorphisms are functionally relevant to a variety of human diseases, especially cancer and immune-related conditions. The clinical relevance of the pathway has been confirmed by the emergence of a new class of therapeutics that targets JAKs.",
"title": ""
},
{
"docid": "neg:1840405_9",
"text": "One of the main current applications of intelligent systems is recommender systems (RS). RS can help users to find relevant items in huge information spaces in a personalized way. Several techniques have been investigated for the development of RS. One of them is evolutionary computational (EC) techniques, which is an emerging trend with various application areas. The increasing interest in using EC for web personalization, information retrieval and RS fostered the publication of survey papers on the subject. However, these surveys have analyzed only a small number of publications, around ten. This study provides a comprehensive review of more than 65 research publications focusing on five aspects we consider relevant for such: the recommendation technique used, the datasets and the evaluation methods adopted in their experimental parts, the baselines employed in the experimental comparison of proposed approaches and the reproducibility of the reported experiments. At the end of this review, we discuss negative and positive aspects of these papers, as well as point out opportunities, challenges and possible future research directions. To the best of our knowledge, this review is the most comprehensive review of various approaches using EC in RS. Thus, we believe this review will be a relevant material for researchers interested in EC and RS.",
"title": ""
},
{
"docid": "neg:1840405_10",
"text": "Color mapping is an important technique used in visualization to build visual representations of data and information. With output devices such as computer displays providing a large number of colors, developers sometimes tend to build their visualization to be visually appealing, while forgetting the main goal of clear depiction of the underlying data. Visualization researchers have profited from findings in adjoining areas such as human vision and psychophysics which, combined with their own experience, enabled them to establish guidelines that might help practitioners to select appropriate color scales and adjust the associated color maps, for particular applications. This survey presents an overview on the subject of color scales by focusing on important guidelines, experimental research work and tools proposed to help non-expert users. & 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840405_11",
"text": "This paper proposes to develop an electronic device for obstacle detection in the path of visually impaired people. This device assists a user to walk without colliding with any obstacles in their path. It is a wearable device in the form of a waist belt that has ultrasonic sensors and raspberry pi installed on it. This device detects obstacles around the user up to 500cm in three directions i.e. front, left and right using a network of ultrasonic sensors. These ultrasonic sensors are connected to raspberry pi that receives data signals from these sensors for further data processing. The algorithm running in raspberry pi computes the distance from the obstacle and converts it into text message, which is then converted into speech and conveyed to the user through earphones/speakers. This design is benefitial in terms of it’s portability, low-cost, low power consumption and the fact that neither the user nor the device requires initial training. Keywords—embedded systems; raspberry pi; speech feedback; ultrasonic sensor; visually impaired;",
"title": ""
},
{
"docid": "neg:1840405_12",
"text": "Calibration between color camera and 3D Light Detection And Ranging (LIDAR) equipment is an essential process for data fusion. The goal of this paper is to improve the calibration accuracy between a camera and a 3D LIDAR. In particular, we are interested in calibrating a low resolution 3D LIDAR with a relatively small number of vertical sensors. Our goal is achieved by employing a new methodology for the calibration board, which exploits 2D-3D correspondences. The 3D corresponding points are estimated from the scanned laser points on the polygonal planar board with adjacent sides. Since the lengths of adjacent sides are known, we can estimate the vertices of the board as a meeting point of two projected sides of the polygonal board. The estimated vertices from the range data and those detected from the color image serve as the corresponding points for the calibration. Experiments using a low-resolution LIDAR with 32 sensors show robust results.",
"title": ""
},
{
"docid": "neg:1840405_13",
"text": "Fifteen years ago, a panel of experts representing the full spectrum of cardiovascular disease (CVD) research and practice assembled at a workshop to examine the state of knowledge about CVD. The leaders of the workshop generated a hypothesis that framed CVD as a chain of events, initiated by a myriad of related and unrelated risk factors and progressing through numerous physiological pathways and processes to the development of end-stage heart disease (Figure 1).1 They further hypothesized that intervention anywhere along the chain of events leading to CVD could disrupt the pathophysiological process and confer cardioprotection. The workshop participants endorsed this paradigm but also identified the unresolved issues relating to the concept of a CVD continuum. There was limited availability of clinical trial data and pathobiological evidence at that time, and the experts recognized that critical studies at both the mechanistic level and the clinical level were needed to validate the concept of a chain of events leading to end-stage CVD. In the intervening 15 years, new evidence for underlying pathophysiological mechanisms, the development of novel therapeutic agents, and the release of additional landmark clinical trial data have confirmed the concept of a CVD continuum and reinforced the notion that intervention at any point along this chain can modify CVD progression. In addition, the accumulated evidence indicates that the events leading to disease progression overlap and intertwine and do not always occur as a sequence of discrete, tandem incidents. Furthermore, although the original concept focused on risk factors for coronary artery disease (CAD) and its sequelae, the CVD continuum has expanded to include other areas such as cerebrovascular disease, peripheral vascular disease, and renal disease. Since its conception 15 years ago, the CVD continuum has become much in need of an update. Accordingly, this 2-part article will present a critical and comprehensive update of the current evidence for a CVD continuum based on the results of pathophysiological studies and the outcome of a broad range of clinical trials that have been performed in the past 15 years. It is not the intent of the article to include a comprehensive listing of all trials performed as part of the CVD continuum; instead, we have sought to include only those trials that have had the greatest impact. Part I briefly reviews the current understanding of the pathophysiology of CVD and discusses clinical trial data from risk factors for disease through stable CAD. Part II continues the review of clinical trial data beginning with acute coronary syndromes and continuing through extension of the CVD continuum to stroke and renal disease. The article concludes with a discussion of areas in which future research might further clarify our understanding of the CVD continuum.",
"title": ""
},
{
"docid": "neg:1840405_14",
"text": "In many computer vision tasks, we expect a particular behavior of the output with respect to rotations of the input image. If this relationship is explicitly encoded, instead of treated as any other variation, the complexity of the problem is decreased, leading to a reduction in the size of the required model. In this paper, we propose the Rotation Equivariant Vector Field Networks (RotEqNet), a Convolutional Neural Network (CNN) architecture encoding rotation equivariance, invariance and covariance. Each convolutional filter is applied at multiple orientations and returns a vector field representing magnitude and angle of the highest scoring orientation at every spatial location. We develop a modified convolution operator relying on this representation to obtain deep architectures. We test RotEqNet on several problems requiring different responses with respect to the inputs’ rotation: image classification, biomedical image segmentation, orientation estimation and patch matching. In all cases, we show that RotEqNet offers extremely compact models in terms of number of parameters and provides results in line to those of networks orders of magnitude larger.",
"title": ""
},
{
"docid": "neg:1840405_15",
"text": "This paper describes a computer vision based system for real-time robust traffic sign detection, tracking, and recognition. Such a framework is of major interest for driver assistance in an intelligent automotive cockpit environment. The proposed approach consists of two components. First, signs are detected using a set of Haar wavelet features obtained from AdaBoost training. Compared to previously published approaches, our solution offers a generic, joint modeling of color and shape information without the need of tuning free parameters. Once detected, objects are efficiently tracked within a temporal information propagation framework. Second, classification is performed using Bayesian generative modeling. Making use of the tracking information, hypotheses are fused over multiple frames. Experiments show high detection and recognition accuracy and a frame rate of approximately 10 frames per second on a standard PC.",
"title": ""
},
{
"docid": "neg:1840405_16",
"text": "Fusion of data from multiple sensors can enable robust navigation in varied environments. However, for optimal performance, the sensors must calibrated relative to one another. Full sensor-to-sensor calibration is a spatiotemporal problem: we require an accurate estimate of the relative timing of measurements for each pair of sensors, in addition to the 6-DOF sensor-to-sensor transform. In this paper, we examine the problem of determining the time delays between multiple proprioceptive and exteroceptive sensor data streams. The primary difficultly is that the correspondences between measurements from different sensors are unknown, and hence the delays cannot be computed directly. We instead formulate temporal calibration as a registration task. Our algorithm operates by aligning curves in a three-dimensional orientation space, and, as such, can be considered as a variant of Iterative Closest Point (ICP). We present results from simulation studies and from experiments with a PR2 robot, which demonstrate accurate calibration of the time delays between measurements from multiple, heterogeneous sensors.",
"title": ""
},
{
"docid": "neg:1840405_17",
"text": "We describe a new model for learning meaningful representations of text documents from an unlabeled collection of documents. This model is inspired by the recently proposed Replicated Softmax, an undirected graphical model of word counts that was shown to learn a better generative model and more meaningful document representations. Specifically, we take inspiration from the conditional mean-field recursive equations of the Replicated Softmax in order to define a neural network architecture that estimates the probability of observing a new word in a given document given the previously observed words. This paradigm also allows us to replace the expensive softmax distribution over words with a hierarchical distribution over paths in a binary tree of words. The end result is a model whose training complexity scales logarithmically with the vocabulary size instead of linearly as in the Replicated Softmax. Our experiments show that our model is competitive both as a generative model of documents and as a document representation learning algorithm.",
"title": ""
},
{
"docid": "neg:1840405_18",
"text": "This paper presents a microstrip dual-band bandpass filter (BPF) based on cross-shaped resonator and spurline. It is shown that spurlines added into input/output ports of a cross-shaped resonator generate an additional notch band. Using even and odd-mode analysis the proposed structure is realized and designed. The proposed bandpass filter has dual passband from 1.9 GHz to 2.4 GHz and 9.5 GHz to 11.5 GHz.",
"title": ""
}
] |
1840406 | Challenges of Sentiment Analysis for Dynamic Events | [
{
"docid": "pos:1840406_0",
"text": "User generated content on Twitter (produced at an enormous rate of 340 million tweets per day) provides a rich source for gleaning people's emotions, which is necessary for deeper understanding of people's behaviors and actions. Extant studies on emotion identification lack comprehensive coverage of \"emotional situations\" because they use relatively small training datasets. To overcome this bottleneck, we have automatically created a large emotion-labeled dataset (of about 2.5 million tweets) by harnessing emotion-related hash tags available in the tweets. We have applied two different machine learning algorithms for emotion identification, to study the effectiveness of various feature combinations as well as the effect of the size of the training data on the emotion identification task. Our experiments demonstrate that a combination of unigrams, big rams, sentiment/emotion-bearing words, and parts-of-speech information is most effective for gleaning emotions. The highest accuracy (65.57%) is achieved with a training data containing about 2 million tweets.",
"title": ""
}
] | [
{
"docid": "neg:1840406_0",
"text": "68 AI MAGAZINE Adaptive graphical user interfaces (GUIs) automatically tailor the presentation of functionality to better fit an individual user’s tasks, usage patterns, and abilities. A familiar example of an adaptive interface is the Windows XP start menu, where a small set of applications from the “All Programs” submenu is replicated in the top level of the “Start” menu for easier access, saving users from navigating through multiple levels of the menu hierarchy (figure 1). The potential of adaptive interfaces to reduce visual search time, cognitive load, and motor movement is appealing, and when the adaptation is successful an adaptive interface can be faster and preferred in comparison to a nonadaptive counterpart (for example, Gajos et al. [2006], Greenberg and Witten [1985]). In practice, however, many challenges exist, and, thus far, evaluation results of adaptive interfaces have been mixed. For an adaptive interface to be successful, the benefits of correct adaptations must outweigh the costs, or usability side effects, of incorrect adaptations. Often, an adaptive mechanism designed to improve one aspect of the interaction, typically motor movement or visual search, inadvertently increases effort along another dimension, such as cognitive or perceptual load. The result is that many adaptive designs that were expected to confer a benefit along one of these dimensions have failed in practice. For example, a menu that tracks how frequently each item is used and adaptively reorders itself so that items appear in order from most to least frequently accessed should improve motor performance, but in reality this design can slow users down and reduce satisfaction because of the constantly changing layout (Mitchell and Schneiderman [1989]; for example, figure 2b). Commonly cited issues with adaptive interfaces include the lack of control the user has over the adaptive process and the difficulty that users may have in predicting what the system’s response will be to a user action (Höök 2000). User evaluation of adaptive GUIs is more complex than eval-",
"title": ""
},
{
"docid": "neg:1840406_1",
"text": "A back-cavity shielded bow-tie antenna system working at 900MHz center frequency for ground-coupled GPR application is investigated numerically and experimentally in this paper. Bow-tie geometrical structure is modified for a compact design and back-cavity assembly. A layer of absorber is employed to overcome the back reflection by omni-directional radiation pattern of a bow-tie antenna in H-plane, thus increasing the SNR and improve the isolation between T and R antennas as well. The designed antenna system is applied to a prototype GPR system. Tested data shows that the back-cavity shielded antenna works satisfactorily in the 900MHz GPR system.",
"title": ""
},
{
"docid": "neg:1840406_2",
"text": "Intensive care units (ICUs) are major sites for medical errors and adverse events. Suboptimal outcomes reflect a widespread failure to implement care delivery systems that successfully address the complexity of modern ICUs. Whereas other industries have used information technologies to fundamentally improve operating efficiency and enhance safety, medicine has been slow to implement such strategies. Most ICUs do not even track performance; fewer still have the capability to examine clinical data and use this information to guide quality improvement initiatives. This article describes a technology-enabled care model (electronic ICU, or eICU) that represents a new paradigm for delivery of critical care services. A major component of the model is the use of telemedicine to leverage clinical expertise and facilitate a round-the-clock proactive care by intensivist-led teams of ICU caregivers. Novel data presentation formats, computerized decision support, and smart alarms are used to enhance efficiency, increase effectiveness, and standardize clinical and operating processes. In addition, the technology infrastructure facilitates performance improvement by providing an automated means to measure outcomes, track performance, and monitor resource utilization. The program is designed to support the multidisciplinary intensivist-led team model and incorporates comprehensive ICU re-engineering efforts to change practice behavior. Although this model can transform ICUs into centers of excellence, success will hinge on hospitals accepting the underlying value proposition and physicians being willing to change established practices.",
"title": ""
},
{
"docid": "neg:1840406_3",
"text": "Context Emergency department visits by older adults are often due to adverse drug events, but the proportion of these visits that are the result of drugs designated as inappropriate for use in this population is unknown. Contribution Analyses of a national surveillance study of adverse drug events and a national outpatient survey estimate that Americans age 65 years or older have more than 175000 emergency department visits for adverse drug events yearly. Three commonly prescribed drugs accounted for more than one third of visits: warfarin, insulin, and digoxin. Caution The study was limited to adverse events in the emergency department. Implication Strategies to decrease adverse drug events among older adults should focus on warfarin, insulin, and digoxin. The Editors Adverse drug events cause clinically significant morbidity and mortality and are associated with large economic costs (15). They are common in older adults, regardless of whether they live in the community, reside in long-term care facilities, or are hospitalized (59). Most physicians recognize that prescribing medications to older patients requires special considerations, but nongeriatricians are typically unfamiliar with the most commonly used measure of medication appropriateness for older patients: the Beers criteria (1012). The Beers criteria are a consensus-based list of medications identified as potentially inappropriate for use in older adults. The criteria were introduced in 1991 to help researchers evaluate prescription quality in nursing homes (10). The Beers criteria were updated in 1997 and 2003 to apply to all persons age 65 years or older, to include new medications judged to be ineffective or to pose unnecessarily high risk, and to rate the severity of adverse outcomes (11, 12). Prescription rates of Beers criteria medications have become a widely used measure of quality of care for older adults in research studies in the United States and elsewhere (1326). The application of the Beers criteria as a measure of health care quality and safety has expanded beyond research studies. The Centers for Medicare & Medicaid Services incorporated the Beers criteria into federal safety regulations for long-term care facilities in 1999 (27). The prescription rate of potentially inappropriate medications is one of the few medication safety measures in the National Healthcare Quality Report (28) and has been introduced as a Health Plan and Employer Data and Information Set quality measure for managed care plans (29). Despite widespread adoption of the Beers criteria to measure prescription quality and safety, as well as proposals to apply these measures to additional settings, such as medication therapy management services under Medicare Part D (30), population-based data on the effect of adverse events from potentially inappropriate medications are sparse and do not compare the risks for adverse events from Beers criteria medications against those from other medications (31, 32). Adverse drug events that lead to emergency department visits are clinically significant adverse events (5) and result in increased health care resource utilization and expense (6). We used nationally representative public health surveillance data to estimate the number of emergency department visits for adverse drug events involving Beers criteria medications and compared the number with that for adverse drug events involving other medications. 
We also estimated the frequency of outpatient prescription of Beers criteria medications and other medications to calculate and compare the risks for emergency department visits for adverse drug events per outpatient prescription visit. Methods Data Sources National estimates of emergency department visits for adverse drug events were based on data from the 58 nonpediatric hospitals participating in the National Electronic Injury Surveillance SystemCooperative Adverse Drug Event Surveillance (NEISS-CADES) System, a nationally representative, size-stratified probability sample of hospitals (excluding psychiatric and penal institutions) in the United States and its territories with a minimum of 6 beds and a 24-hour emergency department (Figure 1) (3335). As described elsewhere (5, 34), trained coders at each hospital reviewed clinical records of every emergency department visit to report physician-diagnosed adverse drug events. Coders reported clinical diagnosis, medication implicated in the adverse event, and narrative descriptions of preceding circumstances. Data collection, management, quality assurance, and analyses were determined to be public health surveillance activities by the Centers for Disease Control and Prevention (CDC) and U.S. Food and Drug Administration human subjects oversight bodies and, therefore, did not require human subject review or institutional review board approval. Figure 1. Data sources and descriptions. NAMCS= National Ambulatory Medical Care Survey (36); NEISS-CADES= National Electronic Injury Surveillance SystemCooperative Adverse Drug Event Surveillance System (5, 3335); NHAMCS = National Hospital Ambulatory Medical Care Survey (37). *The NEISS-CADES is a 63-hospital national probability sample, but 5 pediatric hospitals were not included in this analysis. National estimates of outpatient prescription were based on 2 cross-sectional surveys, the National Ambulatory Medical Care Survey (NAMCS) and the National Hospital Ambulatory Medical Care Survey (NHAMCS), designed to provide information on outpatient office visits and visits to hospital outpatient clinics and emergency departments (Figure 1) (36, 37). These surveys have been previously used to document the prescription rates of inappropriate medications (17, 3840). Definition of Potentially Inappropriate Medications The most recent iteration of the Beers criteria (12) categorizes 41 medications or medication classes as potentially inappropriate under any circumstances (always potentially inappropriate) and 7 medications or medication classes as potentially inappropriate when used in certain doses, frequencies, or durations (potentially inappropriate in certain circumstances). For example, ferrous sulfate is considered to be potentially inappropriate only when used at dosages greater than 325 mg/d, but not potentially inappropriate if used at lower dosages. For this investigation, we included the Beers criteria medications listed in Table 1. Because medication dose, duration, and frequency were not always available in NEISS-CADES and are not reported in NAMCS and NHAMCS, we included medications regardless of dose, duration, or frequency of use. We excluded 3 medications considered to be potentially inappropriate when used in specific formulations (short-acting nifedipine, short-acting oxybutynin, and desiccated thyroid) because NEISS-CADES, NAMCS, and NHAMCS do not reliably identify these formulations. Table 1. 
Potentially Inappropriate Medications for Individuals Age 65 Years or Older The updated Beers criteria identify additional medications as potentially inappropriate if they are prescribed to patients who have certain preexisting conditions. We did not include these medications because they have rarely been used in previous studies or safety measures and NEISS-CADES, NAMCS, and NHAMCS do not reliably identify preexisting conditions. Identification of Emergency Department Visits for Adverse Drug Events We defined an adverse drug event case as an incident emergency department visit by a patient age 65 years or older, from 1 January 2004 to 31 December 2005, for a condition that the treating physician explicitly attributed to the use of a drug or for a drug-specific effect (5). Adverse events include allergic reactions (immunologically mediated effects) (41), adverse effects (undesirable pharmacologic or idiosyncratic effects at recommended doses) (41), unintentional overdoses (toxic effects linked to excess dose or impaired excretion) (41), or secondary effects (such as falls and choking). We excluded cases of intentional self-harm, therapeutic failures, therapy withdrawal, drug abuse, adverse drug events that occurred as a result of medical treatment received during the emergency department visit, and follow-up visits for a previously diagnosed adverse drug event. We defined an adverse drug event from Beers criteria medications as an emergency department visit in which a medication from Table 1 was implicated. Identification of Outpatient Prescription Visits We used the NAMCS and NHAMCS public use data files for the most recent year available (2004) to identify outpatient prescription visits. We defined an outpatient prescription visit as any outpatient office, hospital clinic, or emergency department visit at which treatment with a medication of interest was either started or continued. We identified medications by generic name for those with a single active ingredient and by individual active ingredients for combination products. We categorized visits with at least 1 medication identified in Table 1 as involving Beers criteria medications. Statistical Analysis Each NEISS-CADES, NAMCS, and NHAMCS case is assigned a sample weight on the basis of the inverse probability of selection (33, 4244). We calculated national estimates of emergency department visits and prescription visits by summing the corresponding sample weights, and we calculated 95% CIs by using the SURVEYMEANS procedure in SAS, version 9.1 (SAS Institute, Cary, North Carolina), to account for the sampling strata and clustering by site. To obtain annual estimates of visits for adverse events, we divided NEISS-CADES estimates for 20042005 and corresponding 95% CI end points by 2. Estimates based on small numbers of cases (<20 cases for NEISS-CADES and <30 cases for NAMCS and NHAMCS) or with a coefficient of variation greater than 30% are considered statistically unstable and are identified in the tables. To estimate the risk for adverse events relative to outpatient prescription",
"title": ""
},
{
"docid": "neg:1840406_4",
"text": "Robust and reliable vehicle detection from images acquired by a moving vehicle (i.e., on-road vehicle detection) is an important problem with applications to driver assistance systems and autonomous, self-guided vehicles. The focus of this work is on the issues of feature extraction and classification for rear-view vehicle detection. Specifically, by treating the problem of vehicle detection as a two-class classification problem, we have investigated several different feature extraction methods such as principal component analysis, wavelets, and Gabor filters. To evaluate the extracted features, we have experimented with two popular classifiers, neural networks and support vector machines (SVMs). Based on our evaluation results, we have developed an on-board real-time monocular vehicle detection system that is capable of acquiring grey-scale images, using Ford's proprietary low-light camera, achieving an average detection rate of 10 Hz. Our vehicle detection algorithm consists of two main steps: a multiscale driven hypothesis generation step and an appearance-based hypothesis verification step. During the hypothesis generation step, image locations where vehicles might be present are extracted. This step uses multiscale techniques not only to speed up detection, but also to improve system robustness. The appearance-based hypothesis verification step verifies the hypotheses using Gabor features and SVMs. The system has been tested in Ford's concept vehicle under different traffic conditions (e.g., structured highway, complex urban streets, and varying weather conditions), illustrating good performance.",
"title": ""
},
{
"docid": "neg:1840406_5",
"text": "In several previous papers and particularly in [3] we presented the use of logic equations and their solution using ternary vectors and set-theoretic considerations as well as binary codings and bit-parallel vector operations. In this paper we introduce a new and elegant model for the game of Sudoku that uses the same approach and solves this problem without any search always finding all solutions (including no solutions or several solutions). It can also be extended to larger Sudokus and to a whole class of similar discrete problems, such as Queens’ problems on the chessboard, graph-coloring problems etc. Disadvantages of known SAT approaches for such problems were overcome by our new method.",
"title": ""
},
{
"docid": "neg:1840406_6",
"text": "Homeland Security (HS) is a growing field of study in the U.S. today, generally covering risk management, terrorism studies, policy development, and other topics related to the broad field. Information security threats to both the public and private sectors are growing in intensity, frequency, and severity, and are a very real threat to the security of the nation. While there are many models for information security education at all levels of higher education, these programs are invariably offered as a technical course of study, these curricula are generally not well suited to HS students. As a result, information systems and cyber security principles are under represented in the typical HS program. The authors propose a course of study in cyber security designed to capitalize on the intellectual strengths of students in this discipline and that are consistent with the broad suite of professional needs in this discipline.",
"title": ""
},
{
"docid": "neg:1840406_7",
"text": "The use of Building Information Modeling (BIM) in the construction industry is on the rise. It is widely acknowledged that adoption of BIM would cause a seismic shift in the business processes within the construction industry and related fields. Cost estimation is a key aspect in the workflow of a construction project. Processes within estimating, such as quantity survey and pricing, may be automated by using existing BIM software in combination with existing estimating software. The adoption of this combination of technologies is not as widely seen as might be expected. Researchers conducted a survey of construction practitioners to determine the extent to which estimating processes were automated in the conjunction industry, with the data from a BIM model. Survey participants were asked questions about how BIM was used within their organization and how it was used in the various tasks involved in construction cost estimating. The results of the survey data revealed that while most contractors were using BIM, only a small minority were using it to automate estimating processes. Most organizations reported that employees skilled in BIM did not have the estimating experience to produce working estimates from BIM models and vice-versa. The results of the survey are presented and analyzed to determine conditions that would improve the adoption of these new business processes in the construction estimating field.",
"title": ""
},
{
"docid": "neg:1840406_8",
"text": "We present a novel approach to efficiently learn a label tree for large scale classification with many classes. The key contribution of the approach is a technique to simultaneously determine the structure of the tree and learn the classifiers for each node in the tree. This approach also allows fine grained control over the efficiency vs accuracy trade-off in designing a label tree, leading to more balanced trees. Experiments are performed on large scale image classification with 10184 classes and 9 million images. We demonstrate significant improvements in test accuracy and efficiency with less training time and more balanced trees compared to the previous state of the art by Bengio et al.",
"title": ""
},
{
"docid": "neg:1840406_9",
"text": "Dowser is a ‘guided’ fuzzer that combines taint tracking, program analysis and symbolic execution to find buffer overflow and underflow vulnerabilities buried deep in a program’s logic. The key idea is that analysis of a program lets us pinpoint the right areas in the program code to probe and the appropriate inputs to do so. Intuitively, for typical buffer overflows, we need consider only the code that accesses an array in a loop, rather than all possible instructions in the program. After finding all such candidate sets of instructions, we rank them according to an estimation of how likely they are to contain interesting vulnerabilities. We then subject the most promising sets to further testing. Specifically, we first use taint analysis to determine which input bytes influence the array index and then execute the program symbolically, making only this set of inputs symbolic. By constantly steering the symbolic execution along branch outcomes most likely to lead to overflows, we were able to detect deep bugs in real programs (like the nginx webserver, the inspircd IRC server, and the ffmpeg videoplayer). Two of the bugs we found were previously undocumented buffer overflows in ffmpeg and the poppler PDF rendering library.",
"title": ""
},
{
"docid": "neg:1840406_10",
"text": "RankNet is one of the widely adopted ranking models for web search tasks. However, adapting a generic RankNet for personalized search is little studied. In this paper, we first continue-trained a variety of RankNets with different number of hidden layers and network structures over a previously trained global RankNet model, and observed that a deep neural network with five hidden layers gives the best performance. To further improve the performance of adaptation, we propose a set of novel methods categorized into two groups. In the first group, three methods are proposed to properly assess the usefulness of each adaptation instance and only leverage the most informative instances to adapt a user-specific RankNet model. These assessments are based on KL-divergence, click entropy or a heuristic to ignore top clicks in adaptation queries. In the second group, two methods are proposed to regularize the training of the neural network in RankNet: one of these methods regularize the error back-propagation via a truncated gradient approach, while the other method limits the depth of the back propagation when adapting the neural network. We empirically evaluate our approaches using a large-scale real-world data set. Experimental results exhibit that our methods all give significant improvements over a strong baseline ranking system, and the truncated gradient approach gives the best performance, significantly better than all others.",
"title": ""
},
{
"docid": "neg:1840406_11",
"text": "We formulate dependency parsing as a graphical model with the novel ingredient of global constraints. We show how to apply loopy belief propagation (BP), a simple and effective tool for approximate learning and inference. As a parsing algorithm, BP is both asymptotically and empirically efficient. Even with second-order features or latent variables, which would make exact parsing considerably slower or NP-hard, BP needs only O(n) time with a small constant factor. Furthermore, such features significantly improve parse accuracy over exact first-order methods. Incorporating additional features would increase the runtime additively rather than multiplicatively.",
"title": ""
},
{
"docid": "neg:1840406_12",
"text": "Analysis on a developed dynamic model of the dish-Stirling (DS) system shows that maximum solar energy harness can be realized through controlling the Stirling engine speed. Toward this end, a control scheme is proposed for the doubly fed induction generator coupled to the DS system, as a means to achieve maximum power point tracking as the solar insolation level varies. Furthermore, the adopted fuzzy supervisory control technique is shown to be effective in controlling the temperature of the receiver in the DS system as the speed changes. Simulation results and experimental measurements validate the maximum energy harness ability of the proposed variable-speed DS solar-thermal system.",
"title": ""
},
{
"docid": "neg:1840406_13",
"text": "This document updates RFC 4944, \"Transmission of IPv6 Packets over IEEE 802.15.4 Networks\". This document specifies an IPv6 header compression format for IPv6 packet delivery in Low Power Wireless Personal Area Networks (6LoWPANs). The compression format relies on shared context to allow compression of arbitrary prefixes. How the information is maintained in that shared context is out of scope. This document specifies compression of multicast addresses and a framework for compressing next headers. UDP header compression is specified within this framework. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.",
"title": ""
},
{
"docid": "neg:1840406_14",
"text": "Many organisations are currently involved in implementing Sustainable Supply Chain Management (SSCM) initiatives to address societal expectations and government regulations. Implementation of these initiatives has in turn created complexity due to the involvement of collection, management, control, and monitoring of a wide range of additional information exchanges among trading partners, which was not necessary in the past. Organisations thus would rely more on meaningful support from their IT function to help them implement and operate SSCM practices. Given the growing global recognition of the importance of sustainable supply chain (SSC) practices, existing corporate IT strategy and plans need to be revisited for IT to remain supportive and aligned with new sustainability aspirations of their organisations. Towards this goal, in this paper we report on the development of an IT maturity model specifically designed for SSCM context. The model is built based on four dimensions derived from software process maturity and IS/IT planning literatures. Our proposed model defines four progressive IT maturity stages for corporate IT function to support SSCM implementation initiatives. Some implications of the study finding and several challenges that may potentially hinder acceptance of the model by organisations are discussed.",
"title": ""
},
{
"docid": "neg:1840406_15",
"text": "Face frontalization refers to the process of synthesizing the frontal view of a face from a given profile. Due to self-occlusion and appearance distortion in the wild, it is extremely challenging to recover faithful results and preserve texture details in a high-resolution. This paper proposes a High Fidelity Pose Invariant Model (HF-PIM) to produce photographic and identity-preserving results. HF-PIM frontalizes the profiles through a novel texture warping procedure and leverages a dense correspondence field to bind the 2D and 3D surface spaces. We decompose the prerequisite of warping into dense correspondence field estimation and facial texture map recovering, which are both well addressed by deep networks. Different from those reconstruction methods relying on 3D data, we also propose Adversarial Residual Dictionary Learning (ARDL) to supervise facial texture map recovering with only monocular images. Exhaustive experiments on both controlled and uncontrolled environments demonstrate that the proposed method not only boosts the performance of pose-invariant face recognition but also dramatically improves high-resolution frontalization appearances.",
"title": ""
},
{
"docid": "neg:1840406_16",
"text": "Vehicles are becoming complex software systems with many components and services that need to be coordinated. Service oriented architectures can be used in this domain to support intra-vehicle, inter-vehicles, and vehicle-environment services. Such architectures can be deployed on different platforms, using different communication and coordination paradigms. We argue that practical solutions should be hybrid: they should integrate and support interoperability of different paradigms. We demonstrate the concept by integrating Jini, the service-oriented technology we used within the vehicle, and JXTA, the peer to peer infrastructure we used to support interaction with the environment through a gateway service, called J2J. Initial experience with J2J is illustrated.",
"title": ""
},
{
"docid": "neg:1840406_17",
"text": "Millimeter-wave (mm-wave) wireless local area networks (WLANs) are expected to provide multi-Gbps connectivity by exploiting the large amount of unoccupied spectrum in e.g. the unlicensed 60 GHz band. However, to overcome the high path loss inherent at these high frequencies, mm-wave networks must employ highly directional beamforming antennas, which makes link establishment and maintenance much more challenging than in traditional omnidirectional networks. In particular, maintaining connectivity under node mobility necessitates frequent re-steering of the transmit and receive antenna beams to re-establish a directional mm-wave link. A simple exhaustive sequential scanning to search for new feasible antenna sector pairs may introduce excessive delay, potentially disrupting communication and lowering the QoS. In this paper, we propose a smart beam steering algorithm for fast 60 GHz link re-establishment under node mobility, which uses knowledge of previous feasible sector pairs to narrow the sector search space, thereby reducing the associated latency overhead. We evaluate the performance of our algorithm in several representative indoor scenarios, based on detailed simulations of signal propagation in a 60 GHz WLAN in WinProp with realistic building materials. We study the effect of indoor layout, antenna sector beamwidth, node mobility pattern, and device orientation awareness. Our results show that the smart beam steering algorithm achieves a 7-fold reduction of the sector search space on average, which directly translates into lower 60 GHz link re-establishment latency. Our results also show that our fast search algorithm selects the near-optimal antenna sector pair for link re-establishment.",
"title": ""
},
{
"docid": "neg:1840406_18",
"text": "Koch-shaped dipoles are introduced for the first time in a wideband antenna design and evolve the traditional Euclidean log-periodic dipole array into the log-periodic Koch-dipole array (LPKDA). Antenna size can be reduced while maintaining its overall performance characteristics. Observations and characteristics of both antennas are discussed. Advantages and disadvantages of the proposed LPKDA are validated through a fabricated proof-of-concept prototype that exhibited approximately 12% size reduction with minimal degradation in the impedance and pattern bandwidths. This is the first application of Koch prefractal elements in a miniaturized wideband antenna design.",
"title": ""
},
{
"docid": "neg:1840406_19",
"text": "Recently, a supervised dictionary learning (SDL) approach based on the Hilbert-Schmidt independence criterion (HSIC) has been proposed that learns the dictionary and the corresponding sparse coefficients in a space where the dependency between the data and the corresponding labels is maximized. In this paper, two multiview dictionary learning techniques are proposed based on this HSIC-based SDL. While one of these two techniques learns one dictionary and the corresponding coefficients in the space of fused features in all views, the other learns one dictionary in each view and subsequently fuses the sparse coefficients in the spaces of learned dictionaries. The effectiveness of the proposed multiview learning techniques in using the complementary information of single views is demonstrated in the application of speech emotion recognition (SER). The fully-continuous sub-challenge (FCSC) of the AVEC 2012 dataset is used in two different views: baseline and spectral energy distribution (SED) feature sets. Four dimensional affects, i.e., arousal, expectation, power, and valence are predicted using the proposed multiview methods as the continuous response variables. The results are compared with the single views, AVEC 2012 baseline system, and also other supervised and unsupervised multiview learning approaches in the literature. Using correlation coefficient as the performance measure in predicting the continuous dimensional affects, it is shown that the proposed approach achieves the highest performance among the rivals. The relative performance of the two proposed multiview techniques and their relationship are also discussed. Particularly, it is shown that by providing an additional constraint on the dictionary of one of these approaches, it becomes the same as the other.",
"title": ""
}
] |
1840407 | Hierarchical load forecasting: Gradient boosting machines and Gaussian processes | [
{
"docid": "pos:1840407_0",
"text": "Despite its importance, choosing the structural form of the kernel in nonparametric regression remains a black art. We define a space of kernel structures which are built compositionally by adding and multiplying a small number of base kernels. We present a method for searching over this space of structures which mirrors the scientific discovery process. The learned structures can often decompose functions into interpretable components and enable long-range extrapolation on time-series datasets. Our structure search method outperforms many widely used kernels and kernel combination methods on a variety of prediction tasks.",
"title": ""
}
] | [
{
"docid": "neg:1840407_0",
"text": "Th is book supports an emerging trend toward emphasizing the plurality of digital literacy; recognizing the advantages of understanding digital literacy as digital literacies. In the book world this trend is still marginal. In December 2007, Allan Martin and Dan Madigan’s collection Digital Literacies for Learning (2006) was the only English-language book with “digital literacies” in the title to show up in a search on Amazon.com. Th e plural form fares better among English-language journal articles (e.g., Anderson & Henderson, 2004; Ba, Tally, & Tsikalas, 2002; Bawden, 2001; Doering et al., 2007; Myers, 2006; Snyder, 1999; Th omas, 2004) and conference presentations (e.g., Erstad, 2007; Lin & Lo, 2004; Steinkeuhler, 2005), however, and is now reasonably common in talk on blogs and wikis (e.g., Couros, 2007; Davies, 2007). Nonetheless, talk of digital literacy, in the singular, remains the default mode. Th e authors invited to contribute to this book were chosen in light of three reasons we (the editors) identify as important grounds for promoting the idea of digital literacies in the plural. Th is, of course, does not mean the contributing authors would necessarily subscribe to some or all of these reasons. Th at was",
"title": ""
},
{
"docid": "neg:1840407_1",
"text": "As multicast applications are deployed for mainstream use, the need to secure multicast communications will become critical. Multicast, however, does not fit the point-to-point model of most network security protocols which were designed with unicast communications in mind. As we will show, securing multicast (or group) communications is fundamentally different from securing unicast (or paired) communications. In turn, these differences can result in scalability problems for many typical applications.In this paper, we examine and model the differences between unicast and multicast security and then propose Iolus: a novel framework for scalable secure multicasting. Protocols based on Iolus can be used to achieve a variety of security objectives and may be used either to directly secure multicast communications or to provide a separate group key management service to other \"security-aware\" applications. We describe the architecture and operation of Iolus in detail and also describe our experience with a protocol based on the Iolus framework.",
"title": ""
},
{
"docid": "neg:1840407_2",
"text": "We present the design and implementation of a system for axiomatic programming, and its application to mathematical software construction. Key novelties include a direct support for user-defined axioms establishing local equalities between types, and overload resolution based on equational theories and user-defined local axioms. We illustrate uses of axioms, and their organization into concepts, in structured generic programming as practiced in computational mathematical systems.",
"title": ""
},
{
"docid": "neg:1840407_3",
"text": "Collaboration is the “mutual engagement of participants in a coordinated effort to solve a problem together.” Collaborative interactions are characterized by shared goals, symmetry of structure, and a high degree of negotiation, interactivity, and interdependence. Interactions producing elaborated explanations are particularly valuable for improving student learning. Nonresponsive feedback, on the other hand, can be detrimental to student learning in collaborative situations. Collaboration can have powerful effects on student learning, particularly for low-achieving students. However, a number of factors may moderate the impact of collaboration on student learning, including student characteristics, group composition, and task characteristics. Although historical frameworks offer some guidance as to when and how children acquire and develop collaboration skills, there is scant empirical evidence to support such predictions. However, because many researchers appear to believe children can be taught to collaborate, they urge educators to provide explicit instruction that encourages development of skills such as coordination, communication, conflict resolution, decision-making, problemsolving, and negotiation. Such training should also emphasize desirable qualities of interaction, such as providing elaborated explanations, asking direct and specific questions, and responding appropriately to the requests of others. Teachers should structure tasks in ways that will support the goals of collaboration, specify “ground rules” for interaction, and regulate such interactions. There are a number of challenges in using group-based tasks to assess collaboration. Several suggestions for assessing collaboration skills are made.",
"title": ""
},
{
"docid": "neg:1840407_4",
"text": "This paper presents an approach for online estimation of the extrinsic calibration parameters of a multi-camera rig. Given a coarse initial estimate of the parameters, the relative poses between cameras are refined through recursive filtering. The approach is purely vision based and relies on plane induced homographies between successive frames. Overlapping fields of view are not required. Instead, the ground plane serves as a natural reference object. In contrast to other approaches, motion, relative camera poses, and the ground plane are estimated simultaneously using a single iterated extended Kalman filter. This reduces not only the number of parameters but also the computational complexity. Furthermore, an arbitrary number of cameras can be incorporated. Several experiments on synthetic as well as real data were conducted using a setup of four synchronized wide angle fisheye cameras, mounted on a moving platform. Results were obtained, using both, a planar and a general motion model with full six degrees of freedom. Additionally, the effects of uncertain intrinsic parameters and nonplanar ground were evaluated experimentally.",
"title": ""
},
{
"docid": "neg:1840407_5",
"text": "Taurine is a natural amino acid present as free form in many mammalian tissues and in particular in skeletal muscle. Taurine exerts many physiological functions, including membrane stabilization, osmoregulation and cytoprotective effects, antioxidant and anti-inflammatory actions as well as modulation of intracellular calcium concentration and ion channel function. In addition taurine may control muscle metabolism and gene expression, through yet unclear mechanisms. This review summarizes the effects of taurine on specific muscle targets and pathways as well as its therapeutic potential to restore skeletal muscle function and performance in various pathological conditions. Evidences support the link between alteration of intracellular taurine level in skeletal muscle and different pathophysiological conditions, such as disuse-induced muscle atrophy, muscular dystrophy and/or senescence, reinforcing the interest towards its exogenous supplementation. In addition, taurine treatment can be beneficial to reduce sarcolemmal hyper-excitability in myotonia-related syndromes. Although further studies are necessary to fill the gaps between animals and humans, the benefit of the amino acid appears to be due to its multiple actions on cellular functions while toxicity seems relatively low. Human clinical trials using taurine in various pathologies such as diabetes, cardiovascular and neurological disorders have been performed and may represent a guide-line for designing specific studies in patients of neuromuscular diseases.",
"title": ""
},
{
"docid": "neg:1840407_6",
"text": "Designing a practical test automation architecture provides a solid foundation for a successful automation effort. This paper describes key elements of automated testing that need to be considered, models for testing that can be used for designing a test automation architecture, and considerations for successfully combining the elements to form an automated test environment. The paper first develops a general framework for discussion of software testing and test automation. This includes a definition of test automation, a model for software tests, and a discussion of test oracles. The remainder of the paper focuses on using the framework to plan for a test automation architecture that addresses the requirements for the specific software under test (SUT).",
"title": ""
},
{
"docid": "neg:1840407_7",
"text": "Ensemble learning has been proved to improve the generalization ability effectively in both theory and practice. In this paper, we briefly outline the current status of research on it first. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database containing a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost.",
"title": ""
},
{
"docid": "neg:1840407_8",
"text": "Over the past years, state-of-the-art information extraction (IE) systems such as NELL [5] and ReVerb [9] have achieved impressive results by producing very large knowledge resources at web scale with minimal supervision. However, these resources lack the schema information, exhibit a high degree of ambiguity, and are difficult even for humans to interpret. Working with such resources becomes easier if there is a structured information base to which the resources can be linked. In this paper, we introduce the integration of open information extraction projects with Wikipedia-based IE projects that maintain a logical schema, as an important challenge for the NLP, semantic web, and machine learning communities. We describe the problem, present a gold-standard benchmark, and take the first steps towards a data-driven solution to the problem. This is especially promising, since NELL and ReVerb typically achieve a very large coverage, but still still lack a fullfledged clean ontological structure which, on the other hand, could be provided by large-scale ontologies like DBpedia [2] or YAGO [13].",
"title": ""
},
{
"docid": "neg:1840407_9",
"text": "We present a framework for precomputed volume radiance transfer that achieves real-time rendering of global illumination effects for volume data sets such as multiple scattering, volumetric shadows, and so on. Our approach incorporates the volumetric photon mapping method into the classical precomputed radiance transfer pipeline. We contribute several techniques for light approximation, radiance transfer precomputation, and real-time radiance estimation, which are essential to make the approach practical and to achieve high frame rates. For light approximation, we propose a new discrete spherical function that has better performance for construction and evaluation when compared with existing rotational invariant spherical functions such as spherical harmonics and spherical radial basis functions. In addition, we present a fast splatting-based radiance transfer precomputation method and an early evaluation technique for real-time radiance estimation in the clustered principal component analysis space. Our techniques are validated through comprehensive evaluations and rendering tests. We also apply our rendering approach to volume visualization.",
"title": ""
},
{
"docid": "neg:1840407_10",
"text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.05.068 ⇑ Tel.: +886 7 3814526. E-mail address: leechung@mail.ee.kuas.edu.tw Due to the explosive growth of social-media applications, enhancing event-awareness by social mining has become extremely important. The contents of microblogs preserve valuable information associated with past disastrous events and stories. To learn the experiences from past events for tackling emerging real-world events, in this work we utilize the social-media messages to characterize real-world events through mining their contents and extracting essential features for relatedness analysis. On one hand, we established an online clustering approach on Twitter microblogs for detecting emerging events, and meanwhile we performed event relatedness evaluation using an unsupervised clustering approach. On the other hand, we developed a supervised learning model to create extensible measure metrics for offline evaluation of event relatedness. By means of supervised learning, our developed measure metrics are able to compute relatedness of various historical events, allowing the event impacts on specified domains to be quantitatively measured for event comparison. By combining the strengths of both methods, the experimental results showed that the combined framework in our system is sensible for discovering more unknown knowledge about event impacts and enhancing event awareness. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840407_11",
"text": "Two dimensional (2D) materials with a monolayer of atoms represent an ultimate control of material dimension in the vertical direction. Molybdenum sulfide (MoS2) monolayers, with a direct bandgap of 1.8 eV, offer an unprecedented prospect of miniaturizing semiconductor science and technology down to a truly atomic scale. Recent studies have indeed demonstrated the promise of 2D MoS2 in fields including field effect transistors, low power switches, optoelectronics, and spintronics. However, device development with 2D MoS2 has been delayed by the lack of capabilities to produce large-area, uniform, and high-quality MoS2 monolayers. Here we present a self-limiting approach that can grow high quality monolayer and few-layer MoS2 films over an area of centimeters with unprecedented uniformity and controllability. This approach is compatible with the standard fabrication process in semiconductor industry. It paves the way for the development of practical devices with 2D MoS2 and opens up new avenues for fundamental research.",
"title": ""
},
{
"docid": "neg:1840407_12",
"text": "This paper presents an unsupervised distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image by using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local area energy are chosen to fuse the wavelet coefficients for a low-frequency band and a high-frequency band, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and of reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains a better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than its preexistences.",
"title": ""
},
{
"docid": "neg:1840407_13",
"text": "A residual network (or ResNet) is a standard deep neural net architecture, with stateof-the-art performance across numerous applications. The main premise of ResNets is that they allow the training of each layer to focus on fitting just the residual of the previous layer’s output and the target output. Thus, we should expect that the trained network is no worse than what we can obtain if we remove the residual layers and train a shallower network instead. However, due to the non-convexity of the optimization problem, it is not at all clear that ResNets indeed achieve this behavior, rather than getting stuck at some arbitrarily poor local minimum. In this paper, we rigorously prove that arbitrarily deep, nonlinear residual units indeed exhibit this behavior, in the sense that the optimization landscape contains no local minima with value above what can be obtained with a linear predictor (namely a 1-layer network). Notably, we show this under minimal or no assumptions on the precise network architecture, data distribution, or loss function used. We also provide a quantitative analysis of approximate stationary points for this problem. Finally, we show that with a certain tweak to the architecture, training the network with standard stochastic gradient descent achieves an objective value close or better than any linear predictor.",
"title": ""
},
{
"docid": "neg:1840407_14",
"text": "This paper focuses on the micro-blogging service Twitter, looking at source credibility for information shared in relation to the Fukushima Daiichi nuclear power plant disaster in Japan. We look at the sources, credibility, and between-language differences in information shared in the month following the disaster. Messages were categorized by user, location, language, type, and credibility of information source. Tweets with reference to third-party information made up the bulk of messages sent, and it was also found that a majority of those sources were highly credible, including established institutions, traditional media outlets, and highly credible individuals. In general, profile anonymity proved to be correlated with a higher propensity to share information from low credibility sources. However, Japanese-language tweeters, while more likely to have anonymous profiles, referenced lowcredibility sources less often than non-Japanese tweeters, suggesting proximity to the disaster mediating the degree of credibility of shared content.",
"title": ""
},
{
"docid": "neg:1840407_15",
"text": "Thumbnail images provide users of image retrieval and browsing systems with a method for quickly scanning large numbers of images. Recognizing the objects in an image is important in many retrieval tasks, but thumbnails generated by shrinking the original image often render objects illegible. We study the ability of computer vision systems to detect key components of images so that automated cropping, prior to shrinking, can render objects more recognizable. We evaluate automatic cropping techniques 1) based on a general method that detects salient portions of images, and 2) based on automatic face detection. Our user study shows that these methods result in small thumbnails that are substantially more recognizable and easier to find in the context of visual search.",
"title": ""
},
{
"docid": "neg:1840407_16",
"text": "This paper presents a compliant locomotion framework for torque-controlled humanoids using model-based whole-body control. In order to stabilize the centroidal dynamics during locomotion, we compute linear momentum rate of change objectives using a novel time-varying controller for the Divergent Component of Motion (DCM). Task-space objectives, including the desired momentum rate of change, are tracked using an efficient quadratic program formulation that computes optimal joint torque setpoints given frictional contact constraints and joint position / torque limits. In order to validate the effectiveness of the proposed approach, we demonstrate push recovery and compliant walking using THOR, a 34 DOF humanoid with series elastic actuation. We discuss details leading to the successful implementation of optimization-based whole-body control on our hardware platform, including the design of a “simple” joint impedance controller that introduces inner-loop velocity feedback into the actuator force controller.",
"title": ""
},
{
"docid": "neg:1840407_17",
"text": "A general theory of addictions is proposed, using the compulsive gambler as the prototype. Addiction is defined as a dependent state acquired over time to relieve stress. Two interrelated sets of factors predispose persons to addictions: an abnormal physiological resting state, and childhood experiences producing a deep sense of inadequacy. All addictions are hypothesized to follow a similar three-stage course. A matrix strategy is outlined to collect similar information from different kinds of addicts and normals. The ultimate objective is to identify high risk youth and prevent the development of addictions.",
"title": ""
},
{
"docid": "neg:1840407_18",
"text": "Venue recommendation is an important application for Location-Based Social Networks (LBSNs), such as Yelp, and has been extensively studied in recent years. Matrix Factorisation (MF) is a popular Collaborative Filtering (CF) technique that can suggest relevant venues to users based on an assumption that similar users are likely to visit similar venues. In recent years, deep neural networks have been successfully applied to tasks such as speech recognition, computer vision and natural language processing. Building upon this momentum, various approaches for recommendation have been proposed in the literature to enhance the effectiveness of MF-based approaches by exploiting neural network models such as: word embeddings to incorporate auxiliary information (e.g. textual content of comments); and Recurrent Neural Networks (RNN) to capture sequential properties of observed user-venue interactions. However, such approaches rely on the traditional inner product of the latent factors of users and venues to capture the concept of collaborative filtering, which may not be sufficient to capture the complex structure of user-venue interactions. In this paper, we propose a Deep Recurrent Collaborative Filtering framework (DRCF) with a pairwise ranking function that aims to capture user-venue interactions in a CF manner from sequences of observed feedback by leveraging Multi-Layer Perception and Recurrent Neural Network architectures. Our proposed framework consists of two components: namely Generalised Recurrent Matrix Factorisation (GRMF) and Multi-Level Recurrent Perceptron (MLRP) models. In particular, GRMF and MLRP learn to model complex structures of user-venue interactions using element-wise and dot products as well as the concatenation of latent factors. In addition, we propose a novel sequence-based negative sampling approach that accounts for the sequential properties of observed feedback and geographical location of venues to enhance the quality of venue suggestions, as well as alleviate the cold-start users problem. Experiments on three large checkin and rating datasets show the effectiveness of our proposed framework by outperforming various state-of-the-art approaches.",
"title": ""
},
{
"docid": "neg:1840407_19",
"text": "This paper presents the state of art research progress on multilingual multi-document summarization. Our method utilizes hLDA (hierarchical Latent Dirichlet Allocation) algorithm to model the documents firstly. A new feature is proposed from the hLDA modeling results, which can reflect semantic information to some extent. Then it combines this new feature with different other features to perform sentence scoring. According to the results of sentence score, it extracts candidate summary sentences from the documents to generate a summary. We have also attempted to verify the effectiveness and robustness of the new feature through experiments. After the comparison with other summarization methods, our method reveals better performance in some respects.",
"title": ""
}
] |
1840408 | Coming of Age (Digitally): An Ecological View of Social Media Use among College Students | [
{
"docid": "pos:1840408_0",
"text": "Twitter is now used to distribute substantive content such as breaking news, increasing the importance of assessing the credibility of tweets. As users increasingly access tweets through search, they have less information on which to base credibility judgments as compared to consuming content from direct social network connections. We present survey results regarding users' perceptions of tweet credibility. We find a disparity between features users consider relevant to credibility assessment and those currently revealed by search engines. We then conducted two experiments in which we systematically manipulated several features of tweets to assess their impact on credibility ratings. We show that users are poor judges of truthfulness based on content alone, and instead are influenced by heuristics such as user name when making credibility assessments. Based on these findings, we discuss strategies tweet authors can use to enhance their credibility with readers (and strategies astute readers should be aware of!). We propose design improvements for displaying social search results so as to better convey credibility.",
"title": ""
},
{
"docid": "pos:1840408_1",
"text": "Though social network site use is often treated as a monolithic activity, in which all time is equally social and its impact the same for all users, we examine how Facebook affects social capital depending upon: (1) types of site activities, contrasting one-on-one communication, broadcasts to wider audiences, and passive consumption of social news, and (2) individual differences among users, including social communication skill and self-esteem. Longitudinal surveys matched to server logs from 415 Facebook users reveal that receiving messages from friends is associated with increases in bridging social capital, but that other uses are not. However, using the site to passively consume news assists those with lower social fluency draw value from their connections. The results inform site designers seeking to increase social connectedness and the value of those connections.",
"title": ""
},
{
"docid": "pos:1840408_2",
"text": "The negative aspects of smartphone overuse on young adults, such as sleep deprivation and attention deficits, are being increasingly recognized recently. This emerging issue motivated us to analyze the usage patterns related to smartphone overuse. We investigate smartphone usage for 95 college students using surveys, logged data, and interviews. We first divide the participants into risk and non-risk groups based on self-reported rating scale for smartphone overuse. We then analyze the usage data to identify between-group usage differences, which ranged from the overall usage patterns to app-specific usage patterns. Compared with the non-risk group, our results show that the risk group has longer usage time per day and different diurnal usage patterns. Also, the risk group users are more susceptible to push notifications, and tend to consume more online content. We characterize the overall relationship between usage features and smartphone overuse using analytic modeling and provide detailed illustrations of problematic usage behaviors based on interview data.",
"title": ""
},
{
"docid": "pos:1840408_3",
"text": "0747-5632/$ see front matter 2010 Elsevier Ltd. A doi:10.1016/j.chb.2010.02.004 * Corresponding author. E-mail address: malinda.desjarlais@gmail.com (M. Using computers with friends either in person or online has become ubiquitous in the life of most adolescents; however, little is known about the complex relation between this activity and friendship quality. This study examined direct support for the social compensation and rich-get-richer hypotheses among adolescent girls and boys by including social anxiety as a moderating factor. A sample of 1050 adolescents completed a survey in grade 9 and then again in grades 11 and 12. For girls, there was a main effect of using computers with friends on friendship quality; providing support for both hypotheses. For adolescent boys, however, social anxiety moderated this relation, supporting the social compensation hypothesis. These findings were identical for online communication and were stable throughout adolescence. Furthermore, participating in organized sports did not compensate for social anxiety for either adolescent girls or boys. Therefore, characteristics associated with using computers with friends may create a comfortable environment for socially anxious adolescents to interact with their peers which may be distinct from other more traditional adolescent activities. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "neg:1840408_0",
"text": "UNLABELLED\nThe need for long-term retention to prevent post-treatment tooth movement is now widely accepted by orthodontists. This may be achieved with removable retainers or permanent bonded retainers. This article aims to provide simple guidance for the dentist on how to maintain and repair both removable and fixed retainers.\n\n\nCLINICAL RELEVANCE\nThe general dental practitioner is more likely to review patients over time and needs to be aware of the need for long-term retention and how to maintain and repair the retainers.",
"title": ""
},
{
"docid": "neg:1840408_1",
"text": "Facebook is rapidly gaining recognition as a powerful research tool for the social sciences. It constitutes a large and diverse pool of participants, who can be selectively recruited for both online and offline studies. Additionally, it facilitates data collection by storing detailed records of its users' demographic profiles, social interactions, and behaviors. With participants' consent, these data can be recorded retrospectively in a convenient, accurate, and inexpensive way. Based on our experience in designing, implementing, and maintaining multiple Facebook-based psychological studies that attracted over 10 million participants, we demonstrate how to recruit participants using Facebook, incentivize them effectively, and maximize their engagement. We also outline the most important opportunities and challenges associated with using Facebook for research, provide several practical guidelines on how to successfully implement studies on Facebook, and finally, discuss ethical considerations.",
"title": ""
},
{
"docid": "neg:1840408_2",
"text": "PURPOSE\nVirtual reality devices, including virtual reality head-mounted displays, are becoming increasingly accessible to the general public as technological advances lead to reduced costs. However, there are numerous reports that adverse effects such as ocular discomfort and headache are associated with these devices. To investigate these adverse effects, questionnaires that have been specifically designed for other purposes such as investigating motion sickness have often been used. The primary purpose of this study was to develop a standard questionnaire for use in investigating symptoms that result from virtual reality viewing. In addition, symptom duration and whether priming subjects elevates symptom ratings were also investigated.\n\n\nMETHODS\nA list of the most frequently reported symptoms following virtual reality viewing was determined from previously published studies and used as the basis for a pilot questionnaire. The pilot questionnaire, which consisted of 12 nonocular and 11 ocular symptoms, was administered to two groups of eight subjects. One group was primed by having them complete the questionnaire before immersion; the other group completed the questionnaire postviewing only. Postviewing testing was carried out immediately after viewing and then at 2-min intervals for a further 10 min.\n\n\nRESULTS\nPriming subjects did not elevate symptom ratings; therefore, the data were pooled and 16 symptoms were found to increase significantly. The majority of symptoms dissipated rapidly, within 6 min after viewing. Frequency of endorsement data showed that approximately half of the symptoms on the pilot questionnaire could be discarded because <20% of subjects experienced them.\n\n\nCONCLUSIONS\nSymptom questionnaires to investigate virtual reality viewing can be administered before viewing, without biasing the findings, allowing calculation of the amount of change from pre- to postviewing. However, symptoms dissipate rapidly and assessment of symptoms needs to occur in the first 5 min postviewing. Thirteen symptom questions, eight nonocular and five ocular, were determined to be useful for a questionnaire specifically related to virtual reality viewing using a head-mounted display.",
"title": ""
},
{
"docid": "neg:1840408_3",
"text": "The need for building robots with soft materials emerged recently from considerations of the limitations of service robots in negotiating natural environments, from observation of the role of compliance in animals and plants [1], and even from the role attributed to the physical body in movement control and intelligence, in the so-called embodied intelligence or morphological computation paradigm [2]-[4]. The wide spread of soft robotics relies on numerous investigations of diverse materials and technologies for actuation and sensing, and on research of control techniques, all of which can serve the purpose of building robots with high deformability and compliance. But the core challenge of soft robotics research is, in fact, the variability and controllability of such deformability and compliance.",
"title": ""
},
{
"docid": "neg:1840408_4",
"text": "The use of tablet PCs is spreading rapidly, and accordingly users browsing and inputting personal information in public spaces can often be seen by third parties. Unlike conventional mobile phones and notebook PCs equipped with distinct input devices (e.g., keyboards), tablet PCs have touchscreen keyboards for data input. Such integration of display and input device increases the potential for harm when the display is captured by malicious attackers. This paper presents the description of reconstructing tablet PC displays via measurement of electromagnetic (EM) emanation. In conventional studies, such EM display capture has been achieved by using non-portable setups. Those studies also assumed that a large amount of time was available in advance of capture to obtain the electrical parameters of the target display. In contrast, this paper demonstrates that such EM display capture is feasible in real time by a setup that fits in an attaché case. The screen image reconstruction is achieved by performing a prior course profiling and a complemental signal processing instead of the conventional fine parameter tuning. Such complemental processing can eliminate the differences of leakage parameters among individuals and therefore correct the distortions of images. The attack distance, 2 m, makes this method a practical threat to general tablet PCs in public places. This paper discusses possible attack scenarios based on the setup described above. In addition, we describe a mechanism of EM emanation from tablet PCs and a countermeasure against such EM display capture.",
"title": ""
},
{
"docid": "neg:1840408_5",
"text": "Visual place recognition and loop closure is critical for the global accuracy of visual Simultaneous Localization and Mapping (SLAM) systems. We present a place recognition algorithm which operates by matching local query image sequences to a database of image sequences. To match sequences, we calculate a matrix of low-resolution, contrast-enhanced image similarity probability values. The optimal sequence alignment, which can be viewed as a discontinuous path through the matrix, is found using a Hidden Markov Model (HMM) framework reminiscent of Dynamic Time Warping from speech recognition. The state transitions enforce local velocity constraints and the most likely path sequence is recovered efficiently using the Viterbi algorithm. A rank reduction on the similarity probability matrix is used to provide additional robustness in challenging conditions when scoring sequence matches. We evaluate our approach on seven outdoor vision datasets and show improved precision-recall performance against the recently published seqSLAM algorithm.",
"title": ""
},
{
"docid": "neg:1840408_6",
"text": "Analysing the behaviour of student performance in classroom education is an active area in educational research. Early prediction of student performance may be helpful for both teacher and the student. However, the influencing factors of the student performance need to be identified first to build up such early prediction model. The existing data mining literature on student performance primarily focuses on student-related factors, though it may be influenced by many external factors also. Superior teaching acts as a catalyst which improves the knowledge dissemination process from teacher to the student. It also motivates the student to put more effort on the study. However, the research question, how the performance or grade correlates with teaching, is still relevant in present days. In this work, we propose a quantifiable measure of improvement with respect to the expected performance of a student. Furthermore, this study analyses the impact of teaching on performance improvement in theoretical courses of classroom-based education. It explores nearly 0.2 million academic records collected from an online system of an academic institute of national importance in India. The association mining approach has been adopted here and the result shows that confidence of both non-negative and positive improvements increase with superior teaching. This result indeed establishes the fact that teaching has a positive impact on student performance. To be more specific, the growing confidence of non-negative and positive improvements indicate that superior teaching facilitates more students to obtain either expected or better than expected grade.",
"title": ""
},
{
"docid": "neg:1840408_7",
"text": "Existing vector space models typically map synonyms and antonyms to similar word vectors, and thus fail to represent antonymy. We introduce a new vector space representation where antonyms lie on opposite sides of a sphere: in the word vector space, synonyms have cosine similarities close to one, while antonyms are close to minus one. We derive this representation with the aid of a thesaurus and latent semantic analysis (LSA). Each entry in the thesaurus – a word sense along with its synonyms and antonyms – is treated as a “document,” and the resulting document collection is subjected to LSA. The key contribution of this work is to show how to assign signs to the entries in the co-occurrence matrix on which LSA operates, so as to induce a subspace with the desired property. We evaluate this procedure with the Graduate Record Examination questions of (Mohammed et al., 2008) and find that the method improves on the results of that study. Further improvements result from refining the subspace representation with discriminative training, and augmenting the training data with general newspaper text. Altogether, we improve on the best previous results by 11 points absolute in F measure.",
"title": ""
},
{
"docid": "neg:1840408_8",
"text": "Open-source intelligence offers value in information security decision making through knowledge of threats and malicious activities that potentially impact business. Open-source intelligence using the internet is common, however, using the darknet is less common for the typical cybersecurity analyst. The challenges to using the darknet for open-source intelligence includes using specialized collection, processing, and analysis tools. While researchers share techniques, there are few publicly shared tools; therefore, this paper explores an open-source intelligence automation toolset that scans across the darknet connecting, collecting, processing, and analyzing. It describes and shares the tools and processes to build a secure darknet connection, and then how to collect, process, store, and analyze data. Providing tools and processes serves as an on-ramp for cybersecurity intelligence analysts to search for threats. Future studies may refine, expand, and deepen this paper's toolset framework. © 2 01 7 T he SA NS In sti tut e, Au tho r R eta ins Fu ll R igh ts © 2017 The SANS Institute Author retains full rights. Data Mining in the Dark 2 Nafziger, Brian",
"title": ""
},
{
"docid": "neg:1840408_9",
"text": "We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context to discuss the technology by reviewing several medical applications of augmented-reality re search efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and human-factor point of view. Finally, we point to potentially promising future developments of such devices including eye tracking and multifocus planes capabilities, as well as hybrid optical/video technology.",
"title": ""
},
{
"docid": "neg:1840408_10",
"text": "The genome of a cancer cell carries somatic mutations that are the cumulative consequences of the DNA damage and repair processes operative during the cellular lineage between the fertilized egg and the cancer cell. Remarkably, these mutational processes are poorly characterized. Global sequencing initiatives are yielding catalogs of somatic mutations from thousands of cancers, thus providing the unique opportunity to decipher the signatures of mutational processes operative in human cancer. However, until now there have been no theoretical models describing the signatures of mutational processes operative in cancer genomes and no systematic computational approaches are available to decipher these mutational signatures. Here, by modeling mutational processes as a blind source separation problem, we introduce a computational framework that effectively addresses these questions. Our approach provides a basis for characterizing mutational signatures from cancer-derived somatic mutational catalogs, paving the way to insights into the pathogenetic mechanism underlying all cancers.",
"title": ""
},
{
"docid": "neg:1840408_11",
"text": "A new record conversion efficiency of 24.7% was attained at the research level by using a heterojunction with intrinsic thin-layer structure of practical size (101.8 cm2, total area) at a 98-μm thickness. This is a world height record for any crystalline silicon-based solar cell of practical size (100 cm2 and above). Since we announced our former record of 23.7%, we have continued to reduce recombination losses at the hetero interface between a-Si and c-Si along with cutting down resistive losses by improving the silver paste with lower resistivity and optimization of the thicknesses in a-Si layers. Using a new technology that enables the formation of a-Si layer of even higher quality on the c-Si substrate, while limiting damage to the surface of the substrate, the Voc has been improved from 0.745 to 0.750 V. We also succeeded in improving the fill factor from 0.809 to 0.832.",
"title": ""
},
{
"docid": "neg:1840408_12",
"text": "Fischler PER •Sequence of tokens mapped to word embeddings. •Bidirectional LSTM builds context-dependent representations for each word. •A small feedforward layer encourages generalisation. •Conditional Random Field (CRF) at the top outputs the most optimal label sequence for the sentence. •Using character-based dynamic embeddings (Rei et al., 2016) to capture morphological patterns and unseen words.",
"title": ""
},
{
"docid": "neg:1840408_13",
"text": "The recognition of boundaries, e.g., between chorus and verse, is an important task in music structure analysis. The goal is to automatically detect such boundaries in audio signals so that the results are close to human annotation. In this work, we apply Convolutional Neural Networks to the task, trained directly on mel-scaled magnitude spectrograms. On a representative subset of the SALAMI structural annotation dataset, our method outperforms current techniques in terms of boundary retrieval F -measure at different temporal tolerances: We advance the state-of-the-art from 0.33 to 0.46 for tolerances of±0.5 seconds, and from 0.52 to 0.62 for tolerances of ±3 seconds. As the algorithm is trained on annotated audio data without the need of expert knowledge, we expect it to be easily adaptable to changed annotation guidelines and also to related tasks such as the detection of song transitions.",
"title": ""
},
{
"docid": "neg:1840408_14",
"text": "Contemporary games are making significant strides towards offering complex, immersive experiences for players. We can now explore sprawling 3D virtual environments populated by beautifully rendered characters and objects with autonomous behavior, engage in highly visceral action-oriented experiences offering a variety of missions with multiple solutions, and interact in ever-expanding online worlds teeming with physically customizable player avatars.",
"title": ""
},
{
"docid": "neg:1840408_15",
"text": "Benchmarks have played a vital role in the advancement of visual object recognition and other fields of computer vision (LeCun et al., 1998; Deng et al., 2009; ). The challenges posed by these standard datasets have helped identify and overcome the shortcomings of existing approaches, and have led to great advances of the state of the art. Even the recent massive increase of interest in deep learning methods can be attributed to their success in difficult benchmarks such as ImageNet (Krizhevsky et al., 2012; LeCun et al., 2015). Neuromorphic vision uses silicon retina sensors such as the dynamic vision sensor (DVS; Lichtsteiner et al., 2008). These sensors and their DAVIS (Dynamic and Activepixel Vision Sensor) and ATIS (Asynchronous Time-based Image Sensor) derivatives (Brandli et al., 2014; Posch et al., 2014) are inspired by biological vision by generating streams of asynchronous events indicating local log-intensity brightness changes. They thereby greatly reduce the amount of data to be processed, and their dynamic nature makes them a good fit for domains such as optical flow, object tracking, action recognition, or dynamic scene understanding. Compared to classical computer vision, neuromorphic vision is a younger and much smaller field of research, and lacks benchmarks, which impedes the progress of the field. To address this we introduce the largest event-based vision benchmark dataset published to date, hoping to satisfy a growing demand and stimulate challenges for the community. In particular, the availability of such benchmarks should help the development of algorithms processing event-based vision input, allowing a direct fair comparison of different approaches. We have explicitly chosen mostly dynamic vision tasks such as action recognition or tracking, which could benefit from the strengths of neuromorphic vision sensors, although algorithms that exploit these features are largely missing. A major reason for the lack of benchmarks is that currently neuromorphic vision sensors are only available as R&D prototypes. Nonetheless, there are several datasets already available; see Tan et al. (2015) for an informative review. Unlabeled DVS data was made available around 2007 in the jAER project1 and was used for development of spike timing-based unsupervised feature learning e.g., in Bichler et al. (2012). The first labeled and published event-based neuromorphic vision sensor benchmarks were created from the MNIST digit recognition dataset by jiggling the image on the screen (see Serrano-Gotarredona and Linares-Barranco, 2015 for an informative history) and later to reduce frame artifacts by jiggling the camera view with a pan-tilt unit (Orchard et al., 2015). These datasets automated the scene movement necessary to generate DVS output from the static images, and will be an important step forward for evaluating neuromorphic object recognition systems such as spiking deep networks (Pérez-Carrasco et al., 2013; O’Connor et al., 2013; Cao et al., 2014; Diehl et al., 2015), which so far have been tested mostly on static image datasets converted",
"title": ""
},
{
"docid": "neg:1840408_16",
"text": "This paper presents a comparative survey of research activities and emerging technologies of solid-state fault current limiters for power distribution systems.",
"title": ""
},
{
"docid": "neg:1840408_17",
"text": "In-vehicle electronic equipment aims to increase safety, by detecting risk factors and taking/suggesting corrective actions. This paper presents a knowledge-based framework for assisting a driver via her PDA. Car data extracted under On Board Diagnostics (OBD-II) protocol, data acquired from PDA embedded micro-devices and information retrieved from the Web are properly combined: a simple data fusion algorithm has been devised to collect and semantically annotate relevant safety events. Finally, a logic-based matchmaking allows to infer potential risk factors, enabling the system to issue accurate and timely warnings. The proposed approach has been implemented in a prototypical application for the Apple iPhone platform, in order to provide experimental evaluation in real-world test drives for corroborating the approach. Keywords-Semantic Web; On Board Diagnostics; Ubiquitous Computing; Data Fusion; Intelligent Transportation Systems",
"title": ""
},
{
"docid": "neg:1840408_18",
"text": "Introduction: The tooth mobility due to periodontal bone loss can cause masticatory discomfort, mainly in protrusive movements in the region of the mandibular anterior teeth. Thus, the splinting is a viable alternative to keep them in function satisfactorily. Objective: This study aimed to demonstrate, through a clinical case with medium-term following-up, the clinical application of splinting with glass fiber-reinforced composite resin. Case report: Female patient, 73 years old, complained about masticatory discomfort related to the right mandibular lateral incisor. Clinical and radiographic evaluation showed grade 2 dental mobility, bone loss and increased periodontal ligament space. The proposed treatment was splinting with glass fiber-reinforced composite resin from the right mandibular canine to left mandibular canine. Results: Four-year follow-up showed favorable clinical and radiographic results with respect to periodontal health and maintenance of functional aspects. Conclusion: The splinting with glass fiber-reinforced composite resin is a viable technique and stable over time for the treatment of tooth mobility.",
"title": ""
}
] |
1840409 | Automatic video description generation via LSTM with joint two-stream encoding | [
{
"docid": "pos:1840409_0",
"text": "Real-world videos often have complex dynamics, methods for generating open-domain video descriptions should be sensitive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD).",
"title": ""
},
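A minimal PyTorch sketch in the spirit of the sequence-to-sequence LSTM captioner described in the passage above. The single-layer encoder/decoder split, the layer sizes, and the teacher-forcing setup are illustrative assumptions, not the authors' exact architecture; frame features (e.g., CNN activations) are assumed to be computed elsewhere.

```python
# Illustrative sketch only: a sequence-to-sequence LSTM video captioner in PyTorch.
# Feature dimension, hidden size, and vocabulary size are made-up values.
import torch
import torch.nn as nn

class Seq2SeqCaptioner(nn.Module):
    def __init__(self, feat_dim=4096, hidden=512, vocab_size=10000, embed=256):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)   # reads frame features
        self.decoder = nn.LSTM(embed, hidden, batch_first=True)      # emits the sentence
        self.embed = nn.Embedding(vocab_size, embed)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, frame_feats, captions):
        # frame_feats: (batch, n_frames, feat_dim); captions: (batch, n_words) token ids
        _, state = self.encoder(frame_feats)          # summarize the video into the LSTM state
        dec_in = self.embed(captions[:, :-1])         # teacher forcing on the ground-truth words
        dec_out, _ = self.decoder(dec_in, state)
        return self.out(dec_out)                      # (batch, n_words-1, vocab_size) logits

model = Seq2SeqCaptioner()
logits = model(torch.randn(2, 30, 4096), torch.randint(0, 10000, (2, 12)))
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 10000),
                             torch.randint(0, 10000, (2, 11)).reshape(-1))
```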
{
"docid": "pos:1840409_1",
"text": "Generating descriptions for videos has many applications including assisting blind people and human-robot interaction. The recent advances in image captioning as well as the release of large-scale movie description datasets such as MPII-MD [28] allow to study this task in more depth. Many of the proposed methods for image captioning rely on pre-trained object classifier CNNs and Long-Short Term Memory recurrent networks (LSTMs) for generating descriptions. While image description focuses on objects, we argue that it is important to distinguish verbs, objects, and places in the challenging setting of movie description. In this work we show how to learn robust visual classifiers from the weak annotations of the sentence descriptions. Based on these visual classifiers we learn how to generate a description using an LSTM. We explore different design choices to build and train the LSTM and achieve the best performance to date on the challenging MPII-MD dataset. We compare and analyze our approach and prior work along various dimensions to better understand the key challenges of the movie description task.",
"title": ""
},
{
"docid": "pos:1840409_2",
"text": "We present a new approach for modeling multi-modal data sets, focusing on the specific case of segmented images with associated text. Learning the joint distribution of image regions and words has many applications. We consider in detail predicting words associated with whole images (auto-annotation) and corresponding to particular image regions (region naming). Auto-annotation might help organize and access large collections of images. Region naming is a model of object recognition as a process of translating image regions to words, much as one might translate from one language to another. Learning the relationships between image regions and semantic correlates (words) is an interesting example of multi-modal data mining, particularly because it is typically hard to apply data mining techniques to collections of images. We develop a number of models for the joint distribution of image regions and words, including several which explicitly learn the correspondence between regions and words. We study multi-modal and correspondence extensions to Hofmann’s hierarchical clustering/aspect model, a translation model adapted from statistical machine translation (Brown et al.), and a multi-modal extension to mixture of latent Dirichlet allocation (MoM-LDA). All models are assessed using a large collection of annotated images of real c ©2003 Kobus Barnard, Pinar Duygulu, David Forsyth, Nando de Freitas, David Blei and Michael Jordan. BARNARD, DUYGULU, FORSYTH, DE FREITAS, BLEI AND JORDAN scenes. We study in depth the difficult problem of measuring performance. For the annotation task, we look at prediction performance on held out data. We present three alternative measures, oriented toward different types of task. Measuring the performance of correspondence methods is harder, because one must determine whether a word has been placed on the right region of an image. We can use annotation performance as a proxy measure, but accurate measurement requires hand labeled data, and thus must occur on a smaller scale. We show results using both an annotation proxy, and manually labeled data.",
"title": ""
},
{
"docid": "pos:1840409_3",
"text": "Parameter set learned using all WMT12 data (Callison-Burch et al., 2012): • 100,000 binary rankings covering 8 language directions. •Restrict scoring for all languages to exact and paraphrase matching. Parameters encode human preferences that generalize across languages: •Prefer recall over precision. •Prefer word choice over word order. •Prefer correct translations of content words over function words. •Prefer exact matches over paraphrase matches, while still giving significant credit to paraphrases. Visualization",
"title": ""
}
] | [
{
"docid": "neg:1840409_0",
"text": "Universal, intelligent, and multifunctional devices controlling power distribution and measurement will become the enabling technology of the Smart Grid ICT. In this paper, we report on a novel automation architecture which supports distributed multiagent intelligence, interoperability, and configurability and enables efficient simulation of distributed automation systems. The solution is based on the combination of IEC 61850 object-based modeling and interoperable communication with IEC 61499 function block executable specification. Using the developed simulation environment, we demonstrate the possibility of multiagent control to achieve self-healing grid through collaborative fault location and power restoration.",
"title": ""
},
{
"docid": "neg:1840409_1",
"text": "The increasing popularity of social networks, such as Facebook and Orkut, has raised several privacy concerns. Traditional ways of safeguarding privacy of personal information by hiding sensitive attributes are no longer adequate. Research shows that probabilistic classification techniques can effectively infer such private information. The disclosed sensitive information of friends, group affiliations and even participation in activities, such as tagging and commenting, are considered background knowledge in this process. In this paper, we present a privacy protection tool, called Privometer, that measures the amount of sensitive information leakage in a user profile and suggests self-sanitization actions to regulate the amount of leakage. In contrast to previous research, where inference techniques use publicly available profile information, we consider an augmented model where a potentially malicious application installed in the user's friend profiles can access substantially more information. In our model, merely hiding the sensitive information is not sufficient to protect the user privacy. We present an implementation of Privometer in Facebook.",
"title": ""
},
{
"docid": "neg:1840409_2",
"text": "Storyline detection from news articles aims at summarizing events described under a certain news topic and revealing how those events evolve over time. It is a difficult task because it requires first the detection of events from news articles published in different time periods and then the construction of storylines by linking events into coherent news stories. Moreover, each storyline has different hierarchical structures which are dependent across epochs. Existing approaches often ignore the dependency of hierarchical structures in storyline generation. In this paper, we propose an unsupervised Bayesian model, called dynamic storyline detection model, to extract structured representations and evolution patterns of storylines. The proposed model is evaluated on a large scale news corpus. Experimental results show that our proposed model outperforms several baseline approaches.",
"title": ""
},
{
"docid": "neg:1840409_3",
"text": "In this paper we propose a new semi-supervised GAN architecture (ss-InfoGAN) for image synthesis that leverages information from few labels (as little as 0.22%, max. 10% of the dataset) to learn semantically meaningful and controllable data representations where latent variables correspond to label categories. The architecture builds on Information Maximizing Generative Adversarial Networks (InfoGAN) and is shown to learn both continuous and categorical codes and achieves higher quality of synthetic samples compared to fully unsupervised settings. Furthermore, we show that using small amounts of labeled data speeds-up training convergence. The architecture maintains the ability to disentangle latent variables for which no labels are available. Finally, we contribute an information-theoretic reasoning on how introducing semi-supervision increases mutual information between synthetic and real data.",
"title": ""
},
{
"docid": "neg:1840409_4",
"text": "This paper presents expressions for the waveforms and design equations to satisfy the ZVS/ZDS conditions in the class-E power amplifier, taking into account the MOSFET gate-to-drain linear parasitic capacitance and the drain-to-source nonlinear parasitic capacitance. Expressions are given for power output capability and power conversion efficiency. Design examples are presented along with the PSpice-simulation and experimental waveforms at 2.3 W output power and 4 MHz operating frequency. It is shown from the expressions that the slope of the voltage across the MOSFET gate-to-drain parasitic capacitance during the switch-off state affects the switch-voltage waveform. Therefore, it is necessary to consider the MOSFET gate-to-drain capacitance for achieving the class-E ZVS/ZDS conditions. As a result, the power output capability and the power conversion efficiency are also affected by the MOSFET gate-to-drain capacitance. The waveforms obtained from PSpice simulations and circuit experiments showed the quantitative agreements with the theoretical predictions, which verify the expressions given in this paper.",
"title": ""
},
{
"docid": "neg:1840409_5",
"text": "The development of MEMS actuators is rapidly evolving and continuously new progress in terms of efficiency, power and force output is reported. Pneumatic and hydraulic are an interesting class of microactuators that are easily overlooked. Despite the 20 years of research, and hundreds of publications on this topic, these actuators are only popular in microfluidic systems. In other MEMS applications, pneumatic and hydraulic actuators are rare in comparison with electrostatic, thermal or piezo-electric actuators. However, several studies have shown that hydraulic and pneumatic actuators deliver among the highest force and power densities at microscale. It is believed that this asset is particularly important in modern industrial and medical microsystems, and therefore, pneumatic and hydraulic actuators could start playing an increasingly important role. This paper shows an in-depth overview of the developments in this field ranging from the classic inflatable membrane actuators to more complex piston–cylinder and drag-based microdevices. (Some figures in this article are in colour only in the electronic version)",
"title": ""
},
{
"docid": "neg:1840409_6",
"text": "Estimation of the semantic relatedness between biomedical concepts has utility for many informatics applications. Automated methods fall into two broad categories: methods based on distributional statistics drawn from text corpora, and methods based on the structure of existing knowledge resources. In the former case, taxonomic structure is disregarded. In the latter, semantically relevant empirical information is not considered. In this paper, we present a method that retrofits the context vector representation of MeSH terms by using additional linkage information from UMLS/MeSH hierarchy such that linked concepts have similar vector representations. We evaluated the method relative to previously published physician and coder’s ratings on sets of MeSH terms. Our experimental results demonstrate that the retrofitted word vector measures obtain a higher correlation with physician judgments. The results also demonstrate a clear improvement on the correlation with experts’ ratings from the retrofitted vector representation in comparison to the vector representation without retrofitting.",
"title": ""
},
{
"docid": "neg:1840409_7",
"text": "Although short-range wireless communication explicitly targets local and regional applications, range continues to be a highly important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. The recent transceiver generation in particular comes with extensive and flexible support for software-defined radio (SDR). The SX127× family from Semtech Corp. is a member of this device class and promises significant benefits for range, robust performance, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies to support Long Range (LoRa™) and the corresponding Layer 2 protocol (LoRaWAN™). It particularly describes the possibility to combine the Internet Protocol, i.e. IPv6, into LoRaWAN™, so that it can be directly integrated into a full-fledged Internet of Things (IoT). The proposed solution, which we name 6LoRaWAN, has been implemented and tested; results of the experiments are also shown in this paper.",
"title": ""
},
{
"docid": "neg:1840409_8",
"text": "This paper will derive the Black-Scholes pricing model of a European option by calculating the expected value of the option. We will assume that the stock price is log-normally distributed and that the universe is riskneutral. Then, using Ito’s Lemma, we will justify the use of the risk-neutral rate in these initial calculations. Finally, we will prove put-call parity in order to price European put options, and extend the concepts of the Black-Scholes formula to value an option with pricing barriers.",
"title": ""
},
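As a quick companion to the Black-Scholes passage above, here is a short Python sketch of the closed-form European call price, with the put obtained via put-call parity. The symbols (S spot, K strike, r risk-free rate, sigma volatility, T maturity in years) follow the usual convention; the numbers in the example calls are arbitrary.

```python
# Sketch of the standard Black-Scholes formulas; the example inputs are arbitrary.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def black_scholes_put(S, K, r, sigma, T):
    # Put-call parity: P = C - S + K * exp(-r * T)
    return black_scholes_call(S, K, r, sigma, T) - S + K * exp(-r * T)

print(black_scholes_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0))
print(black_scholes_put(S=100, K=100, r=0.05, sigma=0.2, T=1.0))
```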
{
"docid": "neg:1840409_9",
"text": "Background\nAbsorbable suture suspension (Silhouette InstaLift, Sinclair Pharma, Irvine, CA) is a novel, minimally invasive system that utilizes a specially manufactured synthetic suture to help address the issues of facial aging, while minimizing the risks associated with historic thread lifting modalities.\n\n\nObjectives\nThe purpose of the study was to assess the safety, efficacy, and patient satisfaction of the absorbable suture suspension system in regards to facial rejuvenation and midface volume enhancement.\n\n\nMethods\nThe first 100 treated patients who underwent absorbable suture suspension, by the senior author, were critically evaluated. Subjects completed anonymous surveys evaluating their experience with the new modality.\n\n\nResults\nSurvey results indicate that absorbable suture suspension is a tolerable (96%) and manageable (89%) treatment that improves age related changes (83%), which was found to be in concordance with our critical review.\n\n\nConclusions\nAbsorbable suture suspension generates high patient satisfaction by nonsurgically lifting mid and lower face and neck skin and has the potential to influence numerous facets of aesthetic medicine. The study provides a greater understanding concerning patient selection, suture trajectory, and possible adjuvant therapies.\n\n\nLevel of Evidence 4",
"title": ""
},
{
"docid": "neg:1840409_10",
"text": "In this paper we present an approach of procedural game content generation that focuses on a gameplay loops formal language (GLFL). In fact, during an iterative game design process, game designers suggest modifications that often require high development costs. The proposed language and its operational semantic allow reducing the gap between game designers' requirement and game developers' needs, enhancing therefore video games productivity. Using gameplay loops concept for game content generation offers a low cost solution to adjust game challenges, objectives and rewards in video games. A pilot experiment have been conducted to study the impact of this approach on game development.",
"title": ""
},
{
"docid": "neg:1840409_11",
"text": "The phase response of noisy speech has largely been ignored, but recent research shows the importance of phase for perceptual speech quality. A few phase enhancement approaches have been developed. These systems, however, require a separate algorithm for enhancing the magnitude response. In this paper, we present a novel framework for performing monaural speech separation in the complex domain. We show that much structure is exhibited in the real and imaginary components of the short-time Fourier transform, making the complex domain appropriate for supervised estimation. Consequently, we define the complex ideal ratio mask (cIRM) that jointly enhances the magnitude and phase of noisy speech. We then employ a single deep neural network to estimate both the real and imaginary components of the cIRM. The evaluation results show that complex ratio masking yields high quality speech enhancement, and outperforms related methods that operate in the magnitude domain or separately enhance magnitude and phase.",
"title": ""
},
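Since the passage above defines the complex ideal ratio mask (cIRM) as a joint enhancement of magnitude and phase, a small NumPy sketch of how such a mask could be computed from paired clean and noisy STFTs may help. The epsilon regularizer and the uncompressed mask form are assumptions for illustration, not the exact recipe of the cited work.

```python
# Illustrative sketch: complex ideal ratio mask (cIRM) from paired clean/noisy STFTs.
import numpy as np

def cirm(clean_stft, noisy_stft, eps=1e-8):
    """Complex mask M such that M * noisy_stft approximately equals clean_stft."""
    # Complex division implements the real/imaginary coupling of the cIRM.
    return clean_stft / (noisy_stft + eps)

def apply_mask(mask, noisy_stft):
    return mask * noisy_stft

# Toy example with random complex spectrograms (freq_bins x frames).
rng = np.random.default_rng(0)
S = rng.standard_normal((257, 100)) + 1j * rng.standard_normal((257, 100))  # "clean"
N = 0.3 * (rng.standard_normal((257, 100)) + 1j * rng.standard_normal((257, 100)))
Y = S + N                                                                    # "noisy"
M = cirm(S, Y)
print(np.allclose(apply_mask(M, Y), S, atol=1e-4))  # reconstruction sanity check
```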
{
"docid": "neg:1840409_12",
"text": "The language of deaf and dumb which uses body parts to convey the message is known as sign language. Here, we are doing a study to convert speech into sign language used for conversation. In this area we have many developed method to recognize alphabets and numerals of ISL (Indian sign language). There are various approaches for recognition of ISL and we have done a comparative studies between them [1].",
"title": ""
},
{
"docid": "neg:1840409_13",
"text": "It is difficult to draw sweeping general conclusions about the blastogenesis of CT, principally because so few thoroughly studied cases are reported. It is to be hoped that methods such as painstaking gross or electronic dissection will increase the number of well-documented cases. Nevertheless, the following conclusions can be proposed: 1. Most CT can be classified into a few main anatomic types (or paradigms), and there are also rare transitional types that show gradation between the main types. 2. Most CT have two full notochordal axes (Fig. 5); the ventral organs induced along these axes may be severely disorientated, malformed, or aplastic in the process of being arranged within one body. Reported anatomic types of CT represent those notochordal arrangements that are compatible with reasonably complete embryogenesis. New ventro-lateral axes are formed in many types of CT because of space constriction in the ventral zones. The new structures represent areas of \"mutual recognition and organization\" rather than \"fusion\" (Fig. 17). 3. Orientations of the pairs of axes in the embryonic disc can be deduced from the resulting anatomy. Except for dicephalus, the axes are not side by side. Notochords are usually \"end-on\" or ventro-ventral in orientation (Fig. 5). 4. A single gastrulation event or only partial duplicated gastrulation event seems to occur in dicephalics, despite a full double notochord. 5. The anatomy of diprosopus requires further clarification, particularly in cases with complete crania rather than anencephaly-equivalent. Diprosopus CT offer the best opportunity to study the effects of true forking of the notochord, if this actually occurs. 6. In cephalothoracopagus, thoracopagus, and ischiopagus, remarkably complete new body forms are constructed at right angles to the notochordal axes. The extent of expression of viscera in these types depends on the degree of noncongruity of their ventro-ventral axes (Figs. 4, 11, 15b). 7. Some organs and tissues fail to develop (interaction aplasia) because of conflicting migrational pathways or abnormal concentrations of morphogens in and around the neoaxes. 8. Where the cardiovascular system is discordantly expressed in dicephalus and thoracopagus twins, the right heart is more severely malformed, depending on the degree of interaction of the two embryonic septa transversa. 9. The septum transversum provides mesenchymal components to the heawrt and liver; the epithelial components (derived fro the foregut[s]) may vary in number from the number of mesenchymal septa transversa contributing to the liver of the CT embryo.(ABSTRACT TRUNCATED AT 400 WORDS)",
"title": ""
},
{
"docid": "neg:1840409_14",
"text": "We study a dataset of billions of program binary files that appeared on 100 million computers over the course of 12 months, discovering that 94% of these files were present on a single machine. Though malware polymorphism is one cause for the large number of singleton files, additional factors also contribute to polymorphism, given that the ratio of benign to malicious singleton files is 80:1. The huge number of benign singletons makes it challenging to reliably identify the minority of malicious singletons. We present a large-scale study of the properties, characteristics, and distribution of benign and malicious singleton files. We leverage the insights from this study to build a classifier based purely on static features to identify 92% of the remaining malicious singletons at a 1.4% percent false positive rate, despite heavy use of obfuscation and packing techniques by most malicious singleton files that we make no attempt to de-obfuscate. Finally, we demonstrate robustness of our classifier to important classes of automated evasion attacks.",
"title": ""
},
{
"docid": "neg:1840409_15",
"text": "This paper presents all the stages of development of a solar tracker for a photovoltaic panel. The system was made with a microcontroller which was design as an embedded control. It has a data base of the angles of orientation horizontal axle, therefore it has no sensor inlet signal and it function as an open loop control system. Combined of above mention characteristics in one the tracker system is a new technique of the active type. It is also a rotational robot of 1 degree of freedom.",
"title": ""
},
{
"docid": "neg:1840409_16",
"text": "23% of the total global burden of disease is attributable to disorders in people aged 60 years and older. Although the proportion of the burden arising from older people (≥60 years) is highest in high-income regions, disability-adjusted life years (DALYs) per head are 40% higher in low-income and middle-income regions, accounted for by the increased burden per head of population arising from cardiovascular diseases, and sensory, respiratory, and infectious disorders. The leading contributors to disease burden in older people are cardiovascular diseases (30·3% of the total burden in people aged 60 years and older), malignant neoplasms (15·1%), chronic respiratory diseases (9·5%), musculoskeletal diseases (7·5%), and neurological and mental disorders (6·6%). A substantial and increased proportion of morbidity and mortality due to chronic disease occurs in older people. Primary prevention in adults aged younger than 60 years will improve health in successive cohorts of older people, but much of the potential to reduce disease burden will come from more effective primary, secondary, and tertiary prevention targeting older people. Obstacles include misplaced global health priorities, ageism, the poor preparedness of health systems to deliver age-appropriate care for chronic diseases, and the complexity of integrating care for complex multimorbidities. Although population ageing is driving the worldwide epidemic of chronic diseases, substantial untapped potential exists to modify the relation between chronological age and health. This objective is especially important for the most age-dependent disorders (ie, dementia, stroke, chronic obstructive pulmonary disease, and vision impairment), for which the burden of disease arises more from disability than from mortality, and for which long-term care costs outweigh health expenditure. The societal cost of these disorders is enormous.",
"title": ""
},
{
"docid": "neg:1840409_17",
"text": "Question-Answering Bulletin Boards (QABB), such as Yahoo! Answers and Windows Live QnA, are gaining popularity recently. Communications on QABB connect users, and the overall connections can be regarded as a social network. If the evolution of social networks can be predicted, it is quite useful for encouraging communications among users. This paper describes an improved method for predicting links based on weighted proximity measures of social networks. The method is based on an assumption that proximities between nodes can be estimated better by using both graph proximity measures and the weights of existing links in a social network. In order to show the effectiveness of our method, the data of Yahoo! Chiebukuro (Japanese Yahoo! Answers) are used for our experiments. The results show that our method outperforms previous approaches, especially when target social networks are sufficiently dense.",
"title": ""
},
{
"docid": "neg:1840409_18",
"text": "Spreadsheets contain valuable data on many topics, but they are difficult to integrate with other sources. Converting spreadsheet data to the relational model would allow relational integration tools to be used, but using manual methods to do this requires large amounts of work for each integration candidate. Automatic data extraction would be useful but it is very challenging: spreadsheet designs generally requires human knowledge to understand the metadata being described. Even if it is possible to obtain this metadata information automatically, a single mistake can yield an output relation with a huge number of incorrect tuples. We propose a two-phase semiautomatic system that extracts accurate relational metadata while minimizing user effort. Based on conditional random fields (CRFs), our system enables downstream spreadsheet integration applications. First, the automatic extractor uses hints from spreadsheets’ graphical style and recovered metadata to extract the spreadsheet data as accurately as possible. Second, the interactive repair component identifies similar regions in distinct spreadsheets scattered across large spreadsheet corpora, allowing a user’s single manual repair to be amortized over many possible extraction errors. Through our method of integrating the repair workflow into the extraction system, a human can obtain the accurate extraction with just 31% of the manual operations required by a standard classification based technique. We demonstrate and evaluate our system using two corpora: more than 1,000 spreadsheets published by the US government and more than 400,000 spreadsheets downloaded from the Web.",
"title": ""
},
{
"docid": "neg:1840409_19",
"text": "Music consists of precisely patterned sequences of both movement and sound that engage the mind in a multitude of experiences. We move in response to music and we move in order to make music. Because of the intimate coupling between perception and action, music provides a panoramic window through which we can examine the neural organization of complex behaviors that are at the core of human nature. Although the cognitive neuroscience of music is still in its infancy, a considerable behavioral and neuroimaging literature has amassed that pertains to neural mechanisms that underlie musical experience. Here we review neuroimaging studies of explicit sequence learning and temporal production—findings that ultimately lay the groundwork for understanding how more complex musical sequences are represented and produced by the brain. These studies are also brought into an existing framework concerning the interaction of attention and time-keeping mechanisms in perceiving complex patterns of information that are distributed in time, such as those that occur in music.",
"title": ""
}
] |
1840410 | Recent Advance in Content-based Image Retrieval: A Literature Survey | [
{
"docid": "pos:1840410_0",
"text": "Bag-of-Words (BoW) model based on SIFT has been widely used in large scale image retrieval applications. Feature quantization plays a crucial role in BoW model, which generates visual words from the high dimensional SIFT features, so as to adapt to the inverted file structure for indexing. Traditional feature quantization approaches suffer several problems: 1) high computational cost---visual words generation (codebook construction) is time consuming especially with large amount of features; 2) limited reliability---different collections of images may produce totally different codebooks and quantization error is hard to be controlled; 3) update inefficiency--once the codebook is constructed, it is not easy to be updated. In this paper, a novel feature quantization algorithm, scalar quantization, is proposed. With scalar quantization, a SIFT feature is quantized to a descriptive and discriminative bit-vector, of which the first tens of bits are taken out as code word. Our quantizer is independent of collections of images. In addition, the result of scalar quantization naturally lends itself to adapt to the classic inverted file structure for image indexing. Moreover, the quantization error can be flexibly reduced and controlled by efficiently enumerating nearest neighbors of code words.\n The performance of scalar quantization has been evaluated in partial-duplicate Web image search on a database of one million images. Experiments reveal that the proposed scalar quantization achieves a relatively 42% improvement in mean average precision over the baseline (hierarchical visual vocabulary tree approach), and also outperforms the state-of-the-art Hamming Embedding approach and soft assignment method.",
"title": ""
},
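To make the scalar quantization idea above concrete, here is a small NumPy sketch that binarizes a SIFT descriptor and takes its first bits as the inverted-file code word. The per-descriptor median threshold and the 32-bit code-word length are illustrative assumptions rather than the exact scheme of the paper.

```python
# Illustrative sketch of scalar quantization of a SIFT descriptor to a bit-vector.
import numpy as np

def scalar_quantize(sift, code_bits=32):
    """Binarize a 128-D SIFT descriptor; return (full bit-vector, integer code word)."""
    threshold = np.median(sift)              # assumed per-descriptor threshold
    bits = (sift > threshold).astype(np.uint8)
    # The first `code_bits` bits act as the code word used to index the inverted file.
    code_word = int("".join(map(str, bits[:code_bits])), 2)
    return bits, code_word

def hamming(bits_a, bits_b):
    # Hamming distance between two bit-vectors, used to verify candidate matches.
    return int(np.count_nonzero(bits_a != bits_b))

rng = np.random.default_rng(1)
desc_a = rng.integers(0, 256, 128).astype(np.float32)
desc_b = desc_a + rng.normal(0, 5, 128).astype(np.float32)   # a slightly perturbed copy
bits_a, code_a = scalar_quantize(desc_a)
bits_b, code_b = scalar_quantize(desc_b)
print(code_a, code_b, hamming(bits_a, bits_b))
```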
{
"docid": "pos:1840410_1",
"text": "Effective and efficient generation of keypoints from an image is a well-studied problem in the literature and forms the basis of numerous Computer Vision applications. Established leaders in the field are the SIFT and SURF algorithms which exhibit great performance under a variety of image transformations, with SURF in particular considered as the most computationally efficient amongst the high-performance methods to date. In this paper we propose BRISK1, a novel method for keypoint detection, description and matching. A comprehensive evaluation on benchmark datasets reveals BRISK's adaptive, high quality performance as in state-of-the-art algorithms, albeit at a dramatically lower computational cost (an order of magnitude faster than SURF in cases). The key to speed lies in the application of a novel scale-space FAST-based detector in combination with the assembly of a bit-string descriptor from intensity comparisons retrieved by dedicated sampling of each keypoint neighborhood.",
"title": ""
}
] | [
{
"docid": "neg:1840410_0",
"text": "We study a stock dealer’s strategy for submitting bid and ask quotes in a limit order book. The agent faces an inventory risk due to the diffusive nature of the stock’s mid-price and a transactions risk due to a Poisson arrival of market buy and sell orders. After setting up the agent’s problem in a maximal expected utility framework, we derive the solution in a two step procedure. First, the dealer computes a personal indifference valuation for the stock, given his current inventory. Second, he calibrates his bid and ask quotes to the market’s limit order book. We compare this ”inventory-based” strategy to a ”naive” best bid/best ask strategy by simulating stock price paths and displaying the P&L profiles of both strategies. We find that our strategy has a P&L profile that has both a higher return and lower variance than the benchmark strategy.",
"title": ""
},
{
"docid": "neg:1840410_1",
"text": "In this paper we study the problem of learning Rectified Linear Units (ReLUs) which are functions of the form x ↦ max(0, ⟨w,x⟩) with w ∈ R denoting the weight vector. We study this problem in the high-dimensional regime where the number of observations are fewer than the dimension of the weight vector. We assume that the weight vector belongs to some closed set (convex or nonconvex) which captures known side-information about its structure. We focus on the realizable model where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to a planted weight vector. We show that projected gradient descent, when initialized at 0, converges at a linear rate to the planted model with a number of samples that is optimal up to numerical constants. Our results on the dynamics of convergence of these very shallow neural nets may provide some insights towards understanding the dynamics of deeper architectures.",
"title": ""
},
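The passage above studies projected gradient descent for learning a ReLU in the realizable Gaussian model. The sketch below illustrates that procedure with NumPy, using squared loss and hard thresholding (projection onto k-sparse vectors) as one example of a structured constraint set; the step size, sparsity level, and problem sizes are arbitrary choices, and no claim is made about matching the paper's guarantees.

```python
# Illustrative sketch: projected (sub)gradient descent for labels y = relu(<w*, x>) with Gaussian x.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 80, 200, 10                    # fewer observations than dimensions, k-sparse planted vector
w_star = np.zeros(d)
w_star[:k] = rng.standard_normal(k)
X = rng.standard_normal((n, d))
y = np.maximum(X @ w_star, 0.0)          # realizable, noise-free labels

def project_sparse(w, k):
    # Projection onto the (nonconvex) set of k-sparse vectors: keep the k largest entries.
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-k:]
    out[idx] = w[idx]
    return out

w = np.zeros(d)                          # initialize at 0, as in the passage
step = 0.5
for _ in range(500):
    pred = np.maximum(X @ w, 0.0)
    # Subgradient of (1/2n) * ||relu(Xw) - y||^2; the indicator uses >= 0 so the first step is nonzero.
    grad = X.T @ ((pred - y) * (X @ w >= 0.0)) / n
    w = project_sparse(w - step * grad, k)

print("recovery error:", np.linalg.norm(w - w_star))
```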
{
"docid": "neg:1840410_2",
"text": "Characterization of driving maneuvers or driving styles through motion sensors has become a field of great interest. Before now, this characterization used to be carried out with signals coming from extra equipment installed inside the vehicle, such as On-Board Diagnostic (OBD) devices or sensors in pedals. Nowadays, with the evolution and scope of smartphones, these have become the devices for recording mobile signals in many driving characterization applications. Normally multiple available sensors are used, such as accelerometers, gyroscopes, magnetometers or the Global Positioning System (GPS). However, using sensors such as GPS increase significantly battery consumption and, additionally, many current phones do not include gyroscopes. Therefore, we propose the characterization of driving style through only the use of smartphone accelerometers. We propose a deep neural network (DNN) architecture that combines convolutional and recurrent networks to estimate the vehicle movement direction (VMD), which is the forward movement directional vector captured in a phone's coordinates. Once VMD is obtained, multiple applications such as characterizing driving styles or detecting dangerous events can be developed. In the development of the proposed DNN architecture, two different methods are compared. The first one is based on the detection and classification of significant acceleration driving forces, while the second one relies on longitudinal and transversal signals derived from the raw accelerometers. The final success rate of VMD estimation for the best method is of 90.07%.",
"title": ""
},
{
"docid": "neg:1840410_3",
"text": "A hypothesized need to form and maintain strong, stable interpersonal relationships is evaluated in light of the empirical literature. The need is for frequent, nonaversive interactions within an ongoing relational bond. Consistent with the belongingness hypothesis, people form social attachments readily under most conditions and resist the dissolution of existing bonds. Belongingness appears to have multiple and strong effects on emotional patterns and on cognitive processes. Lack of attachments is linked to a variety of ill effects on health, adjustment, and well-being. Other evidence, such as that concerning satiation, substitution, and behavioral consequences, is likewise consistent with the hypothesized motivation. Several seeming counterexamples turned out not to disconfirm the hypothesis. Existing evidence supports the hypothesis that the need to belong is a powerful, fundamental, and extremely pervasive motivation.",
"title": ""
},
{
"docid": "neg:1840410_4",
"text": "The development of e-commerce has increased the popularity of online shopping worldwide. In Malaysia, it was reported that online shopping market size was RM1.8 billion in 2013 and it is estimated to reach RM5 billion by 2015. However, online shopping was rated 11 th out of 15 purposes of using internet in 2012. Consumers’ perceived risks of online shopping becomes a hot topic to research as it will directly influence users’ attitude towards online purchasing, and their attitude will have significant impact to the online purchasing behaviour. The conceptualization of consumers’ perceived risk, attitude and online shopping behaviour of this study provides empirical evidence in the study of consumer online behaviour. Four types of risks product risk, financial, convenience and non-delivery risks were examined in term of their effect on consumers’ online attitude. A web-based survey was employed, and a total of 300 online shoppers of a Malaysia largest online marketplace participated in this study. The findings indicated that product risk, financial and non-delivery risks are hazardous and negatively affect the attitude of online shoppers. Convenience risk was found to have positive effect on consumers’ attitude, denoting that online buyers of this site trusted the online seller and they encountered less troublesome with the site. It also implies that consumers did not really concern on non-convenience aspect of online shopping, such as handling of returned products and examine the quality of products featured in the online seller website. The online buyers’ attitude was significantly and positively affects their online purchasing behaviour. The findings provide useful model for measuring and managing consumers’ perceived risk in internet-based transaction to increase their involvement in online shopping and to reduce their cognitive dissonance in the e-commerce setting.",
"title": ""
},
{
"docid": "neg:1840410_5",
"text": "This review presents one of the eight theories of the quality of life (QOL) used for making the SEQOL (self-evaluation of quality of life) questionnaire or the quality of life as realizing life potential. This theory is strongly inspired by Maslow and the review furthermore serves as an example on how to fulfill the demand for an overall theory of life (or philosophy of life), which we believe is necessary for global and generic quality-of-life research. Whereas traditional medical science has often been inspired by mechanical models in its attempts to understand human beings, this theory takes an explicitly biological starting point. The purpose is to take a close view of life as a unique entity, which mechanical models are unable to do. This means that things considered to be beyond the individual's purely biological nature, notably the quality of life, meaning in life, and aspirations in life, are included under this wider, biological treatise. Our interpretation of the nature of all living matter is intended as an alternative to medical mechanism, which dates back to the beginning of the 20th century. New ideas such as the notions of the human being as nestled in an evolutionary and ecological context, the spontaneous tendency of self-organizing systems for realization and concord, and the central role of consciousness in interpreting, planning, and expressing human reality are unavoidable today in attempts to scientifically understand all living matter, including human life.",
"title": ""
},
{
"docid": "neg:1840410_6",
"text": "Social media is playing an increasingly important role as the sources of health related information. The goal of this study is to investigate the extent social media appear in search engine results in the context of health-related information search. We simulate an information seeker’s use of a search engine for health consultation using a set of pre-defined keywords in combination with 5 types of complaints. The results showed that social media constitute a significant part of the search results, indicating that search engines likely direct information seekers to social media sites. This study confirms the growing importance of social media in health communication. It also provides evidence regarding opportunities and challenges faced by health professionals and general public.",
"title": ""
},
{
"docid": "neg:1840410_7",
"text": "First impressions influence the behavior of people towards a newly encountered person or a human-like agent. Apart from the physical characteristics of the encountered face, the emotional expressions displayed on it, as well as ambient information affect these impressions. In this work, we propose an approach to predict the first impressions people will have for a given video depicting a face within a context. We employ pre-trained Deep Convolutional Neural Networks to extract facial expressions, as well as ambient information. After video modeling, visual features that represent facial expression and scene are combined and fed to Kernel Extreme Learning Machine regressor. The proposed system is evaluated on the ChaLearn Challenge Dataset on First Impression Recognition, where the classification target is the ”Big Five” personality trait labels for each video. Our system achieved an accuracy of 90.94% on the sequestered test set, 0.36% points below the top system in the competition.",
"title": ""
},
{
"docid": "neg:1840410_8",
"text": "ed Log Lines Categorize Bins Figure 3. High-level overview of our approach for abstracting execution logs to execution events. Table III. Log lines used as a running example to explain our approach. 1. Start check out 2. Paid for, item=bag, quality=1, amount=100 3. Paid for, item=book, quality=3, amount=150 4. Check out, total amount is 250 5. Check out done Copyright q 2008 John Wiley & Sons, Ltd. J. Softw. Maint. Evol.: Res. Pract. 2008; 20:249–267 DOI: 10.1002/smr AN AUTOMATED APPROACH FOR ABSTRACTING EXECUTION LOGS 257 Table IV. Running example logs after the anonymize step. 1. Start check out 2. Paid for, item=$v, quality=$v, amount=$v 3. Paid for, item=$v, quality=$v, amount=$v 4. Check out, total amount=$v 5. Check out done Table V. Running example logs after the tokenize step. Bin names (no. of words, no. of parameters) Log lines (3,0) 1. Start check out 5. Check out done (5,1) 4. Check out, total amount=$v (8,3) 2. Paid for, item=$v, quality=$v, amount=$v 3. Paid for, item=$v, quality=$v, amount=$v 4.2.2. The tokenize step The tokenize step separates the anonymized log lines into different groups (i.e., bins) according to the number of words and estimated parameters in each log line. The use of multiple bins limits the search space of the following step (i.e., the categorize step). The use of bins permits us to process large log files in a timely fashion using a limited memory footprint since the analysis is done per bin instead of having to load up all the lines in the log file. We estimate the number of parameters in a log line by counting the number of generic terms (i.e., $v). Log lines with the same number of tokens and parameters are placed in the same bin. Table V shows the sample log lines after the anonymize and tokenize steps. The left column indicates the name of a bin. Each bin is named with a tuple: number of words and number of parameters that are contained in the log line associated with that bin. The right column in Table VI shows the log lines. Each row shows the bin and its corresponding log lines. The second and the third log lines contain 8 words and are likely to contain 3 parameters. Thus, the second and third log lines are grouped together in the (8,3) bin. Similarly, the first and last log lines are grouped together in the (3,0) bin since they both contain 3 words and are likely to contain no parameters. 4.2.3. The categorize step The categorize step compares log lines in each bin and abstracts them to the corresponding execution events. The inferred execution events are stored in an execution events database for future references. The algorithm used in the categorize step is shown below. Our algorithm goes through the log lines Copyright q 2008 John Wiley & Sons, Ltd. J. Softw. Maint. Evol.: Res. Pract. 2008; 20:249–267 DOI: 10.1002/smr 258 Z. M. JIANG ET AL. Table VI. Running example logs after the categorize step. Execution events (word parameter id) Log lines 3 0 1 1. Start check out 3 0 2 5. Check out done 5 1 1 4. Check out, total amount=$v 8 3 1 2. Paid for, item=$v, quality=$v, amount=$v 8 3 1 3. Paid for, item=$v, quality=$v, amount=$v bin by bin. After this step, each log line should be abstracted to an execution event. Table VI shows the results of our working example after the categorize step. 
for each bin bi for each log line lk in bin bi for each execution event e(bi , j) corresponding to bi in the events DB perform word by word comparison between e(bi , j) and lk if (there is no difference) then lk is of type e(bi , j) break end if end for // advance to next e(bi , j) if ( lk does not have a matching execution event) then lk is a new execution event store an abstracted lk into the execution events DB end if end for // advance to the next log line end for // advance to the next bin We now explain our algorithm using the running example. Our algorithm starts with the (3,0) bin. Initially, there are no execution events that correspond to this bin yet. Therefore, the execution event corresponding to the first log line becomes the first execution event namely 3 0 1. The 1 at the end of 3 0 1 indicates that this is the first execution event to correspond to the bin, which has 3 words and no parameters (i.e., bin 3 0). Then the algorithm moves to the next log line in the (3,0) bin, which contains the fifth log line. The algorithm compares the fifth log line with all the existing execution events in the (3,0) bin. Currently, there is only one execution event: 3 0 1. As the fifth log line is not similar to the 3 0 1 execution event, we create a new execution event 3 0 2 for the fifth log line. With all the log lines in the (3,0) bin processed, we can move on to the (5,1) bin. As there are no execution events that correspond to the (5,1) bin initially, the fourth log line gets assigned to a new execution event 5 1 1. Finally, we move on to the (8,3) bin. First, the second log line gets assigned with a new execution event 8 3 1 since there are no execution events corresponding to this bin yet. As the third log line is the same as the second log line (after the anonymize step), the third log line is categorized as the same execution event as the second log Copyright q 2008 John Wiley & Sons, Ltd. J. Softw. Maint. Evol.: Res. Pract. 2008; 20:249–267 DOI: 10.1002/smr AN AUTOMATED APPROACH FOR ABSTRACTING EXECUTION LOGS 259 line. Table VI shows the sample log lines after the categorize step. The left column is the abstracted execution event. The right column shows the line number together with the corresponding log lines. 4.2.4. The reconcile step Since the anonymize step uses heuristics to identify dynamic information in a log line, there is a chance that we might miss to anonymize some dynamic information. The missed dynamic information will result in the abstraction of several log lines to several execution events that are very similar. Table VII shows an example of dynamic information that was missed by the anonymize step. The table shows five different execution events. However, the user names after ‘for user’ are dynamic information and should have been replaced by the generic token ‘$v’. All the log lines shown in Table VII should have been abstracted to the same execution event after the categorize step. The reconcile step addresses this situation. All execution events are re-examined to identify which ones are to be merged. Execution events are merged if: 1. They belong to the same bin. 2. They differ from each other by one token at the same positions. 3. There exists a few of such execution events. We used a threshold of five events in our case studies. Other values are possibly based on the content of the analyzed log files. The threshold prevents the merging of similar yet different execution events, such as ‘Start processing’ and ‘Stop processing’, which should not be merged. 
Looking at the execution events in Table VII, we note that they all belong to the ‘5 0’ bin and differ from each other only in the last token. Since there are five of such events, we merged them into one event. Table VIII shows the execution events from Table VII after the reconcile step. Note that if the ‘5 0’ bin contains another execution event: ‘Stop processing for user John’; it will not be merged with the above execution events since it differs by two tokens instead of only the last token. Table VII. Sample logs that the categorize step would fail to abstract. Event IDs Execution events 5 0 1 Start processing for user Jen 5 0 2 Start processing for user Tom 5 0 3 Start processing for user Henry 5 0 4 Start processing for user Jack 5 0 5 Start processing for user Peter Table VIII. Sample logs after the reconcile step. Event IDs Execution events 5 0 1 Start processing for user $v Copyright q 2008 John Wiley & Sons, Ltd. J. Softw. Maint. Evol.: Res. Pract. 2008; 20:249–267 DOI: 10.1002/smr 260 Z. M. JIANG ET AL.",
"title": ""
},
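The passage above walks through an anonymize/tokenize/categorize pipeline for abstracting log lines to execution events; a compact Python sketch of that flow is given below. The regular expression used for anonymization and the event-naming scheme are simplifications assumed for illustration, not the exact heuristics of the cited approach.

```python
# Illustrative sketch of the anonymize / tokenize / categorize steps described above.
import re
from collections import defaultdict

def anonymize(line):
    # Replace values that follow '=' or 'is' with the generic token $v (assumed heuristic).
    return re.sub(r"(=|\bis\b)\s*\S+", r"\1$v", line)

def bin_key(line):
    words = line.split()
    return (len(words), sum(w.endswith("$v") for w in words))   # (no. of words, no. of parameters)

def categorize(lines):
    events = defaultdict(list)          # bin key -> list of known event templates
    assignment = []
    for line in lines:
        anon = anonymize(line)
        key = bin_key(anon)
        if anon not in events[key]:     # word-by-word comparison collapses to string equality here
            events[key].append(anon)
        event_id = f"{key[0]}_{key[1]}_{events[key].index(anon) + 1}"
        assignment.append((line, event_id))
    return assignment

logs = ["Start check out",
        "Paid for, item=bag, quality=1, amount=100",
        "Paid for, item=book, quality=3, amount=150",
        "Check out, total amount is 250",
        "Check out done"]
for raw, event in categorize(logs):
    print(event, "<-", raw)
```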
{
"docid": "neg:1840410_9",
"text": "Like every other social practice, journalism cannot now fully be understood apart from globalization. As part of a larger platform of communication media, journalism contributes to this experience of the world-as-a-single-place and thus represents a key component in these social transformations, both as cause and outcome. These issues at the intersection of journalism and globalization define an important and growing field of research, particularly concerning the public sphere and spaces for political discourse. In this essay, I review this intersection of journalism and globalization by considering the communication field’s approach to ‘media globalization’ within a broader interdisciplinary perspective that mixes the sociology of globalization with aspects of geography and social anthropology. By placing the emphasis on social practices, elites, and specific geographical spaces, I introduce a less media-centric approach to media globalization and how journalism fits into the process. Beyond ‘global village journalism,’ this perspective captures the changes globalization has brought to journalism. Like every other social practice, journalism cannot now fully be understood apart from globalization. This process refers to the intensification of social interconnections, which allows apprehending the world as a single place, creating a greater awareness of our own place and its relative location within the range of world experience. As part of a larger platform of communication media, journalism contributes to this experience and thus represents a key component in these social transformations, both as cause and outcome. These issues at the intersection of journalism and globalization define an important and growing field of research, particularly concerning the public sphere and spaces for political discourse. The study of globalization has become a fashionable growth industry, attracting an interdisciplinary assortment of scholars. Journalism, meanwhile, itself has become an important subject in its own right within media studies, with a growing number of projects taking an international perspective (reviewed in Reese 2009). Combining the two areas yields a complex subject that requires some careful sorting out to get beyond the jargon and the easy country–by-country case studies. From the globalization studies side, the media role often seems like an afterthought, a residual category of social change, or a self-evident symbol of the global era–CNN, for example. Indeed, globalization research has been slower to consider the changing role of journalism, compared to the attention devoted to financial and entertainment flows. That may be expected, given that economic and cultural globalization is further along than that of politics, and journalism has always been closely tied to democratic structures, many of which are inherently rooted in local communities. The media-centrism of communication research, on the other hand, may give the media—and the journalism associated with them—too much credit in the globalization process, treating certain media as the primary driver of global connections and the proper object of study. Global connections support new forms of journalism, which create politically significant new spaces within social systems, lead to social change, and privilege certain forms Sociology Compass 4/6 (2010): 344–353, 10.1111/j.1751-9020.2010.00282.x a 2010 The Author Journal Compilation a 2010 Blackwell Publishing Ltd of power. 
Therefore, we want to know how journalism has contributed to these new spaces, bringing together new combinations of transnational élites, media professionals, and citizens. To what extent are these interactions shaped by a globally consistent shared logic, and what are the consequences for social change and democratic values? Here, however, the discussion often gets reduced to whether a cultural homogenization is taking place, supporting a ‘McWorld’ thesis of a unitary media and journalistic form. But we do not have to subscribe to a one-world media monolith prediction to expect certain transnational logics to emerge to take their place along side existing ones. Journalism at its best contributes to social transparency, which is at the heart of the globalization optimists’ hopes for democracy (e.g. Giddens 2000). The insertion of these new logics into national communities, especially those closed or tightly controlled societies, can bring an important impulse for social change (seen in a number of case studies from China, as in Reese and Dai 2009). In this essay, I will review a few of the issues at the intersection of journalism and globalization and consider a more nuanced view of media within a broader network of actors, particularly in the case of journalism as it helps create emerging spaces for public affairs discourse. Understanding the complex interplay of the global and local requires an interdisciplinary perspective, mixing the sociology of globalization with aspects of geography and social anthropology. This helps avoid equating certain emerging global news forms with a new and distinct public sphere. The globalization of journalism occurs through a multitude of levels, relationships, social actors, and places, as they combine to create new public spaces. Communication research may bring journalism properly to the fore, but it must be considered within the insights into places and relationships provided by these other disciplines. Before addressing these questions, it is helpful to consider how journalism has figured into some larger debates. Media Globalization: Issues of Scale and Homogeneity One major fault line lies within the broader context of ‘media,’ where journalism has been seen as providing flows of information and transnational connections. That makes it a key factor in the phenomenon of ‘media globalization.’ McLuhan gave us the enduring image of the ‘global village,’ a quasi-utopian idea that has seeped into such theorizing about the contribution of media. The metaphor brings expectations of an extensive, unitary community, with a corresponding set of universal, global values, undistorted by parochial interests and propaganda. The interaction of world media systems, however, has not as of yet yielded the kind of transnational media and programs that would support such ‘village’-worthy content (Ferguson 1992; Sparks 2007). In fact, many of the communication barriers show no signs of coming down, with many specialized enclaves becoming stronger. In this respect, changes in media reflect the larger crux of globalization that it simultaneously facilitates certain ‘monoculture’ global standards along with the proliferation of a host of micro-communities that were not possible before. In a somewhat analogous example, the global wine trade has led to convergent trends in internationally desirable tastes but also allowed a number of specialized local wineries to survive and flourish through the ability to reach global markets. 
The very concept of ‘media globalization’ suggests that we are not quite sure if media lead to globalization or are themselves the result of it. In any case, giving the media a privileged place in shaping a globalized future has led to high expectations for international journalism, satellite television, and other media to provide a workable global public sphere, making them an easy target if they come up short. In his book, Media globalization myth, Kai Hafez (2007) provides that kind of attack. Certainly, much of the discussion has suffered from overly optimistic and under-conceptualized research, with global media technology being a ‘necessary but not sufficient condition for global communication.’ (p. 2) Few truly transnational media forms have emerged that have a more supranational than national allegiance (among newspapers, the International Herald Tribune, Wall St. Journal Europe, Financial Times), and among transnational media even CNN does not present a single version to the world, split as it is into various linguistic viewer zones. Defining cross-border communication as the ‘core phenomenon’ of globalization leads to comparing intra- to inter-national communication as the key indicator of globalization. For example, Hafez rejects the internet as a global system of communication, because global connectivity does not exceed local and regional connections. With that as a standard, we may indeed conclude that media globalization has failed to produce true transnational media platforms or dialogs across boundaries. Rather a combination of linguistic and digital divides, along with enduring regional preferences, actually reinforces some boundaries. (The wishful thinking for a global media may be tracked to highly mobile Western scholars, who in Hafez’s ‘hotel thesis’ overestimate the role of such transnational media, because they are available to them in their narrow and privileged travel circles.) Certainly, the foreign news most people receive, even about big international events, is domesticated through the national journalistic lens. Indeed, international reporting, as a key component of the would-be global public sphere, flunks Hafez’s ‘global test,’ incurring the same criticisms others have leveled for years at national journalism: elite-focused, conflictual, and sensational, with a narrow, parochial emphasis. If ‘global’ means giving ‘dialogic’ voices a chance to speak to each other without reproducing national ethnocentrism, then the world’s media still fail to measure up. Conceptualizing the ‘Global’ For many, ‘global’ means big. That goes too for the global village perspective, which emphasizes the scaling dimension and equates the global with ‘bigness,’ part of a nested hierarchy of levels of analysis based on size: beyond local, regional, and nationa",
"title": ""
},
{
"docid": "neg:1840410_10",
"text": "With the advance of the World-Wide Web (WWW) technology, people can easily share content on the Web, including geospatial data and web services. Thus, the “big geospatial data management” issues start attracting attention. Among the big geospatial data issues, this research focuses on discovering distributed geospatial resources. As resources are scattered on the WWW, users cannot find resources of their interests efficiently. While the WWW has Web search engines addressing web resource discovery issues, we envision that the geospatial Web (i.e., GeoWeb) also requires GeoWeb search engines. To realize a GeoWeb search engine, one of the first steps is to proactively discover GeoWeb resources on the WWW. Hence, in this study, we propose the GeoWeb Crawler, an extensible Web crawling framework that can find various types of GeoWeb resources, such as Open Geospatial Consortium (OGC) web services, Keyhole Markup Language (KML) and Environmental Systems Research Institute, Inc (ESRI) Shapefiles. In addition, we apply the distributed computing concept to promote the performance of the GeoWeb Crawler. The result shows that for 10 targeted resources types, the GeoWeb Crawler discovered 7351 geospatial services and 194,003 datasets. As a result, the proposed GeoWeb Crawler framework is proven to be extensible and scalable to provide a comprehensive index of GeoWeb.",
"title": ""
},
{
"docid": "neg:1840410_11",
"text": "Gesture typing is an efficient input method for phones and tablets using continuous traces created by a pointed object (e.g., finger or stylus). Translating such continuous gestures into textual input is a challenging task as gesture inputs exhibit many features found in speech and handwriting such as high variability, co-articulation and elision. In this work, we address these challenges with a hybrid approach, combining a variant of recurrent networks, namely Long Short Term Memories [1] with conventional Finite State Transducer decoding [2]. Results using our approach show considerable improvement relative to a baseline shape-matching-based system, amounting to 4% and 22% absolute improvement respectively for small and large lexicon decoding on real datasets and 2% on a synthetic large scale dataset.",
"title": ""
},
{
"docid": "neg:1840410_12",
"text": "The multiprotein mTORC1 protein kinase complex is the central component of a pathway that promotes growth in response to insulin, energy levels, and amino acids and is deregulated in common cancers. We find that the Rag proteins--a family of four related small guanosine triphosphatases (GTPases)--interact with mTORC1 in an amino acid-sensitive manner and are necessary for the activation of the mTORC1 pathway by amino acids. A Rag mutant that is constitutively bound to guanosine triphosphate interacted strongly with mTORC1, and its expression within cells made the mTORC1 pathway resistant to amino acid deprivation. Conversely, expression of a guanosine diphosphate-bound Rag mutant prevented stimulation of mTORC1 by amino acids. The Rag proteins do not directly stimulate the kinase activity of mTORC1, but, like amino acids, promote the intracellular localization of mTOR to a compartment that also contains its activator Rheb.",
"title": ""
},
{
"docid": "neg:1840410_13",
"text": "Advances in online and computer supported education afford exciting opportunities to revolutionize the classroom, while also presenting a number of new challenges not faced in traditional educational settings. Foremost among these challenges is the problem of accurately and efficiently evaluating learner work as the class size grows, which is directly related to the larger goal of providing quality, timely, and actionable formative feedback. Recently there has been a surge in interest in using peer grading methods coupled with machine learning to accurately and fairly evaluate learner work while alleviating the instructor bottleneck and grading overload. Prior work in peer grading almost exclusively focuses on numerically scored grades -- either real-valued or ordinal. In this work, we consider the implications of peer ranking in which learners rank a small subset of peer work from strongest to weakest, and propose new types of computational analyses that can be applied to this ranking data. We adopt a Bayesian approach to the ranked peer grading problem and develop a novel model and method for utilizing ranked peer-grading data. We additionally develop a novel procedure for adaptively identifying which work should be ranked by particular peers in order to dynamically resolve ambiguity in the data and rapidly resolve a clearer picture of learner performance. We showcase our results on both synthetic and several real-world educational datasets.",
"title": ""
},
{
"docid": "neg:1840410_14",
"text": "We present an accurate stereo matching method using <italic>local expansion moves</italic> based on graph cuts. This new move-making scheme is used to efficiently infer per-pixel 3D plane labels on a pairwise Markov random field (MRF) that effectively combines recently proposed slanted patch matching and curvature regularization terms. The local expansion moves are presented as many <inline-formula><tex-math notation=\"LaTeX\">$\\alpha$</tex-math><alternatives> <inline-graphic xlink:href=\"taniai-ieq1-2766072.gif\"/></alternatives></inline-formula>-expansions defined for small grid regions. The local expansion moves extend traditional expansion moves by two ways: localization and spatial propagation. By localization, we use different candidate <inline-formula><tex-math notation=\"LaTeX\">$\\alpha$</tex-math> <alternatives><inline-graphic xlink:href=\"taniai-ieq2-2766072.gif\"/></alternatives></inline-formula>-labels according to the locations of local <inline-formula><tex-math notation=\"LaTeX\">$\\alpha$</tex-math><alternatives> <inline-graphic xlink:href=\"taniai-ieq3-2766072.gif\"/></alternatives></inline-formula>-expansions. By spatial propagation, we design our local <inline-formula><tex-math notation=\"LaTeX\">$\\alpha$</tex-math><alternatives> <inline-graphic xlink:href=\"taniai-ieq4-2766072.gif\"/></alternatives></inline-formula>-expansions to propagate currently assigned labels for nearby regions. With this localization and spatial propagation, our method can efficiently infer MRF models with a continuous label space using randomized search. Our method has several advantages over previous approaches that are based on fusion moves or belief propagation; it produces <italic>submodular moves </italic> deriving a <italic>subproblem optimality</italic>; it helps find good, smooth, piecewise linear disparity maps; it is suitable for parallelization; it can use cost-volume filtering techniques for accelerating the matching cost computations. Even using a simple pairwise MRF, our method is shown to have best performance in the Middlebury stereo benchmark V2 and V3.",
"title": ""
},
{
"docid": "neg:1840410_15",
"text": "The ransomware nightmare is taking over the internet impacting common users, small businesses and large ones. The interest and investment which are pushed into this market each month, tells us a few things about the evolution of both technical and social engineering and what to expect in the short-coming future from them. In this paper we analyze how ransomware programs developed in the last few years and how they were released in certain market segments throughout the deep web via RaaS, exploits or SPAM, while learning from their own mistakes to bring profit to the next level. We will also try to highlight some mistakes that were made, which allowed recovering the encrypted data, along with the ransomware authors preference for specific encryption types, how they got to distribute, the silent agreement between ransomwares, coin-miners and bot-nets and some edge cases of encryption, which may prove to be exploitable in the short-coming future.",
"title": ""
},
{
"docid": "neg:1840410_16",
"text": "Automotive SoCs are constantly being tested for correct functional operation, even long after they have left fabrication. The testing is done at the start of operation (car ignition) and repeatedly during operation (during the drive) to check for faults. Faults can result from, but are not restricted to, a failure in a part of a semiconductor circuit such as a failed transistor, interconnect failure due to electromigration, or faults caused by soft errors (e.g., an alpha particle switching a bit in a RAM or other circuit element). While the tests can run long after the chip was taped-out, the safety definition and test plan effort is starting as early as the specification definitions. In this paper we give an introduction to functional safety concentrating on the ISO26262 standard and we touch on a couple of approaches to functional safety for an Intellectual Property (IP) part such as a microprocessor, including software self-test libraries and logic BIST. We discuss the additional effort needed for developing a design for the automotive market. Lastly, we focus on our experience of using fault grading as a method for developing a self-test library that periodically tests the circuit operation. We discuss the effect that implementation decisions have on this effort and why it is important to start with this effort early in the design process.",
"title": ""
},
{
"docid": "neg:1840410_17",
"text": "The axial magnetic flux leakage(MFL) inspection tools cannot reliably detect or size axially aligned cracks, such as SCC, longitudinal corrosion, long seam defects, and axially oriented mechanical damage. To focus on this problem, the circumferential MFL inspection tool is introduced. The finite element (FE) model is established by adopting ANSYS software to simulate magnetostatics. The results show that the amount of flux that is diverted out of the pipe depends on the geometry of the defect, the primary variables that affect the flux leakage are the ones that define the volume of the defect. The defect location can significantly affect flux leakage, the magnetic field magnitude arising due to the presence of the defect is immersed in the high field close to the permanent magnets. These results demonstrate the feasibility of detecting narrow axial defects and the practicality of developing a circumferential MFL tool.",
"title": ""
},
{
"docid": "neg:1840410_18",
"text": "In this paper we address the problem of automated classification of isolates, i.e., the problem of determining the family of genomes to which a given genome belongs. Additionally, we address the problem of automated unsupervised hierarchical clustering of isolates according only to their statistical substring properties. For both of these problems we present novel algorithms based on nucleotide n-grams, with no required preprocessing steps such as sequence alignment. Results obtained experimentally are very positive and suggest that the proposed techniques can be successfully used in a variety of related problems. The reported experiments demonstrate better performance than some of the state-of-the-art methods. We report on a new distance measure between n-gram profiles, which shows superior performance compared to many other measures, including commonly used Euclidean distance.",
"title": ""
},
{
"docid": "neg:1840410_19",
"text": "We consider the problem of zero-shot recognition: learning a visual classifier for a category with zero training examples, just using the word embedding of the category and its relationship to other categories, which visual data are provided. The key to dealing with the unfamiliar or novel category is to transfer knowledge obtained from familiar classes to describe the unfamiliar class. In this paper, we build upon the recently introduced Graph Convolutional Network (GCN) and propose an approach that uses both semantic embeddings and the categorical relationships to predict the classifiers. Given a learned knowledge graph (KG), our approach takes as input semantic embeddings for each node (representing visual category). After a series of graph convolutions, we predict the visual classifier for each category. During training, the visual classifiers for a few categories are given to learn the GCN parameters. At test time, these filters are used to predict the visual classifiers of unseen categories. We show that our approach is robust to noise in the KG. More importantly, our approach provides significant improvement in performance compared to the current state-of-the-art results (from 2 ~ 3% on some metrics to whopping 20% on a few).",
"title": ""
}
] |
1840411 | Arbitrary-Oriented Vehicle Detection in Aerial Imagery with Single Convolutional Neural Networks | [
{
"docid": "pos:1840411_0",
"text": "Detecting vehicles in aerial imagery plays an important role in a wide range of applications. The current vehicle detection methods are mostly based on sliding-window search and handcrafted or shallow-learning-based features, having limited description capability and heavy computational costs. Recently, due to the powerful feature representations, region convolutional neural networks (CNN) based detection methods have achieved state-of-the-art performance in computer vision, especially Faster R-CNN. However, directly using it for vehicle detection in aerial images has many limitations: (1) region proposal network (RPN) in Faster R-CNN has poor performance for accurately locating small-sized vehicles, due to the relatively coarse feature maps; and (2) the classifier after RPN cannot distinguish vehicles and complex backgrounds well. In this study, an improved detection method based on Faster R-CNN is proposed in order to accomplish the two challenges mentioned above. Firstly, to improve the recall, we employ a hyper region proposal network (HRPN) to extract vehicle-like targets with a combination of hierarchical feature maps. Then, we replace the classifier after RPN by a cascade of boosted classifiers to verify the candidate regions, aiming at reducing false detection by negative example mining. We evaluate our method on the Munich vehicle dataset and the collected vehicle dataset, with improvements in accuracy and robustness compared to existing methods.",
"title": ""
}
] | [
{
"docid": "neg:1840411_0",
"text": "Fast adaptation of deep neural networks (DNN) is an important research topic in deep learning. In this paper, we have proposed a general adaptation scheme for DNN based on discriminant condition codes, which are directly fed to various layers of a pre-trained DNN through a new set of connection weights. Moreover, we present several training methods to learn connection weights from training data as well as the corresponding adaptation methods to learn new condition code from adaptation data for each new test condition. In this work, the fast adaptation scheme is applied to supervised speaker adaptation in speech recognition based on either frame-level cross-entropy or sequence-level maximum mutual information training criterion. We have proposed three different ways to apply this adaptation scheme based on the so-called speaker codes: i) Nonlinear feature normalization in feature space; ii) Direct model adaptation of DNN based on speaker codes; iii) Joint speaker adaptive training with speaker codes. We have evaluated the proposed adaptation methods in two standard speech recognition tasks, namely TIMIT phone recognition and large vocabulary speech recognition in the Switchboard task. Experimental results have shown that all three methods are quite effective to adapt large DNN models using only a small amount of adaptation data. For example, the Switchboard results have shown that the proposed speaker-code-based adaptation methods may achieve up to 8-10% relative error reduction using only a few dozens of adaptation utterances per speaker. Finally, we have achieved very good performance in Switchboard (12.1% in WER) after speaker adaptation using sequence training criterion, which is very close to the best performance reported in this task (\"Deep convolutional neural networks for LVCSR,\" T. N. Sainath et al., Proc. IEEE Acoust., Speech, Signal Process., 2013).",
"title": ""
},
{
"docid": "neg:1840411_1",
"text": "Nowadays, operational quality and robustness of cellular networks are among the hottest topics wireless communications research. As a response to a growing need in reduction of expenses for mobile operators, 3rd Generation Partnership Project (3GPP) initiated work on Minimization of Drive Tests (MDT). There are several major areas of standardization related to MDT, such as coverage, capacity, mobility optimization and verification of end user quality [1]. This paper presents results of the research devoted to Quality of Service (QoS) verification for MDT. The main idea is to jointly observe the user experienced QoS in terms of throughput, and corresponding radio conditions. Also the necessity to supplement the existing MDT metrics with the new reporting types is elaborated.",
"title": ""
},
{
"docid": "neg:1840411_2",
"text": "The Evolution of Cognitive Bias Despite widespread claims to the contrary, the human mind is not worse than rational… but may often be better than rational. On the surface, cognitive biases appear to be somewhat puzzling when viewed through an evolutionary lens. Because they depart from standards of logic and accuracy, they appear to be design flaws instead of examples of good engineering. Cognitive traits can be evaluated according to any number of performance criteria-logical sufficiency, accuracy, speed of processing, and so on. The value of a criterion depends on the question the scientist is asking. To the evolutionary psychologist, however, the evaluative task is not whether the cognitive feature is accurate or logical, but rather how well it solves a particular problem, and how solving this problem contributed to fitness ancestrally. Viewed in this way, if a cognitive bias positively impacted fitness it is not a design flaw – it is a design feature. This chapter discusses the many biases that are probably not the result of mere constraints on the design of the mind or other mysterious irrationalities, but rather are adaptations that can be studied and better understood from an evolutionary perspective. By cognitive bias, we mean cases in which human cognition reliably produces representations that are systematically distorted compared to some aspect of objective reality. We note that the term bias is used in the literature in a number of different ways (see, We do not seek to make commitments about these definitions here; rather, we use bias throughout this chapter in the relatively noncommittal sense defined above. An evolutionary psychological perspective predicts that the mind is equipped with function-specific mechanisms adapted for special purposes—mechanisms with special design for Cognitive Bias-3 solving problems such as mating, which are separate, at least in part, from those involved in solving problems of food choice, predator avoidance, and social exchange (e. demonstrating domain specificity in solving a particular problem is a part of building a case that the trait has been shaped by selection to perform that function. The evolved function of the eye, for instance, is to facilitate sight because it does this well (it exhibits proficiency), the features of the eye have the common and unique effect of facilitating sight (it exhibits specificity), and there are no plausible alternative hypotheses that account for the eye's features. Some design features that appear to be flaws when viewed in …",
"title": ""
},
{
"docid": "neg:1840411_3",
"text": "PaaS vendors face challenges in efficiently providing services with the growth of their offerings. In this paper, we explore how PaaS vendors are using containers as a means of hosting Apps. The paper starts with a discussion of PaaS Use case and the current adoption of Container based PaaS architectures with the existing vendors. We explore various container implementations - Linux Containers, Docker, Warden Container, lmctfy and OpenVZ. We look at how each of this implementation handle Process, FileSystem and Namespace isolation. We look at some of the unique features of each container and how some of them reuse base Linux Container implementation or differ from it. We also explore how IaaSlayer itself has started providing support for container lifecycle management along with Virtual Machines. In the end, we look at factors affecting container implementation choices and some of the features missing from the existing implementations for the next generation PaaS.",
"title": ""
},
{
"docid": "neg:1840411_4",
"text": "To realize the vision of Internet-of-Things (IoT), numerous IoT devices have been developed for improving daily lives, in which smart home devices are among the most popular ones. Smart locks rely on smartphones to ease the burden of physical key management and keep tracking the door opening/close status, the security of which have aroused great interests from the security community. As security is of utmost importance for the IoT environment, we try to investigate the security of IoT by examining smart lock security. Specifically, we focus on analyzing the security of August smart lock. The threat models are illustrated for attacking August smart lock. We then demonstrate several practical attacks based on the threat models toward August smart lock including handshake key leakage, owner account leakage, personal information leakage, and denial-of-service (DoS) attacks. We also propose the corresponding defense methods to counteract these attacks.",
"title": ""
},
{
"docid": "neg:1840411_5",
"text": "This paper presents a robust approach for road marking detection and recognition from images captured by an embedded camera mounted on a car. Our method is designed to cope with illumination changes, shadows, and harsh meteorological conditions. Furthermore, the algorithm can effectively group complex multi-symbol shapes into an individual road marking. For this purpose, the proposed technique relies on MSER features to obtain candidate regions which are further merged using density-based clustering. Finally, these regions of interest are recognized using machine learning approaches. Worth noting, the algorithm is versatile since it does not utilize any prior information about lane position or road space. The proposed method compares favorably to other existing works through a large number of experiments on an extensive road marking dataset.",
"title": ""
},
{
"docid": "neg:1840411_6",
"text": "With increasing volumes in data and more sophisticated Machine Learning algorithms, the demand for fast and energy efficient computation systems is also growing. The combination of classical CPU systems with more specialized hardware such as FPGAs offer one way to meet this demand. FPGAs are fast and energy efficient reconfigurable hardware devices allowing new design explorations for algorithms and their implementations. This report briefly discusses FPGAs as computational hardware and their application in the domain of Machine Learning, specifically in combination with Gaussian Processes.",
"title": ""
},
{
"docid": "neg:1840411_7",
"text": "Multi-level marketing is a marketing approach that motivates its participants to promote a certain product among their friends. The popularity of this approach increases due to the accessibility of modern social networks, however, it existed in one form or the other long before the Internet age began (the infamous Pyramid scheme that dates back at least a century is in fact a special case of multi-level marketing). This paper lays foundations for the study of reward mechanisms in multi-level marketing within social networks. We provide a set of desired properties for such mechanisms and show that they are uniquely satisfied by geometric reward mechanisms. The resilience of mechanisms to false-name manipulations is also considered; while geometric reward mechanisms fail against such manipulations, we exhibit other mechanisms which are false-name-proof.",
"title": ""
},
{
"docid": "neg:1840411_8",
"text": "Casticin, a polymethoxyflavone occurring in natural plants, has been shown to have anticancer activities. In the present study, we aims to investigate the anti-skin cancer activity of casticin on melanoma cells in vitro and the antitumor effect of casticin on human melanoma xenografts in nu/nu mice in vivo. A flow cytometric assay was performed to detect expression of viable cells, cell cycles, reactive oxygen species production, levels of [Formula: see text] and caspase activity. A Western blotting assay and confocal laser microscope examination were performed to detect expression of protein levels. In the in vitro studies, we found that casticin induced morphological cell changes and DNA condensation and damage, decreased the total viable cells, and induced G2/M phase arrest. Casticin promoted reactive oxygen species (ROS) production, decreased the level of [Formula: see text], and promoted caspase-3 activities in A375.S2 cells. The induced G2/M phase arrest indicated by the Western blotting assay showed that casticin promoted the expression of p53, p21 and CHK-1 proteins and inhibited the protein levels of Cdc25c, CDK-1, Cyclin A and B. The casticin-induced apoptosis indicated that casticin promoted pro-apoptotic proteins but inhibited anti-apoptotic proteins. These findings also were confirmed by the fact that casticin promoted the release of AIF and Endo G from mitochondria to cytosol. An electrophoretic mobility shift assay (EMSA) assay showed that casticin inhibited the NF-[Formula: see text]B binding DNA and that these effects were time-dependent. In the in vivo studies, results from immuno-deficient nu/nu mice bearing the A375.S2 tumor xenograft indicated that casticin significantly suppressed tumor growth based on tumor size and weight decreases. Early G2/M arrest and mitochondria-dependent signaling contributed to the apoptotic A375.S2 cell demise induced by casticin. In in vivo experiments, A375.S2 also efficaciously suppressed tumor volume in a xenotransplantation model. Therefore, casticin might be a potential therapeutic agent for the treatment of skin cancer in the future.",
"title": ""
},
{
"docid": "neg:1840411_9",
"text": "OBJECTIVE\nTo assess whether frequent marijuana use is associated with residual neuropsychological effects.\n\n\nDESIGN\nSingle-blind comparison of regular users vs infrequent users of marijuana.\n\n\nPARTICIPANTS\nTwo samples of college undergraduates: 65 heavy users, who had smoked marijuana a median of 29 days in the last 30 days (range, 22 to 30 days) and who also displayed cannabinoids in their urine, and 64 light users, who had smoked a median of 1 day in the last 30 days (range, 0 to 9 days) and who displayed no urinary cannabinoids.\n\n\nINTERVENTION\nSubjects arrived at 2 PM on day 1 of their study visit, then remained at our center overnight under supervision. Neuropsychological tests were administered to all subjects starting at 9 AM on day 2. Thus, all subjects were abstinent from marijuana and other drugs for a minimum of 19 hours before testing.\n\n\nMAIN OUTCOME MEASURES\nSubjects received a battery of standard neuropsychological tests to assess general intellectual functioning, abstraction ability, sustained attention, verbal fluency, and ability to learn and recall new verbal and visuospatial information.\n\n\nRESULTS\nHeavy users displayed significantly greater impairment than light users on attention/executive functions, as evidenced particularly by greater perseverations on card sorting and reduced learning of word lists. These differences remained after controlling for potential confounding variables, such as estimated levels of premorbid cognitive functioning, and for use of alcohol and other substances in the two groups.\n\n\nCONCLUSIONS\nHeavy marijuana use is associated with residual neuropsychological effects even after a day of supervised abstinence from the drug. However, the question remains open as to whether this impairment is due to a residue of drug in the brain, a withdrawal effect from the drug, or a frank neurotoxic effect of the drug. from marijuana",
"title": ""
},
{
"docid": "neg:1840411_10",
"text": "uted to the discovery and characterization of new materials. The discovery of semiconductors laid the foundation for modern electronics, while the formulation of new molecules allows us to treat diseases previously thought incurable. Looking into the future, some of the largest problems facing humanity now are likely to be solved by the discovery of new materials. In this article, we explore the techniques materials scientists are using and show how our novel artificial intelligence system, Phase-Mapper, allows materials scientists to quickly solve material systems to infer their underlying crystal structures and has led to the discovery of new solar light absorbers. Articles",
"title": ""
},
{
"docid": "neg:1840411_11",
"text": "This study explores the role of speech register and prosody for the task of word segmentation. Since these two factors are thought to play an important role in early language acquisition, we aim to quantify their contribution for this task. We study a Japanese corpus containing both infantand adult-directed speech and we apply four different word segmentation models, with and without knowledge of prosodic boundaries. The results showed that the difference between registers is smaller than previously reported and that prosodic boundary information helps more adultthan infant-directed speech.",
"title": ""
},
{
"docid": "neg:1840411_12",
"text": "Scenario-based specifications such as Message Sequence Charts (MSCs) are useful as part of a requirements specification. A scenario is a partial story, describing how system components, the environment, and users work concurrently and interact in order to provide system level functionality. Scenarios need to be combined to provide a more complete description of system behavior. Consequently, scenario synthesis is central to the effective use of scenario descriptions. How should a set of scenarios be interpreted? How do they relate to one another? What is the underlying semantics? What assumptions are made when synthesizing behavior models from multiple scenarios? In this paper, we present an approach to scenario synthesis based on a clear sound semantics, which can support and integrate many of the existing approaches to scenario synthesis. The contributions of the paper are threefold. We first define an MSC language with sound abstract semantics in terms of labeled transition systems and parallel composition. The language integrates existing approaches based on scenario composition by using high-level MSCs (hMSCs) and those based on state identification by introducing explicit component state labeling. This combination allows stakeholders to break up scenario specifications into manageable parts and reuse scenarios using hMCSs; it also allows them to introduce additional domainspecific information and general assumptions explicitly into the scenario specification using state labels. Second, we provide a sound synthesis algorithm which translates scenarios into a behavioral specification in the form of Finite Sequential Processes. This specification can be analyzed with the Labeled Transition System Analyzer using model checking and animation. Finally, we demonstrate how many of the assumptions embedded in existing synthesis approaches can be made explicit and modeled in our approach. Thus, we provide the basis for a common approach to scenario-based specification, synthesis, and analysis.",
"title": ""
},
{
"docid": "neg:1840411_13",
"text": "By asking users of career-oriented social networking sites I investigated their job search behavior. For further IS-theorizing I integrated the number of a user's contacts as an own construct into Venkatesh's et al. UTAUT2 model, which substantially rose its predictive quality from 19.0 percent to 80.5 percent concerning the variance of job search success. Besides other interesting results I found a substantial negative relationship between the number of contacts and job search success, which supports the experience of practitioners but contradicts scholarly findings. The results are useful for scholars and practitioners.",
"title": ""
},
{
"docid": "neg:1840411_14",
"text": "In this paper, a simple algorithm for detecting the range and shape of tumor in brain MR Images is described. Generally, CT scan or MRI that is directed into intracranial cavity produces a complete image of brain. This image is visually examined by the physician for detection and diagnosis of brain tumor. To avoid that, this project uses computer aided method for segmentation (detection) of brain tumor based on the combination of two algorithms. This method allows the segmentation of tumor tissue with accuracy and reproducibility comparable to manual segmentation. In addition, it also reduces the time for analysis. At the end of the process the tumor is extracted from the MR image and its exact position and the shape also determined. The stage of the tumor is displayed based on the amount of area calculated from the cluster.",
"title": ""
},
{
"docid": "neg:1840411_15",
"text": "Light scattering and color change are two major sources of distortion for underwater photography. Light scattering is caused by light incident on objects reflected and deflected multiple times by particles present in the water before reaching the camera. This in turn lowers the visibility and contrast of the image captured. Color change corresponds to the varying degrees of attenuation encountered by light traveling in the water with different wavelengths, rendering ambient underwater environments dominated by a bluish tone. No existing underwater processing techniques can handle light scattering and color change distortions suffered by underwater images, and the possible presence of artificial lighting simultaneously. This paper proposes a novel systematic approach to enhance underwater images by a dehazing algorithm, to compensate the attenuation discrepancy along the propagation path, and to take the influence of the possible presence of an artifical light source into consideration. Once the depth map, i.e., distances between the objects and the camera, is estimated, the foreground and background within a scene are segmented. The light intensities of foreground and background are compared to determine whether an artificial light source is employed during the image capturing process. After compensating the effect of artifical light, the haze phenomenon and discrepancy in wavelength attenuation along the underwater propagation path to camera are corrected. Next, the water depth in the image scene is estimated according to the residual energy ratios of different color channels existing in the background light. Based on the amount of attenuation corresponding to each light wavelength, color change compensation is conducted to restore color balance. The performance of the proposed algorithm for wavelength compensation and image dehazing (WCID) is evaluated both objectively and subjectively by utilizing ground-truth color patches and video downloaded from the Youtube website. Both results demonstrate that images with significantly enhanced visibility and superior color fidelity are obtained by the WCID proposed.",
"title": ""
},
{
"docid": "neg:1840411_16",
"text": "Mosquitoes represent the major arthropod vectors of human disease worldwide transmitting malaria, lymphatic filariasis, and arboviruses such as dengue virus and Zika virus. Unfortunately, no treatment (in the form of vaccines or drugs) is available for most of these diseases andvectorcontrolisstillthemainformofprevention. Thelimitationsoftraditionalinsecticide-based strategies, particularly the development of insecticide resistance, have resulted in significant efforts to develop alternative eco-friendly methods. Biocontrol strategies aim to be sustainable and target a range of different mosquito species to reduce the current reliance on insecticide-based mosquito control. In thisreview, weoutline non-insecticide basedstrategiesthat havebeenimplemented orare currently being tested. We also highlight the use of mosquito behavioural knowledge that can be exploited for control strategies.",
"title": ""
},
{
"docid": "neg:1840411_17",
"text": "The initial focus of recombinant protein production by filamentous fungi related to exploiting the extraordinary extracellular enzyme synthesis and secretion machinery of industrial strains, including Aspergillus, Trichoderma, Penicillium and Rhizopus species, was to produce single recombinant protein products. An early recognized disadvantage of filamentous fungi as hosts of recombinant proteins was their common ability to produce homologous proteases which could degrade the heterologous protein product and strategies to prevent proteolysis have met with some limited success. It was also recognized that the protein glycosylation patterns in filamentous fungi and in mammals were quite different, such that filamentous fungi are likely not to be the most suitable microbial hosts for production of recombinant human glycoproteins for therapeutic use. By combining the experience gained from production of single recombinant proteins with new scientific information being generated through genomics and proteomics research, biotechnologists are now poised to extend the biomanufacturing capabilities of recombinant filamentous fungi by enabling them to express genes encoding multiple proteins, including, for example, new biosynthetic pathways for production of new primary or secondary metabolites. It is recognized that filamentous fungi, most species of which have not yet been isolated, represent an enormously diverse source of novel biosynthetic pathways, and that the natural fungal host harboring a valuable biosynthesis pathway may often not be the most suitable organism for biomanufacture purposes. Hence it is expected that substantial effort will be directed to transforming other fungal hosts, non-fungal microbial hosts and indeed non microbial hosts to express some of these novel biosynthetic pathways. But future applications of recombinant expression of proteins will not be confined to biomanufacturing. Opportunities to exploit recombinant technology to unravel the causes of the deleterious impacts of fungi, for example as human, mammalian and plant pathogens, and then to bring forward solutions, is expected to represent a very important future focus of fungal recombinant protein technology.",
"title": ""
},
{
"docid": "neg:1840411_18",
"text": "This paper describes an underwater sensor network with dual communication and support for sensing and mobility. The nodes in the system are connected acoustically for broadcast communication using an acoustic modem we developed. For higher point to point communication speed the nodes are networked optically using custom built optical modems. We describe the hardware details of the underwater sensor node and the communication and networking protocols. Finally, we present and discuss the results from experiments with this system.",
"title": ""
},
{
"docid": "neg:1840411_19",
"text": "OBJECTIVE\nThe Montreal Cognitive Assessment (MoCA; Nasreddine et al., 2005) is a cognitive screening tool that aims to differentiate healthy cognitive aging from Mild Cognitive Impairment (MCI). Several validation studies have been conducted on the MoCA, in a variety of clinical populations. Some studies have indicated that the originally suggested cutoff score of 26/30 leads to an inflated rate of false positives, particularly for those of older age and/or lower education. We conducted a systematic review and meta-analysis of the literature to determine the diagnostic accuracy of the MoCA for differentiating healthy cognitive aging from possible MCI.\n\n\nMETHODS\nOf the 304 studies identified, nine met inclusion criteria for the meta-analysis. These studies were assessed across a range of cutoff scores to determine the respective sensitivities, specificities, positive and negative predictive accuracies, likelihood ratios for positive and negative results, classification accuracies, and Youden indices.\n\n\nRESULTS\nMeta-analysis revealed a cutoff score of 23/30 yielded the best diagnostic accuracy across a range of parameters.\n\n\nCONCLUSIONS\nA MoCA cutoff score of 23, rather than the initially recommended score of 26, lowers the false positive rate and shows overall better diagnostic accuracy. We recommend the use of this cutoff score going forward. Copyright © 2017 John Wiley & Sons, Ltd.",
"title": ""
}
] |
1840412 | Mobile bin picking with an anthropomorphic service robot | [
{
"docid": "pos:1840412_0",
"text": "Assistive mobile robots that autonomously manipulate objects within everyday settings have the potential to improve the lives of the elderly, injured, and disabled. Within this paper, we present the most recent version of the assistive mobile manipulator EL-E with a focus on the subsystem that enables the robot to retrieve objects from and deliver objects to flat surfaces. Once provided with a 3D location via brief illumination with a laser pointer, the robot autonomously approaches the location and then either grasps the nearest object or places an object. We describe our implementation in detail, while highlighting design principles and themes, including the use of specialized behaviors, task-relevant features, and low-dimensional representations. We also present evaluations of EL-E’s performance relative to common forms of variation. We tested EL-E’s ability to approach and grasp objects from the 25 object categories that were ranked most important for robotic retrieval by motor-impaired patients from the Emory ALS Center. Although reliability varied, EL-E succeeded at least once with objects from 21 out of 25 of these categories. EL-E also approached and grasped a cordless telephone on 12 different surfaces including floors, tables, and counter tops with 100% success. The same test using a vitamin pill (ca. 15mm ×5mm ×5mm) resulted in 58% success.",
"title": ""
},
{
"docid": "pos:1840412_1",
"text": "Unstructured human environments present a substantial challenge to effective robotic operation. Mobile manipulation in typical human environments requires dealing with novel unknown objects, cluttered workspaces, and noisy sensor data. We present an approach to mobile pick and place in such environments using a combination of 2D and 3D visual processing, tactile and proprioceptive sensor data, fast motion planning, reactive control and monitoring, and reactive grasping. We demonstrate our approach by using a two-arm mobile manipulation system to pick and place objects. Reactive components allow our system to account for uncertainty arising from noisy sensors, inaccurate perception (e.g. object detection or registration) or dynamic changes in the environment. We also present a set of tools that allow our system to be easily configured within a short time for a new robotic system.",
"title": ""
}
] | [
{
"docid": "neg:1840412_0",
"text": "PROBLEM AND METHOD\nThis paper takes a critical look at the present state of bicycle infrastructure treatment safety research, highlighting data needs. Safety literature relating to 22 bicycle treatments is examined, including findings, study methodologies, and data sources used in the studies. Some preliminary conclusions related to research efficacy are drawn from the available data and findings in the research.\n\n\nRESULTS AND DISCUSSION\nWhile the current body of bicycle safety literature points toward some defensible conclusions regarding the safety and effectiveness of certain bicycle treatments, such as bike lanes and removal of on-street parking, the vast majority treatments are still in need of rigorous research. Fundamental questions arise regarding appropriate exposure measures, crash measures, and crash data sources.\n\n\nPRACTICAL APPLICATIONS\nThis research will aid transportation departments with regard to decisions about bicycle infrastructure and guide future research efforts toward understanding safety impacts of bicycle infrastructure.",
"title": ""
},
{
"docid": "neg:1840412_1",
"text": "The objective of this paper is to present an approach to electromagnetic field simulation based on the systematic use of the global (i.e. integral) quantities. In this approach, the equations of electromagnetism are obtained directly in a finite form starting from experimental laws without resorting to the differential formulation. This finite formulation is the natural extension of the network theory to electromagnetic field and it is suitable for computational electromagnetics.",
"title": ""
},
{
"docid": "neg:1840412_2",
"text": "Autoregressive sequence models based on deep neural networks, such as RNNs, Wavenet and the Transformer attain state-of-the-art results on many tasks. However, they are difficult to parallelize and are thus slow at processing long sequences. RNNs lack parallelism both during training and decoding, while architectures like WaveNet and Transformer are much more parallelizable during training, yet still operate sequentially during decoding. We present a method to extend sequence models using discrete latent variables that makes decoding much more parallelizable. We first autoencode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from this shorter latent sequence in parallel. To this end, we introduce a novel method for constructing a sequence of discrete latent variables and compare it with previously introduced methods. Finally, we evaluate our model end-to-end on the task of neural machine translation, where it is an order of magnitude faster at decoding than comparable autoregressive models. While lower in BLEU than purely autoregressive models, our model achieves higher scores than previously proposed non-autoregressive translation models.",
"title": ""
},
{
"docid": "neg:1840412_3",
"text": "Chemistry and biology are intimately connected sciences yet the chemistry-biology interface remains problematic and central issues regarding the very essence of living systems remain unresolved. In this essay we build on a kinetic theory of replicating systems that encompasses the idea that there are two distinct kinds of stability in nature-thermodynamic stability, associated with \"regular\" chemical systems, and dynamic kinetic stability, associated with replicating systems. That fundamental distinction is utilized to bridge between chemistry and biology by demonstrating that within the parallel world of replicating systems there is a second law analogue to the second law of thermodynamics, and that Darwinian theory may, through scientific reductionism, be related to that second law analogue. Possible implications of these ideas to the origin of life problem and the relationship between chemical emergence and biological evolution are discussed.",
"title": ""
},
{
"docid": "neg:1840412_4",
"text": "Regenerative endodontics has been defined as \"biologically based procedure designed to replace damaged structures, including dentin and root structures, as well as cells of the pulp-dentin complex.\" This is an exciting and rapidly evolving field of human endodontics for the treatment of immature permanent teeth with infected root canal systems. These procedures have shown to be able not only to resolve pain and apical periodontitis but continued root development, thus increasing the thickness and strength of the previously thin and fracture-prone roots. In the last decade, over 80 case reports, numerous animal studies, and series of regenerative endodontic cases have been published. However, even with multiple successful case reports, there are still some remaining questions regarding terminology, patient selection, and procedural details. Regenerative endodontics provides the hope of converting a nonvital tooth into vital one once again.",
"title": ""
},
{
"docid": "neg:1840412_5",
"text": "Prior to the start of cross-sex hormone therapy (CSH), androgenic progestins are often used to induce amenorrhea in female to male (FtM) pubertal adolescents with gender dysphoria (GD). The aim of this single-center study is to report changes in anthropometry, side effects, safety parameters, and hormone levels in a relatively large cohort of FtM adolescents with a diagnosis of GD at Tanner stage B4 or further, who were treated with lynestrenol (Orgametril®) monotherapy and in combination with testosterone esters (Sustanon®). A retrospective analysis of clinical and biochemical data obtained during at least 6 months of hormonal treatment in FtM adolescents followed at our adolescent gender clinic since 2010 (n = 45) was conducted. McNemar’s test to analyze reported side effects over time was performed. A paired Student’s t test or a Wilcoxon signed-ranks test was performed, as appropriate, on anthropometric and biochemical data. For biochemical analyses, all statistical tests were done in comparison with baseline parameters. Patients who were using oral contraceptives (OC) at intake were excluded if a Mann-Whitney U test indicated influence of OC. Metrorrhagia and acne were most pronounced during the first months of monotherapy and combination therapy respectively and decreased thereafter. Headaches, hot flushes, and fatigue were the most reported side effects. Over the course of treatment, an increase in musculature, hemoglobin, hematocrit, creatinine, and liver enzymes was seen, progressively sliding into male reference ranges. Lipid metabolism shifted to an unfavorable high-density lipoprotein (HDL)/low-density lipoprotein (LDL) ratio; glucose metabolism was not affected. Sex hormone-binding globulin (SHBG), total testosterone, and estradiol levels decreased, and free testosterone slightly increased during monotherapy; total and free testosterone increased significantly during combination therapy. Gonadotropins were only fully suppressed during combination therapy. Anti-Müllerian hormone (AMH) remained stable throughout the treatment. Changes occurred in the first 6 months of treatment and remained mostly stable thereafter. Treatment of FtM gender dysphoric adolescents with lynestrenol monotherapy and in combination with testosterone esters is effective, safe, and inexpensive; however, suppression of gonadotropins is incomplete. Regular blood controls allow screening for unphysiological changes in safety parameters or hormonal levels and for medication abuse.",
"title": ""
},
{
"docid": "neg:1840412_6",
"text": "Swarms of robots will revolutionize many industrial applications, from targeted material delivery to precision farming. However, several of the heterogeneous characteristics that make them ideal for certain future applications — robot autonomy, decentralized control, collective emergent behavior, etc. — hinder the evolution of the technology from academic institutions to real-world problems. Blockchain, an emerging technology originated in the Bitcoin field, demonstrates that by combining peer-topeer networks with cryptographic algorithms a group of agents can reach an agreement on a particular state of affairs and record that agreement without the need for a controlling authority. The combination of blockchain with other distributed systems, such as robotic swarm systems, can provide the necessary capabilities to make robotic swarm operations more secure, autonomous, flexible and even profitable. This work explains how blockchain technology can provide innovative solutions to four emergent issues in the swarm robotics research field. New security, decision making, behavior differentiation and business models for swarm robotic systems are described by providing case scenarios and examples. Finally, limitations and possible future problems that arise from the combination of these two technologies are described. I. THE BLOCKCHAIN: A DISRUPTIVE",
"title": ""
},
{
"docid": "neg:1840412_7",
"text": "Automatic License Plate Recognition (ALPR) systems capture a vehicle‟s license plate and recognize the license number and other required information from the captured image. ALPR systems have numbers of significant applications: law enforcement, public safety agencies, toll gate systems, etc. The goal of these systems is to recognize the characters and state on the license plate with high accuracy. ALPR has been implemented using various techniques. Traditional recognition methods use handcrafted features for obtaining features from the image. Unlike conventional methods, deep learning techniques automatically select features and are one of the game changing technologies in the field of computer vision, automatic recognition tasks and natural language processing. Some of the most successful deep learning methods involve Convolutional Neural Networks. This technique applies deep learning techniques to the ALPR problem of recognizing the state and license number from the USA license plate. Existing ALPR systems include three stages of",
"title": ""
},
{
"docid": "neg:1840412_8",
"text": "Deep neural networks are being used increasingly to automate data analysis and decision making, yet their decision-making process is largely unclear and is difficult to explain to the end users. In this paper, we address the problem of Explainable AI for deep neural networks that take images as input and output a class probability. We propose an approach called RISE that generates an importance map indicating how salient each pixel is for the model’s prediction. In contrast to white-box approaches that estimate pixel importance using gradients or other internal network state, RISE works on blackbox models. It estimates importance empirically by probing the model with randomly masked versions of the input image and obtaining the corresponding outputs. We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments. Extensive experiments on several benchmark datasets show that our approach matches or exceeds the performance of other methods, including white-box approaches.",
"title": ""
},
{
"docid": "neg:1840412_9",
"text": "Recently, there has been a great attention to develop feature selection methods on the microarray high dimensional datasets. In this paper, an innovative method based on Maximum Relevancy and Minimum Redundancy (MRMR) approach by using Hesitant Fuzzy Sets (HFSs) is proposed to deal with feature subset selection; the method is called MRMR-HFS. MRMR-HFS is a novel filterbased feature selection algorithm that selects features by ensemble of ranking algorithms (as the measure of feature-class relevancy that must be maximized) and similarity measures (as the measure of feature-feature redundancy that must be minimized). The combination of ranking algorithms and similarity measures are done by using the fundamental concepts of information energies of HFSs. The proposed method has been inspired from Correlation based Feature Selection (CFS) within the sequential forward search in order to present a robust feature selection tool to solve high dimensional problems. To evaluate the effectiveness of the MRMR-HFS, several experimental results are carried out on nine well-known microarray high dimensional datasets. The obtained results are compared with those of other similar state-of-the-art algorithms including Correlation-based Feature Selection (CFS), Fast Correlation-based Filter (FCBF), Intract (INT), and Maximum Relevancy Minimum Redundancy (MRMR). The outcomes of comparison carried out via some non-parametric statistical tests confirm that the MRMR-HFS is effective for feature subset selection in high dimensional datasets in terms of accuracy, sensitivity, specificity, G-mean, and number of selected features.",
"title": ""
},
{
"docid": "neg:1840412_10",
"text": "Faced with changing markets and tougher competition, more and more companies realize that to compete effectively they must transform how they function. But while senior managers understand the necessity of change, they often misunderstand what it takes to bring it about. They assume that corporate renewal is the product of company-wide change programs and that in order to transform employee behavior, they must alter a company's formal structure and systems. Both these assumptions are wrong, say these authors. Using examples drawn from their four-year study of organizational change at six large corporations, they argue that change programs are, in fact, the greatest obstacle to successful revitalization and that formal structures and systems are the last thing a company should change, not the first. The most successful change efforts begin at the periphery of a corporation, in a single plant or division. Such efforts are led by general managers, not the CEO or corporate staff people. And these general managers concentrate not on changing formal structures and systems but on creating ad hoc organizational arrangements to solve concrete business problems. This focuses energy for change on the work itself, not on abstractions such as \"participation\" or \"culture.\" Once general managers understand the importance of this grass-roots approach to change, they don't have to wait for senior management to start a process of corporate renewal. The authors describe a six-step change process they call the \"critical path.\"",
"title": ""
},
{
"docid": "neg:1840412_11",
"text": "Autonomous vehicles require a reliable perception of their environment to operate in real-world conditions. Awareness of moving objects is one of the key components for the perception of the environment. This paper proposes a method for detection and tracking of moving objects (DATMO) in dynamic environments surrounding a moving road vehicle equipped with a Velodyne laser scanner and GPS/IMU localization system. First, at every time step, a local 2.5D grid is built using the last sets of sensor measurements. Along time, the generated grids combined with localization data are integrated into an environment model called local 2.5D map. In every frame, a 2.5D grid is compared with an updated 2.5D map to compute a 2.5D motion grid. A mechanism based on spatial properties is presented to suppress false detections that are due to small localization errors. Next, the 2.5D motion grid is post-processed to provide an object level representation of the scene. The detected moving objects are tracked over time by applying data association and Kalman filtering. The experiments conducted on different sequences from KITTI dataset showed promising results, demonstrating the applicability of the proposed method.",
"title": ""
},
{
"docid": "neg:1840412_12",
"text": "Many applications of unmanned aerial vehicles (UAVs) require the capability to navigate to some goal and to perform precise and safe landing. In this paper, we present a visual navigation system as an alternative pose estimation method for environments and situations in which GPS is unavailable. The developed visual odometer is an incremental procedure that estimates the vehicle's ego-motion by extracting and tracking visual features, using an onboard camera. For more robustness and accuracy, the visual estimates are fused with measurements from an Inertial Measurement Unit (IMU) and a Pressure Sensor Altimeter (PSA) in order to provide accurate estimates of the vehicle's height, velocity and position relative to a given location. These estimates are then exploited by a nonlinear hierarchical controller for achieving various navigation tasks such as take-off, landing, hovering, target tracking, etc. In addition to the odometer description, the paper presents validation results from autonomous flights using a small quadrotor UAV.",
"title": ""
},
{
"docid": "neg:1840412_13",
"text": "Image features such as step edges, lines and Mach bands all give rise to points where the Fourier components of the image are maximally in phase. The use of phase congruency for marking features has signiicant advantages over gradient based methods. It is a dimension-less quantity that is invariant to changes in image brightness or contrast, hence it provides an absolute measure of the signiicance of feature points. This allows the use of universal threshold values that can be applied over wide classes of images. This paper presents a new way of calculating phase congruency through the use of wavelets. The existing theory that has been developed for 1D signals is extended to allow the calculation of phase congruency in 2D images. It is shown that for good localization it is important to consider the spread of frequencies present at a point of phase congruency. An eeective method for identifying, and compensating for, the level of noise in an image is presented. Finally, it is argued that high-pass ltering should be used to obtain image information at diierent scales. With this approach the choice of scale only aaects the relative signiicance of features without degrading their localization. Abstract Image features such as step edges, lines and Mach bands all give rise to points where the Fourier components of the image are maximally in phase. The use of phase congruency for marking features has signiicant advantages over gradient based methods. It is a dimensionless quantity that is invariant to changes in image brightness or contrast, hence it provides an absolute measure of the signiicance of feature points. This allows the use of universal threshold values that can be applied over wide classes of images. This paper presents a new way of calculating phase congruency through the use of wavelets. The existing theory that has been developed for 1D signals is extended to allow the calculation of phase congruency in 2D images. It is shown that for good localization it is important to consider the spread of frequencies present at a point of phase congruency. An eeective method for identifying, and compensating for, the level of noise in an image is presented. Finally, it is argued that high-pass ltering should be used to obtain image information at diierent scales. With this approach the choice of scale only aaects the relative signiicance of features without degrading their localization.",
"title": ""
},
{
"docid": "neg:1840412_14",
"text": "This essay develops the philosophical foundations for design research in the Technology of Information Systems (TIS). Traditional writings on philosophy of science cannot fully describe this mode of research, which dares to intervene and improve to realize alternative futures instead of explaining or interpreting the past to discover truth. Accordingly, in addition to philosophy of science, the essay draws on writings about the act of designing, philosophy of technology and the substantive (IS) discipline. I define design research in TIS as in(ter)vention in the representational world defined by the hierarchy of concerns following semiotics. The complementary nature of the representational (internal) and real (external) environments provides the basis to articulate the dual ontological and epistemological bases. Understanding design research in TIS in this manner suggests operational principles in the internal world as the form of knowledge created by design researchers, and artifacts that embody these are seen as situated instantiations of normative theories that affect the external phenomena of interest. Throughout the paper, multiple examples illustrate the arguments. Finally, I position the resulting ‘method’ for design research vis-à-vis existing research methods and argue for its legitimacy as a viable candidate for research in the IS discipline.",
"title": ""
},
{
"docid": "neg:1840412_15",
"text": "Motivated by formation control of multiple non-holonomic mobile robots, this paper presents a trajectory tracking control scheme design for nonholonomic mobile robots that are equipped with low-level linear and angular velocities control systems. The design includes a nonlinear kinematic trajectory tracking control law and a tracking control gains selection method that provide a means to implement the nonlinear tracking control law systematically based on the dynamic control performance of the robot's low-level control systems. In addition, the proposed scheme, by design, enables the mobile robot to execute reference trajectories that are represented by time-parameterized waypoints. This feature provides the scheme a generic interface with higher-level trajectory planners. The trajectory tracking control scheme is validated using an iRobot Packbot's parameteric model estimated from experimental data.",
"title": ""
},
{
"docid": "neg:1840412_16",
"text": "A plasmid is defined as a double stranded, circular DNA molecule capable of autonomous replication. By definition, plasmids do not carry genes essential for the growth of host cells under non-stressed conditions but they have systems which guarantee their autonomous replication also controlling the copy number and ensuring stable inheritance during cell division. Most of the plasmids confer positively selectable phenotypes by the presence of antimicrobial resistance genes. Plasmids evolve as an integral part of the bacterial genome, providing resistance genes that can be easily exchanged among bacteria of different origin and source by conjugation. A multidisciplinary approach is currently applied to study the acquisition and spread of antimicrobial resistance in clinically relevant bacterial pathogens and the established surveillance can be implemented by replicon typing of plasmids. Particular plasmid families are more frequently detected among Enterobacteriaceae and play a major role in the diffusion of specific resistance genes. For instance, IncFII, IncA/C, IncL/M, IncN and IncI1 plasmids carrying extended-spectrum beta-lactamase genes and acquired AmpC genes are currently considered to be \"epidemic resistance plasmids\", being worldwide detected in Enterobacteriaceae of different origin and sources. The recognition of successful plasmids is an essential first step to design intervention strategies preventing their spread.",
"title": ""
},
{
"docid": "neg:1840412_17",
"text": "In this paper, a multi-agent optimization algorithm (MAOA) is proposed for solving the resourceconstrained project scheduling problem (RCPSP). In the MAOA, multiple agents work in a grouped environment where each agent represents a feasible solution. The evolution of agents is achieved by using four main elements in the MAOA, including social behavior, autonomous behavior, self-learning, and environment adjustment. The social behavior includes the global one and the local one for performing exploration. Through the global social behavior, the leader agent in every group is guided by the global best leader. Through the local social behavior, each agent is guided by its own leader agent. Through the autonomous behavior, each agent exploits its own neighborhood. Through the self-learning, the best agent performs an intensified search to further exploit the promising region. Meanwhile, some agents perform migration among groups to adjust the environment dynamically for information sharing. The implementation of the MAOA for solving the RCPSP is presented in detail, and the effect of key parameters of the MAOA is investigated based on the Taguchi method of design of experiment. Numerical testing results are provided by using three sets of benchmarking instances. The comparisons to the existing algorithms demonstrate the effectiveness of the proposed MAOA for solving the RCPSP. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840412_18",
"text": "Scripts define knowledge about how everyday scenarios (such as going to a restaurant) are expected to unfold. One of the challenges to learning scripts is the hierarchical nature of the knowledge. For example, a suspect arrested might plead innocent or guilty, and a very different track of events is then expected to happen. To capture this type of information, we propose an autoencoder model with a latent space defined by a hierarchy of categorical variables. We utilize a recently proposed vector quantization based approach, which allows continuous embeddings to be associated with each latent variable value. This permits the decoder to softly decide what portions of the latent hierarchy to condition on by attending over the value embeddings for a given setting. Our model effectively encodes and generates scripts, outperforming a recent language modeling-based method on several standard tasks, and allowing the autoencoder model to achieve substantially lower perplexity scores compared to the previous language modelingbased method.",
"title": ""
},
{
"docid": "neg:1840412_19",
"text": "Despite the tremendous empirical success of neural models in natural language processing, many of them lack the strong intuitions that accompany classical machine learning approaches. Recently, connections have been shown between convolutional neural networks (CNNs) and weighted finite state automata (WFSAs), leading to new interpretations and insights. In this work, we show that some recurrent neural networks also share this connection to WFSAs. We characterize this connection formally, defining rational recurrences to be recurrent hidden state update functions that can be written as the Forward calculation of a finite set of WFSAs. We show that several recent neural models use rational recurrences. Our analysis provides a fresh view of these models and facilitates devising new neural architectures that draw inspiration from WFSAs. We present one such model, which performs better than two recent baselines on language modeling and text classification. Our results demonstrate that transferring intuitions from classical models like WFSAs can be an effective approach to designing and understanding neural models.",
"title": ""
}
]